In the introduction to his book Did Slavery Pay? (1971: xii), a collection of readings on the economic effects of slavery, Hugh G. J. Aitken discusses what we can learn from these texts.
He says they ‘have much to tell us about slavery, and about the plantation economy, and the South, but they have little to tell us about the black man’ (1971: xii). To get a fuller understanding of the subject, the narratives on slavery are extremely useful. They paint a vivid picture of what life was like for black men, women and children at the time.
However, it is important to keep in mind the differences between and the limitations of both these kinds of sources. Both types raise questions of bias and reliability. Everyone who gives an account of history does so with a purpose. We must carefully analyse each source and make clear what we can and cannot learn from it.
The economic accounts of slavery are presented as objective. ‘Facts’ and figures are used to analyse the profitability of slavery. Eric Williams, for example, in From Columbus to Castro: The History of the Caribbean, 1492-1969 (1970), provides a great deal of numerical data detailing the rise and fall of slavery. How accurate these figures are is an important question. The reader needs to consider how they were constructed and for what purpose. Aitken suggests that historians often picked up unexamined assertions and used them as facts.
They based their arguments on ‘stereotypes of the negro and of plantation slavery they found in earlier writings’ (1971: 2). If true, this means the conclusions of these arguments are of little value. In Capitalism and Slavery (1944) Williams discusses the ‘origins of Negro slavery’. He stresses the idea that black slavery was ‘an economic phenomenon.
Slavery was not born of racism: rather racism was a consequence of slavery’ (1944: 7). Williams supports this by telling how ‘unfree labour in the New World was brown, white, black and yellow; Catholic, Protestant and pagan’ (1944: 7). It is true that many indentured white servants were taken to the Americas to work; however, there were distinct differences between these white servants and the black slaves. White labour was bought for a temporary period; their freedom was theirs again once they had served their contracts.
Blacks, on the other hand, were bought for life. This seems to show that racist moral assumptions were being made about which types of people could acceptably be bought outright as property. Many economic accounts discuss the profitability of slavery, for the slaveholders individually and for the South’s economy, but vary in their conclusions (Aitken 1971: xii). Many writers focus on the importance of slavery for the European nations involved. Adam Smith sees slavery as the basis for the industrial revolution in Britain. However, this idea is also debatable. David Landes (1998) criticises this view, claiming that important advances that led to the industrial revolution (for example steam engines, mechanised wool spinning) developed independently of the Atlantic system. If these ideas are debatable, then we must question the way they are presented as facts.
It is also important to keep in mind that no author is completely neutral. Every writer has political beliefs that are likely to influence his or her work. For example, a writer who is anti-capitalist is likely to emphasise evidence that shows the negative effects of capitalism, while a writer who is pro-capitalist would be likely to do the opposite.
Slavery is a very emotive subject. Aitken says ‘there is a lot of suppressed guilt involved, though it is seldom permitted to show clearly through the polished academic veneer’ (1971: ix). As he says, it can be ‘amazingly difficult to filter out the ideological component’ and discover the objective truth. The economic texts tend to talk of slaves in numbers, as a commodity.
This is perhaps an echo of how they were seen at the time: a non-human mode of production. Slavery was discussed as efficient or inefficient, profitable or unprofitable. The slave narratives gave this ‘mode of production’ a voice, allowing the reader a glimpse of the human side of slavery. Many of the slave narratives share several characteristics. They describe the physical and emotional hardships their authors endured during their time as slaves. As in many narratives, Mary Prince told of her separation from her family, ‘we had not the sad satisfaction of being partners in bondage’ (in The Classic Slave Narratives 1987: 191), and of the cruelty of her masters, ‘to strip me naked – to hang me up by the wrists and lay my flesh open with the cow-skin, was an ordinary punishment for even a slight offence’ (1987: 194). They tell of their quest for freedom and their escape from slavery. Henry Louis Gates, Jr.
in the introduction to The Classic Slave Narratives states ‘the slave narratives came to resemble each other, both in their content and in their formal shape’ (1987: x). The narratives were extremely popular in America and Britain at the time, and it could be argued that some of the narratives were simply repeating what had worked for other authors, although it is equally possible that the experiences of slaves in various places were similar.
They are primary sources, eyewitness accounts of what took place, and in this way have a value that the economic texts do not. They were written by those who experienced slavery, rather than by someone studying it possibly many years later. The first slave narratives were originally published to arouse the sympathy of readers in the hope of gaining support for abolition. This should be kept in mind when reading the narratives. The political motive may have led authors to exaggerate the cruelties they suffered.
Frederick Douglass, probably the most famous writer of slave narratives, confessed to doing this. In Frederick Douglass: New Literary and Historical Essays, Sundquist tells how ‘his charges of brutality against Thomas Auld were deliberately inaccurate’. Douglass later apologised to Auld for this, admitting that his intention was to make Auld ‘a weapon with which to assail the system of slavery’ (1990: 6). Although published for abolitionist purposes, most slave narratives were very personal accounts, snapshots of lives rather than political statements. Douglass, however, ‘made the facts of his life a dramatic representation of the great political forces that constituted the nations over slavery’ (1990: 3). It is important to recognise the differences between the economic texts and the narratives on slavery to get the fullest and most accurate understanding of the subject. The economic texts tell us of the influence that the slave trade and production using slavery had on the economies of America and Europe.
They can provide us with information on the long-term consequences of slavery. The slave narratives give an insight into what life was like on a personal level. They give the reader a picture of how the master and the slave interacted. They can help us begin to understand some of the atrocities suffered by the slaves. The slave narratives arguably played a part in the abolition of slavery and so provide us with background to that process.
Both types of source provide a lot of interesting and useful information, but as with all historical sources they must be assessed for reliability. The economic accounts are secondary sources of information, so the reader must question the evidence on which the arguments and assertions rest. The slave narratives were written with political motives that may have influenced the content. With both types of writing it is important to read various accounts, and to look for both conflicting and consistent views. The reader must always keep in mind the limitations of each source and be aware of who was writing it and why.
Israel and Palestine, the Middle East
On 2 November 1917, the British Foreign Secretary Arthur Balfour signed a promise that His Majesty’s Government would aim to facilitate the establishment in Palestine of a national home for the Jewish people. The Balfour Declaration would be written into the terms of the League of Nations Mandate granted to Britain at the end of the First World War, legally obligating Britain to pursue this end in an area whose inhabitants were predominantly non-Jewish. This arrangement would bring to an end the relatively peaceful Arab-Jewish relations which had existed throughout the Middle East under Ottoman rule, precipitating a decades-long conflict which shows no signs of abating. The impossibility of establishing a Jewish national home in Palestine without prejudicing the “civil and religious rights” of the area’s Arab inhabitants was a contradiction very quickly recognised by the British administration on the ground.
The Mandate lasted until 1948, during which time enormous waves of Jewish immigration to Palestine were largely supported by the British administration. The Yishuv (the Jewish community in Palestine) increased to the point that they represented a third of the population, having accounted for less than 10% of Palestinian inhabitants prior to the Mandate period. The new settlers, aided by purchases made by the Jewish National Fund, were able to occupy much of the country’s arable land, dispossessing the Arab fellaheen of their homes and livelihoods. Driven to cities in search of work, the nascent urban Arab working class harboured a growing anti-Zionist sentiment which fuelled increasing nationalist fervour. Tensions between the two communities had turned violent as early as 1920 when riots marred the Muslim spring festival of Nebi Musa. Sporadic riots were a feature of the British Mandate period, worsening after the 1929 Wailing Wall conflagrations and culminating in the 1936-39 Arab Revolt. By this point, the British government was coming to recognise the unworkability of the Mandate and beginning to consider partition.
At the end of the Second World War, the issue was handed over to the UN, which recommended the division of the country into separate Jewish and Arab states. The partition plan was reluctantly accepted by the Jewish Agency but rejected outright by Arab leaders. The UN General Assembly adopted the resolution in November 1947, triggering the outbreak of civil war across the country. When the British left in May the following year, Jewish leaders in Palestine declared the establishment of the State of Israel, prompting an invasion by the surrounding Arab states. The ensuing war saw Israel emerge victorious, during which conflict 700,000 Palestinians were driven from their homes, becoming lifelong refugees. By the time of the 1949 armistice agreement, Israel had acquired territory far greater than what it was allocated in the UN Partition Plan of the previous year – territory it would expand upon further during the 1956 Suez Crisis, the Six-Day War of 1967 and the 1973 Yom Kippur War. In response to these latter conflicts, the UNSC adopted resolutions 242 and 338, calling for a ceasefire and the withdrawal of Israeli troops from territories occupied during the Six-Day War. These resolutions formed the basis of the Oslo Accords, signed between 1993 and 1995, in which Israel agreed to recognise the Palestinian people’s right to self-determination. Despite this relatively hopeful period, the turn of the century saw a return to violence as Israel stepped up its building of illegal settlements in the Palestinian territories (a practice adopted in earnest following the 1967 conflict). This government-led, illegal settlement activity continues to be a cause of extreme tension today and is now openly supported by the government of the United States.
Where: Israel – Palestine
Population:
– Israel: 8.7 million
– Palestine: 4.9 million
Refugees/Displaced People: 5.3 million between the West Bank, Gaza Strip, Jordan, Syria and Lebanon.
Combatants: Israel, Palestine Liberation Organisation, Hamas, Fatah, Iraq, Lebanon, Egypt, Syria, Jordan
The Key Actors
Israel declared independence in 1948 after the Palestinian leadership rejected the partition plan proposed by the UN, which they saw as a surrender of their land. As the world’s only Jewish-majority state, it has repeatedly fought violent wars against its Arab neighbours.
In the early 1900s, the region of the eastern Mediterranean, which is the centre of the conflict today, was under the control of the Ottoman Empire. The people who inhabited this region were Muslims, Christians and Jews – adherents of the three Abrahamic religions that originated in this region.
With the rise of nationalism in the decades prior to the First World War, the Muslim and Christian inhabitants of the region started developing a sense of national identity as Arab Palestinians. At the same time, Jews in Palestine and across European countries started joining the nationalist movement of Zionism, believing that Judaism is not only an ethnic religion but also a national identity that deserves its own sovereign nation. Subsequently, people of Jewish faith from all over the world began immigrating to Palestine.
After the end of the First World War, the British Empire took control over Palestine, calling it the British Mandate and allowing more Jewish immigration. The number of people of Jewish faith in Palestine had significantly increased by that point, which escalated the underlying tensions between Arab Palestinians and the Jews in the region, who identified as neither Arab nor Palestinian. At that point, the British Empire had promised the ethnic Palestinians that they would get an independent state at the expiration of the British Mandate over Palestine. At the same time, the British Empire had also promised the same thing to the World Zionist Organization – promising to create a national Jewish homeland in Palestine, in what is famously known as the Balfour Declaration. In effect, the British Mandate had played the ethnic and religious tensions of both groups against one another, promising each an independent state over the same land, which was an impossible promise to keep. The anger and betrayal felt by both sides at the time only further inflamed the underlying land tensions between the two groups. This led to very frequent, violent skirmishes between the groups, neither of which was significantly armed at the time.
When the ideology of the Nazi regime seized all state institutions in Germany, the Jewish population under the control of Nazi Germany was subjected to the Holocaust. The number of Jews fleeing the horrors of the Holocaust led to an even higher number of Jews settling in what was then referred to either as Palestine by the indigenous Muslim and Christian inhabitants, or as the British Mandate of Palestine by the international community. This continued to amplify the land tensions between the two sides, causing sectarian violence to ensue.
The UN proposed a two-state solution to stop the violence. However, the Arab states, supportive of a fully independent Palestinian state, saw the UN proposal as an act of settler-colonialism and declared war on the yet-undeclared Israeli state. The ensuing war resulted in the defeat of the Arab states. The Palestinians, fearing reprisals by the Haganah, the now fearsome and heavily armed paramilitary group that would eventually become the Israeli Defence Forces, began fleeing their land for neighbouring states such as Lebanon, Jordan and Syria.
David Ben-Gurion, the leader of the Haganah at the time and the first Israeli Prime Minister, declared the independence of the State of Israel. In the weeks and months that followed, even more ethnic Palestinians fled.
By the early 1960s, Israel’s independence had been recognized by a dozen states around the world, with recognition then swelling to much of North & South America, Africa, and Western Europe. The Palestinians never recognized Israeli control over the land that they either still lived on or had fled from, and were supported by all Arab states and the Arab League in their refusal of recognition. In the meantime, tensions continued to boil to their highest levels, and to this day they have never dissipated.
The Six-Day War changed the course of history, when a large number of Arab states declared war on the State of Israel. Although who initiated the attack is still historically contentious, Israel’s military power took the Arab states by surprise; it quickly gained the upper hand in combat and expanded its self-declared borders into the Golan Heights (Syria), the West Bank (Jordan) and the Sinai Peninsula (Egypt). The Arab states, shocked and dismayed at the outcome, as well as at the international support they were lacking, threatened and implemented an oil embargo on all states supporting or recognizing Israel, which led to most African and Asian states severing diplomatic relations with Israel.
Egypt & Syria launched what is known as the 1973 War or the Yom Kippur War to retake the Golan Heights and the Sinai Peninsula from the Israeli state. Egypt & Syria were supported by a number of other Arab states, leading to early successes and advances in the war. Israel eventually repelled these advances and began to advance onto Egyptian and Syrian territory itself, leading to a ceasefire being called. The Arab states directly embargoed the United States for its role in heavily assisting the Israeli side by resupplying ammunition during the war.
As both sides began to realize that they might never be militarily ahead of the other, the Camp David Peace Accords were signed by Egypt & Israel, formally creating a peace treaty between Egypt and Israel. The peace treaty was, and is, extremely controversial, with an overwhelming majority of Egyptians believing that it was a mistake. However, the Sinai Peninsula was returned to Egypt as a result of the Peace Accords. The Camp David Peace Accords started a new era in Arab-Israeli relations and took the conflict into a new stage. The peace treaty had secured the western Israeli border; instead, the conflict moved to the northern border, with Israel invading southern Lebanon. Israel mostly withdrew from southern Lebanon a year later. The conflict began to take on a more diplomatic character, as war had become too costly for all sides, with the Palestinian leadership taking their right to self-determination to the United Nations General Assembly, where recognition was and is important to the Palestinian cause.
The Oslo Peace Accords began, with the Palestine Liberation Organization and Israel signing the first ever framework towards the resolution of the conflict. As per the framework, Israel would withdraw from Gaza City and Jericho, while continuing the occupation and control over the West Bank until a two-state solution was found. The following year saw Jordan & Israel sign their own peace treaty, making Jordan the custodian of the Al Aqsa Mosque in Jerusalem, the third holiest city in Islam.
Oslo II marked the next phase of the peace process between Israel and Palestine.
Israel withdrew from all of southern Lebanon, with the exception of a couple of villages that it continues to occupy. Over the next decade, the conflict remained frozen, with both sides engaged in a peace process that never substantially reached any positive outcomes.
The Israelis unilaterally disengaged from Gaza, moving all military outposts and settlements out of the Gaza Strip. However, Israel continued to control Palestinian airspace, entry, exit and territorial water zone of the Gaza Strip. Thus, the occupation remained in place.
Hamas won by a landslide majority in the 2006 Palestinian legislative elections. Israel, the US, and the EU, as well as other western countries, cut off their aid to the Palestinians as a result of the democratic elections, which did not go as they had expected, since they viewed the Islamist political party, which rejected Israel’s right to exist, as a terrorist entity.
Hezbollah infiltrated Israel in a cross-border raid, captured two soldiers and killed three others. After a failed rescue attempt in which five more Israeli soldiers were killed, Israel’s military responded with a large-scale attack that became the 2006 Israel-Lebanon conflict. The conflict led to the deaths of 1,191 Lebanese people and 165 Israelis in the one-month war. Approximately one million Lebanese and 300,000-500,000 Israelis were displaced.
The Battle of Gaza began, leading to Hamas taking control of the Gaza Strip from Fatah.
After rockets were fired from the Gaza Strip into Israel, Operation “Hot Winter” was launched by Israel, resulting in 112 Palestinian deaths and 3 Israeli deaths.
Israel launched a full-scale invasion of Gaza, code-named Operation “Cast Lead”. The 22 days of fighting between Israel & Hamas only ended after each declared separate unilateral ceasefires. The casualties of what became known as the Gaza War are disputed, but according to testimony gathered in three Guardian films, 1,400 Palestinians were killed, including more than 300 children.
Turkish activists with the Free Gaza flotilla tried to break Israel’s naval blockade of Hamas-controlled Gaza, but were intercepted by the Israeli Defence Forces (IDF). After an altercation on board the ship, nine Turks were shot dead by IDF gunfire.
Following the Islamic Revolution in Iran, about 30,000 Iranian Jews migrated to Israel.
Palestine became a full member of UNESCO, the education and cultural arm of the United Nations.
The United Nations General Assembly voted to upgrade Palestine to non-member observer state status in the UN through resolution 67/19. It was adopted by the 67th session of the UN General Assembly on the International Day of Solidarity with the Palestinian People.
Israeli jets & helicopters launched dozens of air strikes across the Gaza Strip overnight, just hours after the bodies of three abducted Israeli teenagers were found in a shallow grave near the southern West Bank city of Hebron. Following the discovery of the bodies, Israeli Prime Minister Netanyahu issued a statement blaming Hamas. Hamas denied involvement. In retaliation for the news about the three abducted Israeli teenagers, 16-year-old Mohammed Abu Khdeir was kidnapped by Israelis who beat him and burned him alive. They later confessed. Two weeks later, thousands of Israeli soldiers backed by tanks initiated an invasion of the Gaza Strip. All border areas came under fire, with tank shelling occurring every minute.
An increase in violence occurred in the Israeli-Palestinian conflict starting in early September 2015 and lasting into the first half of 2016, known as the “intifada of the individuals”. Some commentators have attributed the increase in Palestinian violence against Israelis to the spread of social media and the ongoing frustration over the failure of peace talks to end the decades-long occupation and the suppression of human rights.
Hamas signs a reconciliation deal intended to transfer administrative control of Gaza to the Palestinian Authority, but disputes stalled the deal’s implementation. US President Donald Trump also recognizes Jerusalem as the capital of Israel, infuriating the Arab world and western allies.
An upsurge in violence on the Gaza Border from March to August led to a long-term ceasefire being brokered by the UN and Egypt between Israel and Hamas.
There is no peace process in place. Violence & tension is rising again. Quality of life in Palestinian territories is decreasing. Israeli settlements continue to be built in violation of international law.
Israel has approved the construction of at least 6,000 new homes for Jewish settlers in the occupied West Bank. At the same time, it gave the green light for the construction of 700 new homes for the Palestinians, an official Israeli source reported, on condition of anonymity. The announcement for Area C housing comes ahead of an expected visit to Israel on Wednesday by US envoy Jared Kushner, son-in-law of US President Donald Trump.
At least three Palestinians have been killed by Israeli forces in the north of the Gaza Strip, according to Palestinian health officials and local media, hours after three rockets were allegedly fired from the blockaded enclave.
Hezbollah militants in Lebanon on Sunday fired a barrage of anti-tank missiles into Israel, prompting a reprisal of heavy Israeli artillery fire in a rare burst of fighting between them. Although the shooting quickly subsided without casualties on either side, the situation remained volatile. The bitter enemies, which fought a month-long war in 2006, have indicated they do not want to go to war but appeared on a collision course in recent days after a pair of Israeli strikes against Hezbollah. The militant group vowed it would retaliate.
Embattled Israeli Prime Minister Benjamin Netanyahu, running in an election that could be the fight of his political life, said he hopes to annex all Jewish West Bank settlements. Israel will build more settlements and won’t uproot settlers, Netanyahu said Sunday in a speech in the West Bank settlement of Elkana.
Qatar has cut the amount of fuel it funds for the Gaza Strip by half, sources in the Palestinian Energy Authority told Haaretz Sunday. As a result, Gazans will now get only five to six hours of electricity per day, down from the eight they were getting until now.
Israeli lawmakers have given the go-ahead to a small settlement in the West Bank, following an election campaign pledge from PM Benjamin Netanyahu to annex the Jordan Valley to Israel if he wins Tuesday’s polls.
The Palestinians condemned the Israeli government on Sunday for holding its weekly cabinet meeting in the Jordan Valley, and accused it of “undermining any chance for achieving a just and everlasting peace based on international legitimacy and the two-state solution.” “We reject and condemn this action,” said Palestinian Authority presidential spokesperson Nabil Abu Rudaineh, adding that convening the cabinet meeting in the Jordan Valley “will not give any legitimacy to settlements built on the 1967 lands of the State of Palestine, including Jerusalem.”
Israel’s election put the incumbent leader Benjamin Netanyahu neck and neck with Mr Gantz, and the two are now vying to build a governing coalition. The Joint List, the bloc of Arab parties that came in third, says it wants to oust Mr Netanyahu from power. This is the first time since 1992 that an Arab political group has issued an endorsement for prime minister.
Israeli forces arrested a prominent Palestinian politician from her home in the occupied West Bank city of Ramallah overnight on Thursday. Khalida Jarrar, a former member of the defunct Palestinian Legislative Council, was
arrested at 3am local time (00:00 GMT) and taken to an unknown area, local media reported.
Israeli air raids on the Gaza Strip have killed one person, Palestinian officials have said. The Gaza Ministry of Health said 27-year-old Ahmed al-Shehri was killed during pre-dawn attacks on Saturday. Two others were wounded in the strikes.
Jordan on Sunday received two stretches of land it had allowed Israel to use for decades, amid tense relations between the neighbours 25 years after they signed a landmark peace deal.
In 2018, amid mounting public pressure not to renew the arrangement relating to the two territories, Jordan’s King Abdullah II submitted a one-year notice of termination to Israel.
Amman strongly backs the establishment of a Palestinian state and has been frustrated by the lack of progress in the Israeli-Palestinian peace process.
The U.S. no longer will consider Israeli settlements to be illegal under international law, officials said Monday, in a move that formalizes the Trump administration’s treatment of the West Bank and shifts decades of U.S. policy.
The ICC’s chief prosecutor, Fatou Bensouda, announced last week that there is a basis to investigate Israel for its actions in the occupied Palestinian territories, particularly the deadly airstrikes during the 50-day Gaza war in 2014, but she is first asking the court whether it has jurisdiction there.
The prosecutor would also like to investigate shootings by Israel Defence Forces of Palestinian protesters on the Gaza-Israel border during demonstrations in the spring of 2018 when Palestinian-Canadian doctor Tarek Loubani was wounded by an Israeli sniper while providing medical care.
The formal investigation that Ms. Bensouda is prepared to initiate would also examine possible war crimes by Hamas, the militant Islamic group that rules Gaza.
A group of British MPs has called for the UK to recognize the state of Palestine ahead of a visit by Prince Charles to Israel and the occupied Palestinian territories.
In a letter to The Times, the MPs, along with figures from think tanks and pressure groups, said the move was long overdue and would help fulfill Britain’s “promise of equal rights for peoples in two states.”
Palestinian leaders threatened to withdraw from key provisions of the Oslo Accords, which define arrangements with Israel, if US President Donald Trump announces his proposal for Israel and Palestine this week.
US President Donald Trump has presented his long-awaited Middle East peace plan, promising to keep Jerusalem as Israel’s undivided capital. He proposed an independent Palestinian state and the recognition of Israeli sovereignty over West Bank settlements. Standing alongside Israeli PM Benjamin Netanyahu at the White House, Mr Trump said his proposals “could be the last opportunity” for Palestinians. Palestinian President Mahmoud Abbas dismissed the plans as a “conspiracy”.
Joining global critics of a plan that President Donald Trump unveiled last week to address the decades-long Israel-Palestine conflict, the Organization of Islamic Cooperation on Monday rejected the “biased” proposal and urged member states not to cooperate with U.S. efforts to enforce it.
Over 100 Democrats in the House of Representatives issued stark criticism of the plan U.S. President Donald Trump released to end the Israel-Palestine conflict, saying it would ultimately lead to greater hostilities if enacted.
With the Palestinian President present in the Council chamber, together with Israel’s Ambassador, the UN chief reiterated the Organization’s continued support for a two-State solution: “This is a time for dialogue, for reconciliation, for reason”, he said.
After years of unexplained delays, the United Nations released a list of over 100 companies with ties to illegal Israeli settler colonies in the occupied West Bank of Palestine. In a statement, the UN Human Rights Office identified 112 businesses profiting from the Jews-only settlements.
Of those, 94 are based in Israel, while 18 are headquartered in countries including the United States, United Kingdom, France, the Netherlands, Luxembourg and Thailand. The UN report is a response to a 2016 United Nations Human Rights Council (UNHRC) resolution calling for a “database for all businesses engaged in specific activities related to Israeli settlements in the occupied Palestinian territory.”
Democratic hopeful Bernie Sanders tells a town hall in Nevada that it is time for the US to adopt a more balanced policy in the Middle East, describing the current Israeli government as “right-wing” and “racist”.
During a visit to Jabal Abu Ghneim, an illegal West Bank settlement, the Israeli Prime Minister vows to construct 5,000 new homes in East Jerusalem.
A Palestinian man, allegedly attempting to carry out a stabbing attack near the Old City in Jerusalem, is shot dead by Israeli police near the Bab al-Asbat.
The Israeli military confirms that it has shot dead a Palestinian man in the Gaza Strip, after he allegedly attempted to place an explosive device near a security fence. A bulldozer is reportedly used to remove the body.
Six people are killed in an Israeli bombing raid on the Syrian capital Damascus, including two members of the armed group PIJ. Raids are also carried out in the Gaza Strip.
Palestinians in the West Bank took to the streets to protest against the clearing of land by Israeli bulldozers. It is feared that the clearing is to make way for further illegal Israeli settlements.
A state of emergency is declared in the occupied West Bank over the coronavirus pandemic.
The first Palestinian cases of coronavirus are confirmed in Bethlehem, prompting the Palestinian authority to place the city on lockdown.
A Palestinian teenager is shot dead in the town of Beita near Nablus in the West Bank, as Israeli security forces open fire on a Palestinian demonstration.
The Al-Aqsa Mosque and the Dome of the Rock are closed to worshippers to protect against the spread of Coronavirus.
Authorities in the Gaza Strip shut cafes and restaurants and suspend Friday prayers as the area’s first Covid-19 cases are confirmed.
The Palestine Liberation Organisation’s (PLO) Commission for Prisoners Affairs calls upon the UN to pressurise Israel into releasing Palestinian captives to help curb the spread of the coronavirus.
22-year-old Islam Dweikat dies three weeks after sustaining wounds inflicted by rubber bullets fired on protestors by the Israeli military in mid-March.
Adnan Ghaith, the Palestinian governor of Jerusalem, is arrested by Israeli authorities over alleged “illegal activities”.
The UN envoy for Palestine urges all sides in the ongoing Israeli-Palestinian feud to stop fighting each other in order to focus on tackling the coronavirus outbreak.
Palestinians mark Prisoners’ Day as fears grow for those at risk of coronavirus in Israeli prisons.
Benjamin Netanyahu and rival Benny Gantz agree to form emergency coalition government, ending months of stalemate. Under the three-year agreement, Netanyahu is to remain Prime Minister for one-and-a-half years, with Gantz taking over thereafter.
Meanwhile, Palestinians take to the street to demonstrate solidarity with women exposed to worsening domestic abuse amid the coronavirus lockdown.
Ibrahim Halsa, a Palestinian who stabbed an Israeli police officer near the Jewish settlement of Maale Adumim in the West Bank, is shot dead by Israel forces.
Meanwhile, US Secretary of State Mike Pompeo says that it is ultimately up to Israel whether or not they make further annexations of Palestinian territory. The UN and EU warn Israel against such action.
Israeli Prime Minister Benjamin Netanyahu announces during an online video address that he is confident the US will give its approval to further annexations by Israel of West Bank territories within the next two months.
The US State Department confirms the Trump administration is prepared to recognise Israel’s annexation of large swathes of the West Bank. Palestinian spokesmen say such actions will prevent any chance at a future two-state solution.
The presumptive Democratic presidential nominee Joe Biden confirms that, if victorious in this year’s election, he would not move the US embassy in Israel back to Tel Aviv from Jerusalem (albeit, with the dubious caveat that Trump’s initial decision to do so was a “mistake”).
Meanwhile, the German government bans Hezbollah – a decision which is praised by the Israeli government.
The Arab League releases a joint statement condemning Israel’s US-approved plans to expand their annexation of territory in the Palestinian West Bank.
Hundreds take to the streets of Tel Aviv to protest against the newly-formed coalition government. Organised by the Movement for Quality Government in Israel, the demonstrators contest the legality of Netanyahu and Gantz’s power-sharing deal given the impending trial of prime minister Netanyahu on charges of corruption.
The Supreme Court of Israel begins to hear arguments regarding the legality of Prime Minister Benjamin Netanyahu, who has been indicted on charges of corruption, forming a new government.
Meanwhile, the Palestinian Authority and Israel reach an agreement to allow some 40,000 Palestinians to cross the border into Israel to return to work, mainly in the agricultural and construction sectors.
The governing body in the West Bank, the Palestinian Authority, extends the length of the state of emergency declared in the wake of the coronavirus pandemic by a further month. The number of confirmed cases in the Palestinian territories is 362, including 4 deaths.
The Israeli Defence Force responds to an alleged Hamas rocket strike with tank fire at numerous Hamas military posts. No casualties are reported for either side.
The Israeli government announces plans to build a further 7,000 illegal houses in the West Bank, in anticipation of the visit of US Secretary of State Mike Pompeo next week. The announcement comes just hours before the country’s Supreme Court dismisses two legal challenges against the government elect, paving the way for the Netanyahu-Gantz coalition to take office pending parliamentary approval in the coming week.
The Israeli parliament votes in favour of the coalition deal arranged between Benjamin Netanyahu and Benny Gantz by 71 to 37, bringing an end to over a year of political deadlock in the country.
The US Secretary of State Mike Pompeo is set to visit Israel within the next week to discuss the new unity government’s plans to annex areas of the occupied West Bank.
James Cleverly, the UK’s minister of state for the Middle East and North Africa, tells parliament that the government’s position is opposed to the annexation by Israel of parts of the West Bank, as it would compromise attempts at a lasting two-state solution.
A 21-year-old Israeli soldier is killed after being hit in the head by a rock thrown from a roof in Yabad village close to Jenin. In a separate incident, a Palestinian is left in critical condition after attempting to stab security staff at Qandiya checkpoint.
Zaid Fadl Qaisia, a 15-year-old Palestinian, is shot in the head and killed by an Israeli soldier during clashes in the West Bank’s al-Fawar refugee camp. The murder occurs just a day prior to the 72nd anniversary of the founding of the state of Israel.
Following 500 days without a fixed government, the Israeli Knesset approved the new coalition government of Benjamin Netanyahu and Benny Gantz by 73 votes to 49. Netanyahu vows to “write another glorious chapter in the history of Zionism” by illegally declaring sovereignty over Jewish settlements in the West Bank.
Amiram Ben-Uliel, a settler in the occupied West Bank, is convicted by an Israeli court of racially motivated murder after a 2015 arson attack in the village of Duma, near Nablus. The attack killed 18-month-old baby Ali Dawabsheh and his parents.
Palestinian President Mahmoud Abbas announces that the PLO and the Palestinian State are no longer bound by agreements signed with Israel and the United States. The comments come during an emergency meeting held in Ramallah to discuss the impending Israeli annexation of one third of the occupied West Bank.
Nickolay Mladenov, the UN special Middle East envoy, calls on Israel to abandon its plans to annex large parts of the West Bank and urges Palestinians to resume talks with major powers about moving towards peace.
The Biden campaign confirms that it ‘firmly rejects’ the Boycott, Divestment and Sanctions movement across the globe, alleging that criticism of Israel from the left too often “morphs into anti-Semitism”.
Meanwhile, the Palestinian Authority (PA) rejects a shipment of aid from the UAE which had arrived via Tel Aviv, accusing the emirate’s government of undermining Palestinian sovereignty by failing to coordinate the transfer with them.
A 77-year-old woman becomes the first confirmed coronavirus-related death in Gaza, after dying in a hospital close to the Rafah Crossing. Concerns grow that the recent decision to allow some Palestinians stranded in Egypt to return home may have led to a spike in cases.
Israeli Prime Minister Benjamin Netanyahu’s corruption trial begins in Jerusalem District Court, one week after he was sworn in for his fifth term in office. He is accused of accepting bribes, breach of trust and fraud.
The Palestinian Authority (PA) announces a reduction in the restrictions imposed upon the population inside the occupied West Bank as a result of the coronavirus pandemic. Shops are set to open from 26 May alongside the resumption of public transport services, with government staff returning to work the following day. The Palestinian coronavirus count stands at over 400 confirmed cases and 3 deaths.
32-year-old Iyad el-Hallak is shot dead by Israeli police in occupied East Jerusalem. The victim attended a special needs school near the place he was shot. Police allegedly suspected him of carrying a pistol; they shot him and left him to bleed to death in the street.
The World Bank warns of a potential doubling of poverty in the West Bank after the fallout from the coronavirus pandemic. Forecasts warn that the economy could shrink by as much as 11% in the next quarter, leading to fears that the number of West Bank Palestinians living below the poverty line could become as high as 30%.
Meanwhile, Anwar Gargash – a senior UAE official – warns against Israel taking unilateral action to annex large sections of the West Bank in line with Donald Trump’s “deal of the century”. Gargash claims the move would constitute a “setback” for peace negotiations.
Protests spread among Palestinian communities across Israel following the murder of an autistic Palestinian man by Israeli police. Thirty-two-year-old Iyad el-Hallak was shot twice in the torso while on his way to a special needs school in East Jerusalem. The wounds were fatal.
Mansour Abu Wardieh, the cousin of the autistic man killed by Israeli police on his way to school last week, says that the Israeli inquiry into the circumstances surrounding his death “means nothing” and that they will likely “twist the facts”. Of the 3,408 Palestinians killed by security personnel within the occupied territories in the last 10 years, only five cases have resulted in convictions.
Taxes collected on Palestine’s behalf by Israel – an arrangement dating to the 1990s’ Oslo Accords – are refused by Palestinian President Mahmoud Abbas as part of a concerted effort by the Palestinian Authority to undermine any coordination with the Israeli state in response to the impending annexation of parts of the West Bank. Refusing the taxes, worth approximately $190m per month, will leave a major hole in the budget of the Palestinian Authority.
Ramadan Shallah, the former leader of Palestine Islamic Jihad (PIJ) from 1995 until 2018, dies two years after slipping into a coma. Shallah had been suffering from complications due to heart and kidney disease.
Meanwhile, in Tel Aviv, left-leaning protesters rally against the proposed annexation by the Israeli government of the Jordan Valley and Jewish settlements in the West Bank. Thousands turn out with some waving Palestinian flags, although opinion polls indicate that a majority of the country’s inhabitants support the government’s plans.
The Palestinian Authority (PA) sends a counter-proposal to Trump’s Middle East Plan, which proposes the establishment of a sovereign, demilitarised Palestinian state in Gaza and the West Bank with East Jerusalem as its capital. The deal would allow for potential revisions of borders, as long as any land-swaps were equal in size.
The legal representative of the murdered Palestinian Dawabsheh family, who were killed in an arson attack carried out by Jewish settler Amiram Ben-Uliel while they slept at home in July 2015, has called for the maximum possible sentence to be given to the perpetrator. This would constitute a term greater than a life sentence for a man who spray-painted “Long Live King Messiah” on the house before torching it, assuming (correctly) that people were inside.
German Foreign Minister Heiko Maas warns Israel against its plans to annex large swathes of the West Bank in coming months, saying that it would violate international law during a visit to Jerusalem.
Israeli officials confirm they will build a new settlement in the illegally occupied Golan Heights called ‘Ramat Trump’ in honour of the US President. It is due to be a significant expansion of the current settlement in the area known as Bruchim, which houses just 10 people.
Salah al-Bardawil, a senior Hamas official, says that unity will be the “bedrock of national strength” when resisting Israeli plans to annex large swaths of the West Bank. Bardawil called for a “union of political leadership” between the Hamas-run Gaza Strip and the Palestinian Authority (PA)-administered West Bank.
For the first time since the Palestinian Authority (PA) announced that it would end security coordination with Israel in light of their proposed annexation of significant portions of the occupied West Bank, the Israeli army raids Ramallah. One man is arrested during the infiltration of a refugee camp where tear gas was used against the local population.
Meanwhile, nearly fifty independent experts sign a joint statement condemning Israel’s proposed annexation of parts of the West Bank as “unlawful”.
Jordan’s foreign minister Ayman Safadi visits Ramallah to meet with Palestinian President Mahmoud Abbas, during which meeting he condemns Israel’s annexation plans. Safadi warns that any such unilateral action undertaken by Israel would pose a serious threat to stability in the region.
An enormous demonstration against Israel’s plans to annex large swathes of the West Bank takes place in Jericho with thousands in attendance. Many diplomats also attended the rally, including the UN’s peace envoy for the Middle East Nickolay Mladenov.
The UN’s secretary-general Antonio Guterres condemns Israel’s plans to annex around a third of the occupied West Bank, describing the proposed action as a violation of international law. Speaking before the UN Security Council, Guterres warns that the move could spell the end of any hopes for a future two-state solution to the Israeli-Palestinian conflict.
Meanwhile, a 27-year-old Palestinian who was related to the secretary-general of the PLO is shot dead by Israeli border police after allegedly trying to run over a female officer. The Israeli account of Ahmad Erakat’s death is dismissed by Palestinian officials, who cite the fact that he was due to be married the following month as evidence that he would not have carried out such a crime.
Over 1,000 parliamentarians from across Europe join together to sign a letter condemning Israel’s plans to annex large parts of the occupied West Bank, warning that the move could spell the end of any hopes of a solution to the near century-old conflict.
Despite having no formal diplomatic ties, the UAE and Israel announce that they will work together to fight against the spread of coronavirus in the region. The commitment to sharing research and technology comes a month after the first flight was made by a UAE carrier to Israel in May.
Two rockets are fired at Israel from the Gaza strip for the first time since May. The strikes come the day following an announcement by Hamas that Israel’s proposed annexation of parts of the West Bank constitutes a “declaration of war” against the Palestinians.
The UN High Commissioner for Human Rights Michelle Bachelet condemns Israel’s proposed annexation of 30% of the West Bank, stating that such a move would be both “illegal” and “disastrous” for the stability of the region. The comments come following indications by alternate Israeli Prime Minister Benny Gantz that annexation plans may not go ahead this week as initially planned.
Civil society organisations in Palestine come together to reject EU funding which is conditional upon these organisations abiding by the controversial “anti-terror clause”. This clause, only included in EU grants within the last year, calls upon such groups to vet members to make sure they do not have links to seven bodies identified by the EU as “terrorist groups”. Palestinian organisations have accused this policy of ignoring the political reality of occupation and forcing them to police their fellow nationals.
Palestinians gather in Gaza City and the seat of the Palestinian Authority, Ramallah, to demonstrate against Israel’s proposal to annex one-third of the West Bank. Despite no real moves being taken by Israel on the provisional date announced by Prime Minister Benjamin Netanyahu amid disagreements with his coalition partner Benny Gantz, fears remain that annexation is still imminent.
Fatah and Hamas, rival factions in Palestinian politics, pledge to work together to oppose Israel’s planned annexation of one-third of the West Bank. In a joint video conference, the groups agreed to set aside their differences to create a united front against the plans.
Israeli jets target Hamas positions in the Gaza Strip in response to the alleged firing of 3 rockets from the area in recent days. The increase in tension comes just four days after missiles were fired into the sea from Gaza as a warning to Israel over its annexation plans.
The foreign ministers of Egypt, Jordan, France and Germany warn Israel of ramifications for their respective relationships with the state should the Netanyahu government press on with its plans to annex large parts of the West Bank. In a joint statement, the ministers warned of dire consequences for peace and security in the region should the move go ahead.
Israelis gather outside the residence of Benjamin Netanyahu in Jerusalem to protest the Prime Minister’s handling of the coronavirus pandemic. The country’s levels of unemployment have increased dramatically in the months since the start of the crisis, from 3.4% in February to 27% in April.
Protests against the alleged corruption of Israeli Prime Minister Benjamin Netanyahu continue outside of his Jerusalem residence, with demonstrators also angered by the large increase in unemployment consequent upon the coronavirus pandemic.
The Jordanian Prime Minister Omar al-Razzaz speaks in support of the possible creation of a democratic, binational Israeli-Palestinian state should the widely favoured “two-state solution” no longer prove possible.
Protestors continue to gather outside the official residence of Israeli Prime Minister Benjamin Netanyahu to protest his government’s handling of the coronavirus pandemic and the premier’s own alleged corruption. These represent the largest such demonstrations against the Israeli government in a decade.
Meanwhile, the UN’s special envoy for Middle East peace tells the Security Council that efforts to combat the coronavirus in the area have been hampered by the breakdown in the relationship between Israeli and Palestinian authorities following Israel’s threat to annex one-third of the West Bank. Nickolay Mladenov’s comments come shortly after Israeli forces destroyed a quarantine centre in the Palestinian city of Hebron.
At least twelve people are arrested in Jerusalem following continued protests against the government of Benjamin Netanyahu. Demonstrators have taken to streets for weeks to voice opposition to the alleged corruption of the Israeli Prime Minister and his government’s handling of the coronavirus crisis.
Israeli forces and Hezbollah fighters exchange fire on the border between Lebanon and Israel. No fighters are killed in the fighting, which Israeli army spokesmen blame on the infiltration of Hezbollah forces into Israeli territory.
Hezbollah has accused the Israeli army of fabricating the clash on its border in order to create “false victories” after denying any infiltration by its troops of Israeli territory. The Lebanon-based armed group has, however, promised to exact revenge for the killing of one of its fighters in an Israeli air strike in Syria last week.
Demonstrations demanding the resignation of Israeli Prime Minister Benjamin Netanyahu continue outside his official residence in Jerusalem. Some 10,000 protesters reportedly gathered outside of the Premier’s home, demanding that he step down in light of allegations of corruption and his government’s handling of the coronavirus crisis.
Israeli Prime Minister Benjamin Netanyahu delivered a six-minute rant at a meeting of his cabinet, slamming the protests which have swept across the country in recent weeks calling for his resignation. Netanyahu also criticised Israeli media for allegedly fanning the flames of the protests by giving them disproportionate coverage.
Israel has struck what it has described as underground Hamas “terror facilities” in response to the alleged firing of a rocket into southern Israel. The move comes hours before the army attacks a group of four people who it claims were planting explosives along the Israeli-Syrian de facto border.
In apparent retaliation for the alleged planting of explosives along Syria’s de facto border with the Israeli-occupied Golan Heights, the Israeli army claims to strike observation posts, anti-aircraft batteries, intelligence collection systems, and bases in southwestern Syria. Israel holds the Syrian government responsible for the explosive-planting incident, in which all four of the alleged perpetrators were apparently killed.
Dalia Samudi, a 23-year-old Palestinian woman, is killed by an Israeli army gunshot wound inflicted while she sat at home near to the site of clashes between occupation forces and local Palestinian youths. The Israeli army denies using live ammunition, accusing the youths of firing on them.
The protests against Benjamin Netanyahu’s alleged corruption and his government’s poor handling of the coronavirus crisis continue outside of his official residence in Jerusalem. Numbers appeared to have increased on recent outings, with many hooting horns and chanting anti-Netanyahu slogans.
The Israeli army has moved in to begin its demolition of Palestinian homes in the occupied West Bank, as the country begins its illegal annexation of substantial swathes of territory there. One family’s home has already been demolished, with the Israeli government claiming it is because it was built “without a permit”.
Israel launches overnight attacks on Hamas positions in Gaza, allegedly in response to the release of floating fire balloons from the area in recent days. No casualties are reported despite tanks, fighter jets and attack helicopters being used to attack various positions.
A US-brokered deal sees the UAE and Israel reach an agreement for the normalisation of relations between the two Middle Eastern countries. The UAE joins Egypt and Jordan as the only Arab nations to establish active relations with the state, a move Palestinian spokesmen described as a “stab in the back”. In exchange for the normalisation of ties, Netanyahu’s government has agreed to temporarily postpone its annexation of large parts of the West Bank.
Meanwhile, continuing to cite the alleged launching of incendiary balloons by Hamas forces in Gaza as the motive, Israel steps up its assault on the Palestinian enclave of Gaza. Warplanes carried out bombing raids on sites in Beit Hanoun and Rafah, causing extensive damage to infrastructure but no civilian casualties.
Protests over the Israeli government’s alleged corruption and mishandling of the coronavirus continue outside of the official residence of Prime Minister Benjamin Netanyahu, despite the recent announcement of an historic peace deal with the UAE.
Meanwhile, the Israeli air force resumes its aerial attacks on Hamas positions in the Gaza Strip in apparent retaliation for the repeated launching of incendiary balloons from the area. The heightened tensions of recent days have culminated in the closing by Israel of the Karem Abu Salem border crossing with the Palestinian enclave.
A deaf Palestinian is shot and wounded while walking in a vehicles-only area near the Qalandiya Crossing in the West Bank. Security personnel ordered the man to stop – a command he did not hear – before shooting him in the legs. The attack occurs the same day that Israel closes the fishing zone off the coast of Gaza, allegedly in response to fire balloons launched in recent days. Tanks also target Hamas targets in the strip.
Meanwhile, Jared Kushner signals that the US will not consent to Israel’s proposed annexation of large swathes of the West Bank for “some time”, as its diplomatic efforts will be focused on ensuring the success of the Israeli-UAE accord of normalisation.
After allegedly attempting to carry out a stabbing attack, a Palestinian man is shot dead by Israeli forces in the occupied Old City of East Jerusalem. The man was shot near the Al Aqsa Mosque compound and later pronounced dead. The man’s death comes as Israel continues its bombing of Gaza for the seventh night straight.
The Israeli bombing raids on Hamas posts in Gaza continue into the eighth consecutive night, amid accusations that a rocket was fired over the border into southern Israel. Israeli President Reuven Rivlin warns of "war" if the clashes continue to escalate. Owing to punitive measures, hospitals in the Gaza Strip are able to operate on only four hours of power supplied by the Israeli grid. The attacks come as protests sweep across the West Bank and Gaza in response to the normalisation of relations between Israel and the UAE.
Israeli tanks shell Gaza for the ninth consecutive night, with Israeli officials citing the continued launching of incendiary balloons over the border as the provocation. Both Egypt and Qatar are attempting to mediate between the two sides in order to broker a de-escalation in tensions.
Meanwhile, in the West Bank, three Palestinians are shot during an ambush set up by the Israeli security forces. One of the victims, 16-year-old Mohammad Damir Matar, dies from his wounds, with the others taken to a nearby hospital. Matar's family refused permission for an autopsy to be performed for fear that his organs would be illegally harvested at the infamous Abu Kabir Forensic Medical Institute, where his body is being held.
Allegedly responding to the firing of a rocket into the south of the country – an attack which was intercepted by the country’s Iron Dome defence system without causing any damage – Israel attacks Hamas positions in Gaza. The strikes represent a continuation of shelling of the strip by Israel, which has been taking place almost daily since August 6th.
Protests against Benjamin Netanyahu continue, with seven protesters arrested while demonstrating outside of the Prime Minister’s official residence in Jerusalem.
Meanwhile, an Israeli drone is shot down by Hezbollah after crossing over the Blue Line demarcating the nominal border between Israel and Lebanon. The drone hits the ground near the town of Aita al-Shaab.
Israel’s near month-long bombardment of the Gaza Strip continues, with rockets allegedly targeting Hamas military positions and tunnels in response to the launching of incendiary balloons from the territory in recent weeks. The attacks, coupled with an increase in the severity of the blockade imposed on the Palestinian enclave, have left Gaza largely without power since 6 August.
Meanwhile, the US Secretary of State Mike Pompeo begins a tour of the Middle East in Jerusalem. He opens the tour by assuring Israel that the US will ensure it retains a "qualitative military edge" in the region, amid concerns that the recent normalisation of ties with the UAE may lead the Gulf state to obtain sophisticated weaponry from the US.
Basem Naim, a Hamas official, tells the Al Jazeera news agency that the territory will "not remain silent" in the face of increased Israeli pressure on inhabitants of the area, despite being eager to prevent tensions from "exploding" into another full-blown war. The comments come as Israel launches strikes on the strip for the 15th consecutive night.
Bahrain's King Hamad has intimated to the US Secretary of State Mike Pompeo that his country's commitment to a Palestinian state remains, despite pressure from the US administration to normalise ties. The statement arrives as Israel's bombardment of the Gaza Strip continues into its 16th consecutive day and another flare-up occurs on the northern border with Lebanon.
The Israeli Supreme Court has accepted a petition by Palestinian claimants who alleged that houses in the Mitzpe Kramim settlement were built on privately owned Palestinian land and consequently ought to be removed. The state will be allowed 36 months to re-house the settlers.
The three-week flare-up between the Israeli military and Hamas in Gaza continues to escalate as Israeli tanks and warplanes bomb Hamas positions in the strip. In retaliation, rockets were fired into Israel. No casualties are reported on either side. Attempts to end the confrontations have become more urgent in light of a lockdown implemented in the wake of new positive coronavirus tests.
Protests against Israeli Prime Minister Benjamin Netanyahu continue to grow in strength, as around 37,000 people are believed to have joined anti-government protests in the capital Jerusalem. The demonstrators continue to demand the resignation of Netanyahu as he goes on trial for corruption – a process that takes place while unemployment sits above 20%.
Meanwhile, tensions between Israel and the Gaza Strip continue as incendiary balloons and tank-fire are exchanged over the border once again.
Hamas announces a Qatari-mediated deal to end the month-long flare-ups across the Gaza-Israeli border. The deal appears essentially to restore the situation prior to the beginning of cross-border fire on 6 August, meaning that Hamas will prevent incendiary balloons from being launched into Israel in exchange for an easing of import restrictions, a restoration of fuel supplies, and the right for fishermen to resume fishing in the Mediterranean.
Qatari Emir Sheikh Tamim bin Hamad Al Thani tells visiting White House adviser Jared Kushner of his country's continued support for a two-state solution to the Israeli-Palestinian conflict, one which would see an independent Palestinian state with its capital in East Jerusalem. The communication comes as US officials tour the region in an attempt to increase the number of states willing to normalise relations with Israel.
Israeli Prime Minister Benjamin Netanyahu announces that Serbia will become the first European nation to move its embassy from Tel Aviv to Jerusalem. The move is expected to be completed by 2021, with Kosovo anticipated to join them and in so doing become the first Muslim-majority country to recognise Jerusalem as Israel’s capital.
Protests against Israeli Prime Minister Benjamin Netanyahu enter their eleventh straight week. Demonstrators gathered in their thousands outside of the official residence of the Prime Minister in Jerusalem to make known their displeasure with Netanyahu’s handling of the coronavirus pandemic and his ongoing corruption trial.
Saudi Arabia’s King Salman bin Abdulaziz spoke to Donald Trump by telephone to urge the US President to pursue a peace initiative in the Middle East which is fair to Palestinians. The conversation took place on the same day that an Israeli soldier knelt on the neck of an elderly Palestinian man protesting the building of an illegal Israeli settlement in the occupied West Bank.
US President Donald Trump announces that Bahrain will become the fourth Arab state to normalise relations with Israel, following the UAE’s decision to do so in recent weeks. Palestinian officials are united in their condemnation of the move, which they describe as a “stab in the back”. The Palestinian ambassador to Bahrain is withdrawn from the country in response to a move which seems to have been primarily motivated by a fear of Iran.
Palestinians take to the streets across the Gaza Strip to demonstrate against the recent US-brokered peace deal between Bahrain and Israel. Protestors burn pictures of US, Bahraini and UAE leaders at the Hamas-organised demonstrations.
For the twelfth consecutive week, thousands of protestors gather in the Israeli capital Jerusalem to demonstrate against Prime Minister Benjamin Netanyahu. The demonstrators outside his official residence continue to cite his ongoing corruption trial and poor handling of the coronavirus pandemic as reasons for their demand that he resign.
Meanwhile, Bahraini opposition parties condemn the decision of the government to normalise ties with Israel. Ayatollah Sheikh Isa Qassim – an influential Shia cleric based in Iran – calls upon the people of the region to resist the move.
Israel announces a three-week national lockdown in an apparent attempt to curb the rise in coronavirus cases across the country. The announcement comes ahead of the Jewish holiday season, which runs from 18 September to 9 October, and follows an increase in the new daily infections to over 4,000. The lockdown will also signal an end to the anti-government demonstrations, which have taken place in the capital for 12 consecutive weeks.
Meanwhile, the Palestinian Prime Minister Mohammed Ishtayeh states his intention to recommend to President Mahmoud Abbas that Palestinians reconsider their relationship with the Arab League, calling the body “a symbol of Arab inaction”. The announcement comes the same day that an Israeli court issues a demolition order for a mosque in an occupied town in East Jerusalem, with residents of the town allowed 21 days to challenge the decision.
Several Palestinian factions, including Fatah and Hamas, work to restore unity between the Gaza Strip and the West Bank as Israel continues to improve relations with surrounding Arab states. The aim of the unified leadership will be to lead collective, popular resistance to Israeli domination. These attempts take place while Qatar rules out normalising ties with Israel, claiming it cannot be the solution to the ongoing conflict.
Meanwhile, the UAE-Israel and Bahrain-Israel normalisation deals are signed into force. Palestinians continue to protest the moves, with over 200,000 signing an anti-normalisation charter.
Israel fires rockets into the Gaza Strip in response to the alleged firing of rockets over the border by Hamas. The exchange follows the signing by Bahrain and the UAE of normalisation deals orchestrated by the US government.
Honduras states its intention to move its embassy in Israel from Tel Aviv to Jerusalem, following in the footsteps of the US. Honduras has the second-largest Palestinian population in Latin America after Chile.
Palestinian officials have stepped down from their role as chair of ongoing Arab League meetings, in protest at recent agreements by Arab states to normalise ties with Israel. Foreign Minister Riyad al-Maliki claimed there was “no honour” in holding the position while fellow Arab states moved towards normalisation.
Meanwhile, the two rival Palestinian factions of Fatah and Hamas meet in the Turkish capital Ankara to discuss ways of bringing an end to the internal division of the national movement with a view to holding general elections.
The rival Palestinian factions of Fatah and Hamas reach agreement to hold the first Palestinian elections in over 14 years. The deal will see elections held within the next six months.
Protests against the endemic corruption of Israeli Prime Minister Benjamin Netanyahu's government – as well as his administration's handling of the coronavirus crisis – enter their 14th consecutive week, despite a nationwide lockdown having been announced.
Following 14 straight weeks of anti-government demonstrations, the Israeli administration passes a law curbing the freedom of people to protest during the nationwide coronavirus lockdown. The law, which prevents people from holding demonstrations further than a kilometre from their residence, has been accused by opponents of silencing criticism of the government.
Spokespersons for the Lebanese and Israeli governments have announced their agreement to a framework for talks over a proposed settlement to ongoing border disputes. The US-mediated negotiations will attempt to bring an end to a formal state of war between the two states.
Despite the imposition of a new law aimed at curbing anti-government protests during a nationwide coronavirus lockdown, demonstrators took to the streets of Israel for the 15th week running to call for the resignation of Prime Minister Benjamin Netanyahu. Around 100,000 people are believed to have joined the protests across the country.
The emergency law banning Israelis from demonstrating further than 1 kilometre from their homes is extended by a further week after a telephone vote. Critics continue to accuse the government of using the restrictions, which are ostensibly aimed at halting the spread of coronavirus, to curb dissent.
Months-long protests against Israeli Prime Minister Benjamin Netanyahu continue as tens of thousands of demonstrators gather across the country. The previous site of the protests – the official residence of the Prime Minister – was off-limits to demonstrators, due to a nationwide lockdown allegedly aimed at curbing the spread of the coronavirus.
As part of ongoing efforts to secure strategic regional partnerships in the run-up to the presidential election, the US urges Saudi Arabia to normalise ties with Israel. US Secretary of State Mike Pompeo urged the Gulf country to recognise the Jewish state and, in doing so, join a growing regional alliance against Iran.
Meanwhile, Israeli and Lebanese officials hold talks over a disputed maritime border. Both countries remain in a state of war and the talks are not believed to represent any sort of normalisation of ties.
2020 becomes one of the most prolific years for the building of illegal settlement homes by Israel in the occupied West Bank, as more than 3,000 are built and 12,000 approved in a year. These represent the highest annual figures since they started to be recorded in 2012.
The key PLO negotiator Saeb Erekat is taken to hospital in Israel 10 days after contracting the novel coronavirus, Covid-19.
Several senior UAE officials, including Finance Minister Obaid Humaid al-Tayer, make their first visit to Israel since the deal to normalise relations between the two countries. The visit sees representatives of each state sign four agreements concerning economic cooperation, aviation, visa exemptions and investments.
The Israeli military conducts yet another air raid against Hamas positions in the Gaza Strip, once again allegedly as a response to rocket fire across the border.
Nurses take to the streets in Gaza to protest an Israeli travel ban which has led to them losing their jobs at Jerusalem's Makassed Hospital. The group of nurses had all been working at the hospital for over 20 years.
At the behest of US President Donald Trump, Sudan and Israel agree to the normalisation of relations between the two countries. Palestinian groups condemn the move as another “stab in the back”.
Meanwhile, in response to the alleged firing of rockets over the border, the Israeli military launches an overnight air attack against targets in Gaza.
The Sudanese government’s decision to bow to US pressure and normalise relations with Israel is met with derision by opposition parties in the country, who say they will form a united front against the deal.
Allegedly responding to an incident involving the hurling of rocks at a security vehicle, Israeli forces severely beat a Palestinian teenager northeast of Ramallah. The teenager was beaten on the neck and later died from his injuries.
Following the signing of a string of agreements aimed at ensuring Israel’s qualitative military edge (QME) over rivals in the region, two legislators in the US Congress propose a bill requiring the US to consider selling bunker-buster bombs to Israel.
Situated in the illegally occupied West Bank, the Palestinian village of Khirbet Humsa – home to 74 residents – is demolished by Israeli army bulldozers and diggers. People were reportedly allowed only 10 minutes to vacate their homes before they were destroyed.
Maher al-Akhras has ended his hunger strike after 103 days. The Palestinian was imprisoned by Israel in July for allegedly being a member of an armed group and refused food in protest at his incarceration.
The United Nations Relief and Works Agency for Palestine Refugees (UNRWA) informs its staff that they will need to defer their salaries for the remainder of the year in light of a shortage of funds. Some 28,000 workers will be affected unless the organisation can raise $70m by the end of November.
A prominent Palestinian spokesperson and peace negotiator, Saeb Erekat, dies aged 65 after contracting the novel coronavirus. A member of the Fatah party, Erekat had been involved in negotiations over a peaceful settlement to the Israel-Palestine conflict for over two decades.
Outgoing US Secretary of State Mike Pompeo is set to become the first in such a position to visit an illegal Israeli settlement, a move that Palestinian Prime Minister Mohammed Shtayyeh has warned would set a “dangerous precedent” which would “legitimise the settlements” and violate international law.
In response to the alleged firing of two rockets from the Gaza Strip into southern Israel, the Israeli military confirmed that it struck numerous positions in Gaza. The air raids were aimed at apparent underground infrastructure and military posts in the area. No casualties on either side have yet been reported.
In the wake of Israel announcing its intention to construct more illegal settlements in occupied East Jerusalem, the UN’s Middle East envoy Nickolay Mladenov expressed concern that the plans will represent another block to a two-state solution.
The Palestinian Authority (PA) announces that it will resume coordination efforts with Israel, including cooperation on issues relating to security, following the receipt of written assurances that Israel will commit to past agreements. The PA had announced the end of all cooperation in May, in response to the declaration by Israel that they would annex large swathes of the West Bank.
Following the normalisation of relations between the two countries in September, Bahrain sends its first government delegation to Israel, headed by its Foreign Minister Abdullatif Al Zayani. Pictures of the event are displayed on Israeli television, as the Bahraini delegation was welcomed into Tel Aviv.
Israeli jets launch a barrage of missiles at Gaza, causing limited property damage. The strikes were aimed at Gaza City, Khan Younis and Rafah. No casualties were reported.
Gaza City’s medical system faces potential imminent collapse, according to health officials in the besieged strip. A sudden increase in infections threatens disastrous implications for one of the most crowded places on the planet.
Israeli soldiers attempt to remove a Palestinian man from a Red Crescent ambulance in the occupied West Bank. The man had been injured during protests against the Israeli policy of demolishing homes in the Jordan Valley region. Soldiers pushed an aid worker in an attempt to detain the wounded man.
A Palestinian man is shot dead by Israeli forces outside of Jerusalem after allegedly ramming border police with his car. Police reports claim that he attempted to flee a checkpoint inspection and hit an officer in the process, lightly injuring them.
Meanwhile, the UN Relief and Works Agency for Palestine Refugees announces that it has run out of money for the remainder of the year, after receiving its lowest level of contributions in 8 years.
Saudi Arabia agrees to allow Israeli airliners to use its airspace when travelling to the UAE, following talks with US senior adviser Jared Kushner. The announcement is the latest move in a general rapprochement between the two countries, as Saudi Arabia moves gradually towards the normalisation of relations with Israel.
A year and a quarter after being arrested on charges of being a member of the Democratic Progressive Student Pole – a group banned by the Israeli military – the 22-year-old Palestinian student Mays Abu Ghosh is released from Damon Prison. Abu Ghosh was also fined 2,000 shekels.
Following the renewal of cooperation between the Palestinian Authority (PA) and Israeli officials in recent weeks, the Israeli government releases over $1bn in withheld tax funds to the PA. The funds represent over half of the PA’s budget.
Meanwhile, Israeli legislators vote through a draft bill to dissolve Parliament, tabled by Benjamin Netanyahu’s coalition partner Benny Gantz. If the final proposals are approved next week, Israel will face a fresh round of elections in the new year.
Bahrain has revealed that laws governing the goods imported from Israel will not distinguish between those products made within Israel and those originating from illegal settlements in occupied Palestinian or Syrian territories. Palestinian spokespersons have condemned the decision.
Following a visit to Jordan last week by the head of the Palestinian Authority (PA) Mahmoud Abbas, Jordanian Foreign Minister Ayman Safadi meets with Israeli Foreign Minister Gabi Ashkenazi to discuss the near-century-old conflict between Israel and the Palestinians. The Jordanian official stressed the importance of a two-state solution.
Saudi Arabia’s Foreign Minister Prince Faisal bin Farhan also stressed the importance of the establishment of a Palestinian state prior to any normalisation of ties with Israel. His comments put paid to rumours that the Gulf state may be on the brink of normalising ties with Israel.
Meanwhile, during a protest in the occupied West Bank, thirteen-year-old Palestinian Ali Ayman Nasr Abou Aliya is shot by Israeli military personnel and later dies in hospital. An Israeli army spokesman denies that live ammunition was used.
Bahrain backtracks on its earlier stated intention to allow the importation of goods produced in illegal Israeli settlement territories. Bahrain’s Industry, Commerce and Tourism Minister had earlier voiced an openness to such imports, but the government claims his comments had been misinterpreted.
Health officials in the besieged Gaza Strip have announced that they are no longer able to carry out coronavirus tests due to a lack of testing kits. International organisations have been called upon to send urgent aid to the blockaded territory.
A leading member of the executive committee of the Palestine Liberation Organization (PLO), Hanan Ashrawi, has announced her resignation. She is apparently stepping down in protest against the decision of the Palestinian Authority (PA) to resume coordination with Israel.
An Israeli soldier who admitted to killing 22-year-old Ahmad Manasra and shooting another man in the West Bank in March 2019 has been granted his plea bargain by a military tribunal. The contentious agreement sees the soldier, who claimed to have mistaken the victims for attackers, given three months of military labour.
Meanwhile, a US-brokered agreement sees Israel and Morocco normalise ties. In exchange for Morocco agreeing to normalise relations with Israel, the US agrees to recognise Morocco's claims over the disputed Western Sahara territory.
Bhutan becomes the latest country to establish full diplomatic relations with Israel. This agreement follows years of secret contacts between the two states and is apparently unrelated to the recent spree of agreements with nations in the Middle East.
Following a two-year hiatus, Turkey has returned an ambassador to Israel. Turkey had previously withdrawn its Israel envoy following deadly attacks in Gaza in May 2018. Ufuk Ulutas will adopt the position.
Meanwhile, Hamas announces its willingness to resume talks with the Palestinian Authority (PA) over attempts to heal intra-Palestinian tensions. Announcements over the weekend from leading member Hossam Badran spoke of a need to “restore unity” and rebuild “national institutions”.
Following the normalisation of ties between Morocco and Israel, both Qatar and Tunisia make statements indicating that they will not be the next in line to abandon support for the Palestinian cause. During a meeting with Palestinian Authority (PA) President Mahmoud Abbas in Doha, Qatar's Emir emphasised his country's continued support for "the Palestinian people and their just cause". Meanwhile, Tunisian Prime Minister Hichem Mechichi announces that normalisation with Israel is "not on the agenda" for his country.
Israeli Prime Minister Benjamin Netanyahu secures some eight million coronavirus vaccine doses from the pharmaceutical company Pfizer, enough to cover roughly half of the country's 9 million population with the required two doses. Palestinians will not be offered the jabs.
Mahmoud Omar Kameel, a seventeen-year-old Palestinian boy, is shot dead by Israeli forces after he allegedly opened fire on them near the Lion's Gate entrance to the Old City. Witnesses say the boy was shot multiple times by security forces.
Meanwhile, the US continues its push to secure the normalisation of ties between Israel and Muslim states, offering Indonesia the potential to unlock billions of dollars of US finance should it agree to cordial ties with the country.
Prime Minister Benjamin Netanyahu’s failure to pass a budget in parliament will trigger Israel’s fourth election in two years. The fragile coalition between election rivals Benny Gantz and Netanyahu has been verging on collapse for weeks and has finally led to a fourth successive poll amid public anger over the government’s handling of the coronavirus pandemic and high levels of unemployment.
The UN General Assembly (UNGA) passed more resolutions condemning actions taken by Israel in 2020 than against any other nation. Seventeen resolutions have been passed against the country this year, nearly three times as many as were passed against all other nations in the world combined.
Air raids carried out by the Israeli military in Gaza have left two people injured and caused damage to infrastructure, according to reports from Palestinian media. A series of missiles were apparently fired in response to an alleged rocket attack launched from the strip.
According to the Palestinian Prisoner Society (PPS), Israeli authorities have taken the decision to shut down the Ramon prison due to an outbreak of Covid-19 infections among its inmates. The facility, north of Jerusalem, holds some 360 Palestinian detainees.
A Palestinian man has been left quadriplegic after being shot through the neck by Israeli army forces in the West Bank. Haroun Rasmi Abu Aram, just 24 years of age, was allegedly trying to prevent the troops from stealing his electric generator when they shot him.
Sudan signs the "Abraham Accords" with the USA, which offers to release $1bn of annual funding to the North African country in exchange for the normalisation of relations with Israel.
https://theowp.org/crisis_index/israel-palestine-conflict-2/ | 21
16 |
Bilingualism is the ability to speak two languages competently.
- 1 Cognitive ability
- 2 Receptive bilingualism
- 3 Personality
- 4 Learning language
- 5 Neuroscience
- 6 Multilingualism within communities
- 7 Multilingualism between different language speakers
- 8 Multilingualism at the linguistic level
- 9 References
- 10 See also
Cognitive ability
- Main article: Cognitive advantages to bilingualism
People who are highly proficient in two or more languages are reported to have enhanced executive function and to be better at some aspects of language learning than monolinguals. Research indicates that a multilingual brain is nimbler and quicker, better able to deal with ambiguities and resolve conflicts, and able to resist Alzheimer's disease and other forms of dementia for longer.
There is also a phenomenon known as distractive bilingualism or semilingualism. When acquisition of the first language is interrupted and insufficient or unstructured language input follows from the second language, as sometimes happens with immigrant children, the speaker can end up with two languages both mastered below the monolingual standard. Literacy plays an important role in the development of language in these immigrant children. Those who were literate in their first language before arriving, and who have support to maintain that literacy, are at the very least able to maintain and master their first language.
There is, of course, a difference between those who learn a language in a classroom environment and those who learn through total immersion, usually while living in a country where the target language is spoken exclusively.
Without the possibility of actively translating – owing to a complete lack of opportunities to communicate in the first language – comparison between the languages is reduced. The new language is learned almost independently, as a mother tongue is for a child, with direct concept-to-language mapping that can become more natural than word structures learned as a school subject. Added to this, the uninterrupted, immediate and exclusive practice of the new language reinforces and deepens the knowledge attained.
Receptive bilingualism
- Main article: Passive speakers (language)
Receptive bilinguals are those who have the ability to understand a second language but who cannot speak it or whose abilities to speak it are inhibited by psychological barriers. Receptive bilingualism is frequently encountered among adult immigrants to the U.S. who do not speak English as a native language but who have children who do speak English natively, usually in part because those children's education has been conducted in English: While the immigrant parents can understand both their native language and English, they speak only their native language to their children. If their children are likewise receptively bilingual but productively English-monolingual, throughout the conversation the parents will speak their native language and the children will speak English. If their children are productively bilingual, however, those children may answer in the parents' native language, in English, or in a combination of both languages, varying their choice of language depending on factors such as the communication's content, context, and/or emotional intensity and the presence or absence of third-party speakers of one language or the other. The third alternative represents the phenomenon of "code-switching" (also styled "code switching"), in which the productively bilingual party to a communication switches languages in the course of that communication. Receptively bilingual persons, especially children, may rapidly achieve oral fluency by spending extended time in situations where they are required to speak the language that they theretofore understood only passively. Until both generations achieve oral fluency, not all definitions of bilingualism accurately characterize the family as a whole, but the linguistic differences between the family's generations often constitute little or no impairment to the family's functionality.
Receptive bilingualism in one language as exhibited by a speaker of another language, or even as exhibited by most speakers of that language, is not the same as mutual intelligibility of languages: The latter is a property of a pair of languages, namely a consequence of objectively high lexical and grammatical similarities between the languages themselves (e.g., Iberian Spanish and Iberian Portuguese), whereas the former is a property of one or more persons and is determined by subjective or intersubjective factors such as the respective languages' prevalence in the life history (including family upbringing, educational setting, and ambient culture) of the individual person or persons in question.
Personality
Some bilinguals feel that their personality changes depending on which language they are speaking; thus multilingualism is said to create multiple personalities. Xiao-lei Wang states in her book Growing up with Three Languages: Birth to Eleven: “Languages used by speakers with one or more than one language are used not just to represent a unitary self, but to enact different kinds of selves, and different linguistic contexts create different kinds of self-expression and experiences for the same person.” However, there has been little rigorous research done on this topic and it is difficult to define “personality” in this context. Francois Grosjean writes: “What is seen as a change in personality is most probably simply a shift in attitudes and behaviors that correspond to a shift in situation or context, independent of language.”
Learning language
One view is that of the linguist Noam Chomsky, in what he calls the human 'language acquisition device' – a mechanism which enables an individual to recreate correctly the rules (grammar) and certain other characteristics of language used by speakers around the learner. This device, according to Chomsky, wears out over time and is not normally available by puberty, which he uses to explain the poor results some adolescents and adults have when learning aspects of a second language (L2).
If language learning is a cognitive process, rather than a language acquisition device, as the school led by Stephen Krashen suggests, there would only be relative, not categorical, differences between the two types of language learning.
Rod Ellis quotes research finding that the earlier children learn a second language, the better off they are in terms of pronunciation. See Critical period hypothesis. European schools generally offer secondary language classes early on, owing to the interconnectedness with neighbouring countries speaking different languages. Most European students now study at least two foreign languages, a process strongly encouraged by the European Union.
Based on the research in Ann Fathman's The Relationship between age and second language productive ability, there is a difference in the rate of learning of English morphology, syntax and phonology based upon age, but the order of acquisition in second language learning does not change with age.
People from multilingual backgrounds find that their native language influences their second language at any age.
In second-language classes, students commonly face difficulties in thinking in the target language because they are influenced by their native language and culture patterns. Robert B. Kaplan argues that in second-language classes the foreign-student paper is often out of focus because the foreign student is employing a rhetoric and a sequence of thought which violate the expectations of the native reader. Foreign students who have mastered syntactic structures may still demonstrate an inability to compose adequate themes, term papers, theses, and dissertations. Kaplan describes two key notions that affect people when they learn a second language. Logic in the popular, rather than the logician's, sense of the word, which is the basis of rhetoric, evolves out of a culture; it is not universal. Rhetoric, then, is not universal either, but varies from culture to culture and even from time to time within a given culture. Language teachers know how to predict the differences between pronunciations or constructions in different languages, but they might be less clear about the differences in rhetoric, that is, in the way language is used to accomplish various purposes, particularly in writing.
Neuroscience
- Main article: Neuroscience of multilingualism
Various aspects of multilingualism have been studied in the field of neuroscience. These include the representation of different language systems in the brain, the effects of multilingualism on the brain's structural plasticity, aphasia in multilingual individuals, and bimodal bilingualism (in people who use one sign language and one oral language). Neuroscientific studies of multilingualism are carried out with functional neuroimaging, electrophysiology, and observation of people who have suffered brain damage.
Centralization of language areas in the brain
Language acquisition in multilingual individuals is contingent on two factors: the age at which each language is acquired and proficiency. Specialization is centered in the perisylvian cortex of the left hemisphere, though various regions of both the right and left hemispheres activate during language production. Multilingual individuals consistently demonstrate similar activation patterns in the brain when using any of the two or more languages they fluently know. The age of acquisition of the second or later language, and the proficiency of use, determine which specific brain regions and pathways activate when using (thinking in or speaking) the language. In contrast to those who acquired their languages at different points in their lives, those who acquire multiple languages when young, and at virtually the same time, show similar activations in parts of Broca's area and the left inferior frontal lobe. If the second or later language is acquired later in life, specifically after the critical period, it becomes centralized in a different part of Broca's area than the native language and other languages learned when young.
Brain plasticity in multilingualism
A greater density of grey matter in the inferior parietal cortex is present in multilingual individuals. It has been found that multilingualism affects the structure, and essentially, the cytoarchitecture of the brain. Learning multiple languages re-structures the brain and some researchers argue that it increases the brain’s capacity for plasticity. Most of these differences in brain structures in multilinguals may be genetic at the core. Consensus is still muddled; it may be a mixture of both—experiential (acquiring languages during life) and genetic (predisposition to brain plasticity).
Aphasia in multilingualism
An abundance of insight about language storage in the brain comes from studying bilingual and multilingual individuals afflicted with a form of aphasia. The symptoms and severity of aphasia in bilinguals and multilinguals depend on how many languages the individual knows, the order in which they are stored in the brain, how frequently each is used, and how proficient the individual is in using them. Two primary theoretical approaches to studying and viewing bilingual and multilingual aphasics exist: the localizationalist approach and the dynamic approach. The localizationalist approach views different languages as stored in different regions of the brain, which would explain why bilingual or multilingual aphasics may lose one language they know but not the other(s). The dynamic approach suggests that the language system is supervised by a dynamic equilibrium between the existing language capabilities and the constant alteration and adaptation to the communicative requirements of the environment. It views the representation and control aspects of the language system as compromised as a result of damage to the brain's language regions, and it offers a satisfactory explanation for the varying recovery times of the languages the aphasic has had impaired or lost because of the brain damage. Recovery of languages varies across aphasic patients. Some may recover all lost or impaired languages simultaneously. For some, one language is recovered before the others. In others, an involuntary mix of languages occurs in the recovery process; the aphasic intermixes words from the various languages he or she knows when speaking.
PET scan studies on bimodal individuals
Neuroscientific research on bimodal individuals – those who use one oral language and one sign language – has also been carried out. PET scans from these studies show that there is a separate region in the brain for working memory related to sign language production and use. These studies also find that bimodal individuals use different areas of the right hemisphere depending on whether they are speaking a verbal language or gesticulating in a sign language. Studies with bimodal bilinguals have also provided insight into the tip-of-the-tongue phenomenon and into patterns of neural activity when recognizing facial expressions.
The executive control system's role in preventing cross-talk
There are sophisticated mechanisms to prevent cross-talk in brains where more than one language is stored. The executive control system may be implicated in preventing one language from interfering with another in multilinguals. The executive control system is responsible for processes that are sometimes referred to as executive functions, and includes, among others, the supervisory attentional system, or cognitive control. Although most research on the executive control system concerns nonverbal tasks, there is some evidence that the system might be involved in resolving and ordering the conflict generated by the competing languages stored in the multilingual's brain. During speech production there is a constant need to channel attention to the appropriate word associated with the concept, congruent with the language being used. The word must be placed in the appropriate phonological and morphological context. Multilinguals constantly utilize the general executive control system to resolve interference and conflicts among the known languages, enhancing the system's functional performance, even on nonverbal tasks. In studies, multilingual subjects of all ages showed overall enhanced executive control abilities, which may indicate that the multilingual experience leads to a transfer of skill from the verbal to the nonverbal domain. As far as studies reveal, there is no one specific domain of language modulation in the general executive control system. Studies show that the speed with which bilingual subjects perform tasks, both with and without the mediation required to resolve language-use conflict, is better than that of monolingual subjects.
Health benefits of multilingualism and bilingualism
Researcher Ellen Bialystok examined the effect of multilingualism on Alzheimer's disease and found that it delays its onset by about four years. The study found that those who spoke more than two languages developed Alzheimer's disease later than speakers of a single language, and that the more languages the multilingual knows, the later the onset of the disease. Both bilingualism and multilingualism aid in the building up of cognitive reserves in the brain; these cognitive reserves force the brain to work harder and, in themselves, restructure the brain. Multilingualism and bilingualism lead to greater efficiency of use in the brain, and organize the brain to be more efficient and conservative in using energy. More research is required to determine whether learning another language later in life has the same protective effects; nonetheless, it is evident from the variety of studies performed on the effects of multilingualism and bilingualism on the brain that learning and knowing multiple languages sets the stage for a cognitively healthy life.
Multilingualism within communities
- Further information: List of multilingual countries and regions
Widespread multilingualism is one form of language contact. Multilingualism was more common in the past than is usually supposed: in early times, when most people were members of small language communities, it was necessary to know two or more languages for trade or any other dealings outside one's own town or village, and this holds good today in places of high linguistic diversity such as Sub-Saharan Africa and India. Linguist Ekkehard Wolff estimates that 50% of the population of Africa is multilingual.
In multilingual societies, not all speakers need to be multilingual. Some states can have multilingual policies and recognise several official languages, such as Canada (English and French). In some states, particular languages may be associated with particular regions in the state (e.g., Canada) or with particular ethnicities (Malaysia/Singapore). When all speakers are multilingual, linguists classify the community according to the functional distribution of the languages involved:
- diglossia: if there is a structural functional distribution of the languages involved, the society is termed 'diglossic'. Typical diglossic areas are those areas in Europe where a regional language is used in informal, usually oral, contexts, while the state language is used in more formal situations. Frisia (with Frisian and German or Dutch) and Lusatia (with Sorbian and German) are well-known examples. Some writers limit diglossia to situations where the languages are closely related and could be considered dialects of each other. This can also be observed in Scotland, where English is used in formal situations, while in informal situations in many areas Scots is the preferred language. A similar phenomenon is observed in Arabic-speaking regions: the effects of diglossia can be seen in the difference between Written Arabic (Modern Standard Arabic) and Colloquial Arabic, though over time a form of the language somewhere between the two, sometimes called Middle Arabic or Common Arabic, has developed. Because of this diversification of the language, the concept of spectroglossia has been suggested.
- ambilingualism: a region is called ambilingual if this functional distribution is not observed. In a typical ambilingual area it is nearly impossible to predict which language will be used in a given setting. True ambilingualism is rare. Ambilingual tendencies can be found in small states with multiple heritages, like Luxembourg, which has a combined Franco-Germanic heritage, or Malaysia and Singapore, which fuse Malay, Chinese, and Indian cultures. Ambilingualism can also manifest in specific regions of larger states that have both a clearly dominant state language (be it de jure or de facto) and a protected minority language that is limited in terms of distribution of speakers within the country. This tendency is especially pronounced when, even though the local language is widely spoken, there is a reasonable assumption that all citizens speak the predominant state tongue (e.g., English in Quebec vs. Canada; Spanish in Catalonia vs. Spain). This phenomenon can also occur in border regions with many cross-border contacts.
- bipart-lingualism: if more than one language can be heard in a small area, but the large majority of speakers are monolinguals, who have little contact with speakers from neighbouring ethnic groups, an area is called 'bipart-lingual'. An example of this is the Balkans.
N.B. the terms given above all refer to situations describing only two languages. In cases of an unspecified number of languages, the terms polyglossia, omnilingualism, and multipart-lingualism are more appropriate.
Multilingualism between different language speakers
Whenever two people meet, negotiations take place. If they want to express solidarity and sympathy, they tend to seek common features in their behavior. If speakers wish to express distance towards or even dislike of the person they are speaking to, the reverse is true, and differences are sought. This mechanism also extends to language, as described in the Communication Accommodation Theory.
Some multilinguals use code-switching, a term that describes the process of 'swapping' between languages. In many cases, code-switching is motivated by the wish to express loyalty to more than one cultural group, as holds for many immigrant communities in the New World. Code-switching may also function as a strategy where proficiency is lacking. Such strategies are common if the vocabulary of one of the languages is not very elaborated for certain fields, or if the speakers have not developed proficiency in certain lexical domains, as in the case of immigrant languages.
This code-switching appears in many forms. If a speaker has a positive attitude towards both languages and towards code-switching, many switches can be found, even within the same sentence. If, however, the speaker is reluctant to use code-switching, as in the case of a lack of proficiency, he might knowingly or unknowingly try to camouflage his attempt by converting elements of one language into elements of the other language through calquing. This results in speakers using words like courrier noir (literally mail that is black) in French, instead of the proper word for blackmail, chantage.
Sometimes a pidgin language may develop. A pidgin language is basically a fusion of two languages which is mutually understandable for both groups of speakers. Some pidgin languages develop into real languages (such as Papiamento on Curaçao) while others remain as slangs or jargons (such as Helsinki slang, which is more or less mutually intelligible in both Finnish and Swedish). In other cases, prolonged influence of languages on each other may have the effect of changing one or both to the point where it may be considered that a new language is born. For example, many linguists believe that the Occitan language and the Catalan language were formed because a population speaking a single Occitano-Romance language was divided into political spheres of influence of France and Spain, respectively. Yiddish is a complex blend of Middle High German with Hebrew and borrowings from Slavic languages.
Bilingual interaction can even take place without the speakers switching. In certain areas, it is not uncommon for speakers each to use a different language within the same conversation. This phenomenon is found, amongst other places, in Scandinavia. Most speakers of Swedish and Norwegian, and of Norwegian and Danish, can communicate with each other speaking their respective languages, while few can speak both (people used to these situations often adjust their language, avoiding words that are not found in the other language or that can be misunderstood). Using different languages is usually called non-convergent discourse, a term introduced by the Dutch linguist Reitze Jonkman. To a certain extent this situation also exists between Dutch and Afrikaans, although everyday contact is fairly rare because of the distance between the two respective communities. The phenomenon is also found in Argentina, where Spanish and Italian are both widely spoken, even leading to cases where a child with a Spanish and an Italian parent grows up fully bilingual, with both parents speaking only their own language yet knowing the other. Another example is the former state of Czechoslovakia, where two languages (Czech and Slovak) were in common use. Most Czechs and Slovaks understand both languages, although they would use only one of them (their respective mother tongue) when speaking. For example, in Czechoslovakia it was common to hear two people talking on television each speaking a different language without any difficulty understanding each other. This bilinguality still exists, although it has begun to deteriorate since Czechoslovakia split up.
Multilingualism at the linguistic level
Models for native language literacy programs
Sociopolitical as well as socio-cultural identity arguments may influence native language literacy. While these two camps may occupy much of the debate about which languages children will learn to read, a greater emphasis on the linguistic aspects of the argument is appropriate. In spite of the political turmoil precipitated by this debate, researchers continue to espouse a linguistic basis for it. This rationale is based upon the work of Jim Cummins (1983).
Sequential model
- Main article: Sequential bilingualism
In this model, learners receive literacy instruction in their native language until they acquire a "threshold" literacy proficiency. Some researchers use age 3 as the age when a child has basic communicative competence in L1 (Kessler, 1984). Children may go through a process of sequential acquisition if they migrate at a young age to a country where a different language is spoken, or if the child exclusively speaks his or her heritage language at home until he/she is immersed in a school setting where instruction is offered in a different language.
The phases children go through during sequential acquisition are less linear than for simultaneous acquisition and can vary greatly among children. Sequential acquisition is a more complex and lengthier process, although there is no indication that non language-delayed children end up less proficient than simultaneous bilinguals, so long as they receive adequate input in both languages.
Bilingual model
- Main article: Simultaneous bilingualism
In this model, the native language and the community language are simultaneously taught. The advantage is literacy in two languages as the outcome. However, the teacher must be well-versed in both languages and also in techniques for teaching a second language.
Coordinate model
This model posits that equal time should be spent in separate instruction of the native language and of the community language. The native language class, however, focuses on basic literacy while the community language class focuses on listening and speaking skills. Being bilingual does not necessarily mean that one can speak, for example, English and French.
Outcomes
Cummins' research concluded that the development of competence in the native language serves as a foundation of proficiency that can be transposed to the second language — the common underlying proficiency hypothesis. His work sought to overcome the perception propagated in the 1960s that learning two languages made for two competing aims. The belief was that the two languages were mutually exclusive and that learning a second required unlearning elements and dynamics of the first in order to accommodate the second (Hakuta, 1990). The evidence for this perspective relied on the fact that some errors in acquiring the second language were related to the rules of the first language (Hakuta, 1990). How this hypothesis holds under different types of languages such as Romance versus non-Western languages has yet to undergo research.
Another new development that has influenced the linguistic argument for bilingual literacy is the length of time necessary to acquire the second language. While previously children were believed to have the ability to learn a language within a year, today researchers believe that within and across academic settings, the time span is nearer to five years (Collier, 1992; Ramirez, 1992).
However, an interesting outcome of studies during the early 1990s confirmed that students who successfully complete bilingual instruction perform better academically (Collier, 1992; Ramirez, 1992). These students exhibit more cognitive elasticity, including a better ability to analyse abstract visual patterns. Students who receive bidirectional bilingual instruction, where equal proficiency in both languages is required, perform at an even higher level. Examples of such programs include international and multi-national education schools.
References
- Bialystok E, Martin MM (2004). Attention and inhibition in bilingual children: evidence from the dimensional change card sort task. Dev Sci 7 (3): 325–39.
- Bialystok E, Craik FIM, Grady C, Chau W, Ishii R, Gunji A, Pantev C (2005). Effect of bilingualism on cognitive control in the Simon task: evidence from MEG. NeuroImage 24 (1): 40–49.
- Kaushanskaya M, and Marian V (2009). The bilingual advantage in novel word learning. Psychonomic Bulletin & Review 16 (4): 705–710.
- Kluger, Jeffrey. How the Brain Benefits from Being Bilingual.
- Ethnologue report for language code: spa. Ethnologue.com. URL accessed on 2010-07-10.
- Tokuhama-Espinosa, T. (2003). The multilingual mind: Issues discussed by, for, and about people living with many languages, Westport, Connecticut: Praeger Publishers.
- Wang, X. (2008). Growing up with three languages: Birth to eleven, Bristol, United Kingdom: Multilingual Matters.
- Grosjean, François (1996). Living with two languages and two cultures. In I. Parasnis (Ed.), Cultural and Language Diversity and the Deaf Experience. Cambridge University Press.
- Santrock, John W. (2008). Bilingualism and Second-Language Learning. A Topical Approach to Life-Span Development (4th ed.) (pp. 330-335). New York, NY: McGraw-Hill Companies, Inc.
- EurActiv: Most EU students learn two foreign languages: Study, 28 September 2009, retrieved November 2011
- Fathman, Ann. The Relationship between age and second language productive ability. 27 October 2006
- Kaplan, Robert B. "Cultural thought patterns in inter-cultural education." Language Learning 16.1-2 (1966): 1–20. Wiley Online Library. Web. 9 November 2010.
- Gadda, George. Writing and Language Socialization Across Cultures: Some Implications for the classroom. Addison Wesley LongMan. Print.
- Collier, Virginia (1988). The Effect of Age on Acquisition of a Second Language for School. The National Clearinghouse for Bilingual Education 2.
- Dehaene, S. (1999). "Fitting two languages into one brain." Brain: A Journal of Neurology. doi:10.1093/brain/122.12.2207.
- Abutalebi, J., Cappa, S. F., Perani, D. (2001). The bilingual brain as revealed by functional neuroimaging. Bilingualism: Language and Cognition, 4, 179-190.
- Hyashizaki, Y. (2004). Structural plasticity in the bilingual brain. Nature, 431, 757.
- Poline, J. B., et al. (1996). NeuroImage, 4, 34–54.
- Warburton, E. A., et al. (1996). Brain, 119, 159–179.
- Connor L.T., Obler L.K., Tocco M., Fitzpatrick P.M., Albert M.L. (2001). Effect of socioeconomic status on aphasia severity and recovery. Brain & Language, 78(2), 254–257.
- (1978) The bilingual brain: Neuropsychological and neurolinguistic aspects of bilingualism, London: Academic Press.
- De Bot, Kess, Lowie, Verspoor (2007). A Dynamic System Theory Approach to second language acquisition. Bilingualism:Language and Cognition 10: 7–21.
- Wanner, Anja Review: Applied Linguistics; Language Acquisition: Verspoor et al. (2011). URL accessed on 13 November 2012.
- (2007). Bilingual language production: The neurocognition of language representation and control. Journal of Neurolinguistics 20 (3): 242–275.
- (2008). Understanding the link between bilingual aphasia and language control. Journal of Neurolinguistics 21 (6): 558–576.
- Paradis, M. (1998). Language and communication in multilinguals. In B. Stemmer & H. Whitaker (Eds.), Handbook of neurolinguistics (pp. 417–430). San Diego, CA: Academic Press.
- Ronnberg, J., Rudner, M., & Ingvar, M. (2004). Neural correlates of working memory for sign language. Cognitive Brain Research, 20, 165-182.
- Pyers, J.E., Gollan, T.H., Emmorey, K. (2009). Bimodal bilinguals reveal the source of tip-of-the-tongue states. Cognition, 112, 323-329.
- Emmorey, K., & McCullough, S. (2009). The bimodal bilingual brain: Effects of sign language experience. Brain & Language, 109, 124-132.
- Bialystok, E. (2011). "Reshaping the Mind: The Benefits of Bilingualism". Canadian Journal of Experimental Psychology 65 (4): 229–235.
- Costa, A. "Executive control in Bilingual contexts." Brainglot. http://brainglot.upf.edu/index.php?option=com_content&task=view&id=86.
- Peterson, R. (2011). "Benefits of Being Bilingual".
- Wolff, Ekkehard (2000). Language and Society. In: Bernd Heine and Derek Nurse (Eds.) African Languages - An Introduction, 317. Cambridge University Press.
- Bakalla, M. H. (1984). Arabic Culture through its Language and Literature. Kegan Paul International, London.
- Poplack, Shana (1980). "Sometimes I'll start a sentence in Spanish y termino en español": toward a typology of code-switching. Linguistics 18 (7/8): 581–618.
- One Language or Two: Answers to Questions about Bilingualism in Language-Delayed Children
See also
- Bilingual lexicography
- Code switching
- Cross cultural communication
- English as a second language
- Foreign language learning
- Language Education
- Language proficiency
- Multicultural education
- Multilingual Education
- Native language
- Second language acquisition
|This page uses Creative Commons Licensed content from Wikipedia (view authors).| | https://psychology.wikia.org/wiki/Bilingualism | 21 |
43 | Production, Costs, and Industry Structure
By the end of this section, you will be able to:
- Understand the relationship between production and costs
- Understand that every factor of production has a corresponding factor price
- Analyze short-run costs in terms of total cost, fixed cost, variable cost, marginal cost, and average cost
- Calculate average profit
- Evaluate patterns of costs to determine potential profit
We’ve explained that a firm’s total costs depend on the quantities of inputs the firm uses to produce its output and the cost of those inputs to the firm. The firm’s production function tells us how much output the firm will produce with given amounts of inputs. However, if we think about that backwards, it tells us how many inputs the firm needs to produce a given quantity of output, which is the first thing we need to determine total cost. Let’s move to the second factor we need to determine.
For every factor of production (or input), there is an associated factor payment. Factor payments are what the firm pays for the use of the factors of production. From the firm’s perspective, factor payments are costs. From the owner of each factor’s perspective, factor payments are income. Factor payments include:
- Raw materials prices for raw materials
- Rent for land or buildings
- Wages and salaries for labor
- Interest and dividends for the use of financial capital (loans and equity investments)
- Profit for entrepreneurship. Profit is the residual, what’s left over from revenues after the firm pays all the other costs. While it may seem odd to treat profit as a “cost”, it is what entrepreneurs earn for taking the risk of starting a business. You can see this correspondence between factors of production and factor payments in the inside loop of the circular flow diagram in (Figure).
We now have all the information necessary to determine a firm’s costs.
A cost function is a mathematical expression or equation that shows the cost of producing different levels of output.
What we observe is that the cost increases as the firm produces higher quantities of output. This is pretty intuitive, since producing more output requires greater quantities of inputs, which cost more dollars to acquire.
What is the origin of these cost figures? They come from the production function and the factor payments. The discussion of costs in the short run above, Costs in the Short Run, was based on the following production function, which is similar to (Figure) except for “widgets” instead of trees.
We can use the information from the production function to determine production costs. What we need to know is how many workers are required to produce any quantity of output. If we flip the order of the rows, we “invert” the production function so it shows [latex]L=f\left(Q\right)[/latex].
Now focus on the whole number quantities of output. We’ll eliminate the fractions from the table:
Suppose widget workers receive $10 per hour. Multiplying the Workers row by $10 (and eliminating the blanks) gives us the cost of producing different levels of output.
|× Wage Rate per hour||$10||$10||$10||$10|
This is the same cost function with which we began! (shown in (Figure))
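As a rough illustration of this derivation (an addition, not part of the original chapter), the short Python sketch below inverts a production function and multiplies the workers needed by the wage. Since the widget table itself is not reproduced in this excerpt, the worker counts are back-calculated from the costs quoted later ($32.50 for one widget, $44 for two) at the stated $10 wage, so treat them as illustrative.

```python
# Hypothetical sketch: derive a cost function from an inverted production
# function L = f(Q), then multiply workers needed by the wage rate.
WAGE = 10.0  # dollars per worker-hour, as stated in the text

# Workers needed for each whole-number quantity of output (illustrative values,
# back-calculated from the $32.50 and $44 cost figures quoted later).
workers_needed = {1: 3.25, 2: 4.40}

def total_cost(quantity: int) -> float:
    """Labor cost of producing `quantity` widgets."""
    return workers_needed[quantity] * WAGE

for q in sorted(workers_needed):
    print(f"Q = {q}: {workers_needed[q]} workers -> ${total_cost(q):.2f}")
# Q = 1: 3.25 workers -> $32.50
# Q = 2: 4.4 workers -> $44.00
```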
Now that we have the basic idea of the cost origins and how they are related to production, let’s drill down into the details.
Average and Marginal Costs
The cost of producing a firm’s output depends on how much labor and physical capital the firm uses. A list of the costs involved in producing cars will look very different from the costs involved in producing computer software or haircuts or fast-food meals.
We can measure costs in a variety of ways. Each way provides its own insight into costs. Sometimes firms need to look at their cost per unit of output, not just their total cost. There are two ways to measure per unit costs. The most intuitive way is average cost. Average cost is the cost on average of producing a given quantity. We define average cost as total cost divided by the quantity of output produced. [latex]AC=TC/Q[/latex]
If producing two widgets costs a total of $44, the average cost per widget is [latex]\$44/2=\$22[/latex] per widget. The other way of measuring cost per unit is marginal cost. If average cost is the cost of the average unit of output produced, marginal cost is the cost of each individual unit produced. More formally, marginal cost is the cost of producing one more unit of output. Mathematically, marginal cost is the change in total cost divided by the change in output: [latex]MC=\Delta TC/\Delta Q[/latex]. If the cost of the first widget is $32.50 and the cost of two widgets is $44, the marginal cost of the second widget is [latex]\$44-\$32.50=\$11.50[/latex]. We can see the Widget Cost table redrawn below with average and marginal cost added.
Note that, since there are no fixed costs in this example, the marginal cost of the first unit of output is the same as the total cost of producing one unit.
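The following short Python sketch (an added illustration, not from the chapter) applies the two per-unit definitions to the widget costs just quoted; with no fixed costs, the cost of zero output is taken to be $0.

```python
# Average cost = TC/Q; marginal cost = change in TC for one more unit.
total_cost = {0: 0.0, 1: 32.50, 2: 44.00}  # quantity -> total cost in dollars

def average_cost(q: int) -> float:
    return total_cost[q] / q

def marginal_cost(q: int) -> float:
    return total_cost[q] - total_cost[q - 1]

print(average_cost(2))   # 22.0 -> $22 per widget on average
print(marginal_cost(2))  # 11.5 -> $11.50 for the second widget
print(marginal_cost(1))  # 32.5 -> equals the total cost of the first unit
```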
Fixed and Variable Costs
We can decompose costs into fixed and variable costs. Fixed costs are the costs of the fixed inputs (e.g. capital). Because fixed inputs do not change in the short run, fixed costs are expenditures that do not change regardless of the level of production. Whether you produce a great deal or a little, the fixed costs are the same. One example is the rent on a factory or a retail space. Once you sign the lease, the rent is the same regardless of how much you produce, at least until the lease expires. Fixed costs can take many other forms: for example, the cost of machinery or equipment to produce the product, research and development costs to develop new products, even an expense like advertising to popularize a brand name. The amount of fixed costs varies according to the specific line of business: for instance, manufacturing computer chips requires an expensive factory, but a local moving and hauling business can get by with almost no fixed costs at all if it rents trucks by the day when needed.
Variable costs are the costs of the variable inputs (e.g. labor). The only way to increase or decrease output is by increasing or decreasing the variable inputs. Therefore, variable costs increase or decrease with output. We treat labor as a variable cost, since producing a greater quantity of a good or service typically requires more workers or more work hours. Variable costs would also include raw materials.
Total costs are the sum of fixed plus variable costs. Let’s look at another example. Consider the barber shop called “The Clip Joint” in (Figure). The data for output and costs are in (Figure). The fixed costs of operating the barber shop, including the space and equipment, are $160 per day. The variable costs are the costs of hiring barbers, which in our example is $80 per barber each day. The first two columns of the table show the quantity of haircuts the barbershop can produce as it hires additional barbers. The third column shows the fixed costs, which do not change regardless of the level of production. The fourth column shows the variable costs at each level of output. We calculate these by taking the amount of labor hired and multiplying by the wage. For example, two barbers cost: 2 × $80 = $160. Adding together the fixed costs in the third column and the variable costs in the fourth column produces the total costs in the fifth column. For example, with two barbers the total cost is: $160 + $160 = $320.
|Labor||Quantity||Fixed Cost||Variable Cost||Total Cost|
At zero production, the fixed costs of $160 are still present. As production increases, we add variable costs to fixed costs, and the total cost is the sum of the two. (Figure) graphically shows the relationship between the quantity of output produced and the cost of producing that output. We always show the fixed costs as the vertical intercept of the total cost curve; that is, they are the costs incurred when output is zero so there are no variable costs.
You can see from the graph that once production starts, total costs and variable costs rise. While variable costs may initially increase at a decreasing rate, at some point they begin increasing at an increasing rate. This is caused by diminishing marginal productivity which we discussed earlier in the Production in the Short Run section of this chapter, which is easiest to see with an example. As the number of barbers increases from zero to one in the table, output increases from 0 to 16 for a marginal gain (or marginal product) of 16. As the number rises from one to two barbers, output increases from 16 to 40, a marginal gain of 24. From that point on, though, the marginal product diminishes as we add each additional barber. For example, as the number of barbers rises from two to three, the marginal product is only 20; and as the number rises from three to four, the marginal product is only 12.
To understand the reason behind this pattern, consider that a one-man barber shop is a very busy operation. The single barber needs to do everything: say hello to people entering, answer the phone, cut hair, sweep, and run the cash register. A second barber reduces the level of disruption from jumping back and forth between these tasks, and allows a greater division of labor and specialization. The result can be increasing marginal productivity. However, as the shop adds other barbers, the advantage of each additional barber is less, since the specialization of labor can only go so far. The addition of a sixth or seventh or eighth barber just to greet people at the door will have less impact than the second one did. This is the pattern of diminishing marginal productivity. As a result, the total costs of production will begin to rise more rapidly as output increases. At some point, you may even see negative returns as the additional barbers begin bumping elbows and getting in each other’s way. In this case, the addition of still more barbers would actually cause output to decrease, as the last row of (Figure) shows.
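A compact way to see both the cost build-up and the diminishing marginal product is the Python sketch below (an added illustration). It uses the Clip Joint figures quoted above: fixed cost of $160 per day, $80 per barber per day, and haircut quantities of 16, 40, 60, and 72 for one to four barbers; the 80-haircut row for five barbers is inferred from the variable-cost figures used later in this section.

```python
# Fixed, variable, and total cost plus marginal product for The Clip Joint.
FIXED_COST = 160.0   # dollars per day
WAGE = 80.0          # dollars per barber per day

output = {0: 0, 1: 16, 2: 40, 3: 60, 4: 72, 5: 80}  # barbers -> haircuts

for barbers, haircuts in output.items():
    variable_cost = barbers * WAGE
    total_cost = FIXED_COST + variable_cost
    marginal_product = None if barbers == 0 else haircuts - output[barbers - 1]
    print(barbers, haircuts, variable_cost, total_cost, marginal_product)
# Two barbers: variable cost $160, total cost $320, marginal product 24;
# the marginal product then falls to 20, 12, and 8 as more barbers are added.
```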
This pattern of diminishing marginal productivity is common in production. As another example, consider the problem of irrigating a crop on a farmer’s field. The plot of land is the fixed factor of production, while the water that the farmer can add to the land is the key variable cost. As the farmer adds water to the land, output increases. However, adding increasingly more water brings smaller increases in output, until at some point the water floods the field and actually reduces output. Diminishing marginal productivity occurs because, with fixed inputs (land in this example), each additional unit of input (e.g. water) contributes less to overall production.
Average Total Cost, Average Variable Cost, Marginal Cost
The breakdown of total costs into fixed and variable costs can provide a basis for other insights as well. The first five columns of (Figure) duplicate the previous table, but the last three columns show average total costs, average variable costs, and marginal costs. These new measures analyze costs on a per-unit (rather than a total) basis and are reflected in the curves in (Figure).
|Labor||Quantity||Fixed Cost||Variable Cost||Total Cost||Marginal Cost||Average Total Cost||Average Variable Cost|
Average total cost (sometimes referred to simply as average cost) is total cost divided by the quantity of output. Since the total cost of producing 40 haircuts is $320, the average total cost for producing each of 40 haircuts is $320/40, or $8 per haircut. Average cost curves are typically U-shaped, as (Figure) shows. Average total cost starts off relatively high, because at low levels of output total costs are dominated by the fixed cost. Mathematically, the denominator is so small that average total cost is large. Average total cost then declines, as the fixed costs are spread over an increasing quantity of output. In the average cost calculation, the rise in the numerator of total costs is relatively small compared to the rise in the denominator of quantity produced. However, as output expands still further, the average cost begins to rise. At the right side of the average cost curve, total costs begin rising more rapidly as diminishing returns come into effect.
We obtain average variable cost when we divide variable cost by quantity of output. For example, the variable cost of producing 80 haircuts is $400, so the average variable cost is $400/80, or $5 per haircut. Note that at any level of output, the average variable cost curve will always lie below the curve for average total cost, as (Figure) shows. The reason is that average total cost includes average variable cost and average fixed cost. Thus, for Q = 80 haircuts, the average total cost is $7 per haircut ($5 of average variable cost plus $160/80 = $2 of average fixed cost), while the average variable cost is $5 per haircut. However, as output grows, fixed costs become relatively less important (since they do not rise with output), so average variable cost sneaks closer to average cost.
Average total and variable costs measure the average costs of producing some quantity of output. Marginal cost is somewhat different. Marginal cost is the additional cost of producing one more unit of output. It is not the cost per unit of all units produced, but only the next one (or next few). We calculate marginal cost by taking the change in total cost and dividing it by the change in quantity. For example, as quantity produced increases from 40 to 60 haircuts, total costs rise by 400 – 320, or 80. Thus, the marginal cost for each of those marginal 20 units will be 80/20, or $4 per haircut. The marginal cost curve is generally upward-sloping, because diminishing marginal returns implies that additional units are more costly to produce. We can see a small range of increasing marginal returns in the figure as a dip in the marginal cost curve before it starts rising. There is a point at which marginal and average costs meet, as the following Clear it Up feature discusses.
The marginal cost line intersects the average cost line exactly at the bottom of the average cost curve—which occurs at a quantity of 72 and cost of $6.60 in (Figure). The reason why the intersection occurs at this point is built into the economic meaning of marginal and average costs. If the marginal cost of production is below the average cost for producing previous units, as it is for the points to the left of where MC crosses ATC, then producing one more additional unit will reduce average costs overall—and the ATC curve will be downward-sloping in this zone. Conversely, if the marginal cost of production for producing an additional unit is above the average cost for producing the earlier units, as it is for points to the right of where MC crosses ATC, then producing a marginal unit will increase average costs overall—and the ATC curve must be upward-sloping in this zone. The point of transition, between where MC is pulling ATC down and where it is pulling it up, must occur at the minimum point of the ATC curve.
This idea of the marginal cost “pulling down” the average cost or “pulling up” the average cost may sound abstract, but think about it in terms of your own grades. If the score on the most recent quiz you take is lower than your average score on previous quizzes, then the marginal quiz pulls down your average. If your score on the most recent quiz is higher than the average on previous quizzes, the marginal quiz pulls up your average. In this same way, low marginal costs of production first pull down average costs and then higher marginal costs pull them up.
The numerical calculations behind average cost, average variable cost, and marginal cost will change from firm to firm. However, the general patterns of these curves, and the relationships and economic intuition behind them, will not change.
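For readers who want to reproduce the per-unit numbers, the Python sketch below (an added illustration using the same Clip Joint figures) computes average total cost, average variable cost, and marginal cost at each output level.

```python
# Per-unit cost measures for The Clip Joint example.
FIXED_COST = 160.0
WAGE = 80.0
output = {1: 16, 2: 40, 3: 60, 4: 72, 5: 80}  # barbers -> haircuts

prev_q, prev_tc = 0, FIXED_COST  # at zero output only the fixed cost is incurred
for barbers, q in output.items():
    vc = barbers * WAGE
    tc = FIXED_COST + vc
    atc = tc / q                        # average total cost
    avc = vc / q                        # average variable cost
    mc = (tc - prev_tc) / (q - prev_q)  # marginal cost per additional haircut
    print(f"Q={q}: ATC={atc:.2f}  AVC={avc:.2f}  MC={mc:.2f}")
    prev_q, prev_tc = q, tc
# Q=40 gives ATC = 8.00 (the $8 per haircut above); Q=80 gives AVC = 5.00;
# moving from 40 to 60 haircuts gives MC = (400 - 320)/20 = 4.00.
```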
Lessons from Alternative Measures of Costs
Breaking down total costs into fixed cost, marginal cost, average total cost, and average variable cost is useful because each statistic offers its own insights for the firm.
Whatever the firm’s quantity of production, total revenue must exceed total costs if it is to earn a profit. As explored in the chapter Choice in a World of Scarcity, fixed costs are often sunk costs that a firm cannot recoup. In thinking about what to do next, typically you should ignore sunk costs, since you have already spent this money and cannot make any changes. However, you can change variable costs, so they convey information about the firm’s ability to cut costs in the present and the extent to which costs will increase if production rises.
Total cost, fixed cost, and variable cost each reflect different aspects of the cost of production over the entire quantity of output produced. We measure these costs in dollars. In contrast, marginal cost, average cost, and average variable cost are costs per unit. In the previous example, we measured them as dollars per haircut. Thus, it would not make sense to put all of these numbers on the same graph, since we measure them in different units ($ versus $ per unit of output).
It would be as if the vertical axis measured two different things. In addition, as a practical matter, if they were on the same graph, the lines for marginal cost, average cost, and average variable cost would appear almost flat against the horizontal axis, compared to the values for total cost, fixed cost, and variable cost. Using the figures from the previous example, the total cost of producing 40 haircuts is $320. However, the average cost is $320/40, or $8. If you graphed both total and average cost on the same axes, the average cost would hardly show.
Average cost tells a firm whether it can earn profits given the current price in the market. If we divide profit by the quantity of output produced we get average profit, also known as the firm’s profit margin. Expanding the equation for profit gives:

[latex]\text{profit}=\text{total revenue}-\text{total cost}=\left(\text{price}\right)\left(\text{quantity}\right)-\left(\text{average cost}\right)\left(\text{quantity}\right)[/latex]

However, note that:

[latex]\text{average profit}=\frac{\text{profit}}{\text{quantity}}=\text{price}-\text{average cost}[/latex]

This is the firm’s profit margin. This definition implies that if the market price is above average cost, average profit, and thus total profit, will be positive. If price is below average cost, then profits will be negative.
We can compare this marginal cost of producing an additional unit with the marginal revenue gained by selling that additional unit to reveal whether the additional unit is adding to total profit—or not. Thus, marginal cost helps producers understand how increasing or decreasing production affects profits.
A Variety of Cost Patterns
The pattern of costs varies among industries and even among firms in the same industry. Some businesses have high fixed costs, but low marginal costs. Consider, for example, an internet company that provides medical advice to customers. Consumers might pay such a company directly, or perhaps hospitals or healthcare practices might subscribe on behalf of their patients. Setting up the website, collecting the information, writing the content, and buying or leasing the computer space to handle the web traffic are all fixed costs that the company must undertake before the site can work. However, when the website is up and running, it can provide a high quantity of service with relatively low variable costs, like the cost of monitoring the system and updating the information. In this case, the total cost curve might start at a high level, because of the high fixed costs, but then might appear close to flat, up to a large quantity of output, reflecting the low variable costs of operation. If the website is popular, however, a large rise in the number of visitors will overwhelm the website, and increasing output further could require a purchase of additional computer space.
For other firms, fixed costs may be relatively low. For example, consider firms that rake leaves in the fall or shovel snow off sidewalks and driveways in the winter. For fixed costs, such firms may need little more than a car to transport workers to homes of customers and some rakes and shovels. Still other firms may find that diminishing marginal returns set in quite sharply. If a manufacturing plant tried to run 24 hours a day, seven days a week, little time remains for routine equipment maintenance, and marginal costs can increase dramatically as the firm struggles to repair and replace overworked equipment.
Every firm can gain insight into its task of earning profits by dividing its total costs into fixed and variable costs, and then using these calculations as a basis for average total cost, average variable cost, and marginal cost. However, making a final decision about the profit-maximizing quantity to produce and the price to charge will require combining these perspectives on cost with an analysis of sales and revenue, which in turn requires looking at the market structure in which the firm finds itself. Before we turn to the analysis of market structure in other chapters, we will analyze the firm’s cost structure from a long-run perspective.
Key Concepts and Summary
For every input (e.g. labor), there is an associated factor payment (e.g. wages and salaries). The cost of production for a given quantity of output is the sum of the amount of each input required to produce that quantity of output times the associated factor payment.
In a short-run perspective, we can divide a firm’s total costs into fixed costs, which a firm must incur before producing any output, and variable costs, which the firm incurs in the act of producing. Fixed costs are sunk costs; that is, because they are in the past and the firm cannot alter them, they should play no role in economic decisions about future production or pricing. Variable costs typically show diminishing marginal returns, so that the marginal cost of producing higher levels of output rises.
We calculate marginal cost by taking the change in total cost (or the change in variable cost, which will be the same thing) and dividing it by the change in output, for each possible change in output. Marginal costs are typically rising. A firm can compare marginal cost to the additional revenue it gains from selling another unit to find out whether its marginal unit is adding to profit.
We calculate average total cost by taking total cost and dividing by total output at each different level of output. Average costs are typically U-shaped on a graph. If a firm’s average cost of production is lower than the market price, a firm will be earning profits.
We calculate average variable cost by taking variable cost and dividing by the total output at each level of output. Average variable costs are typically U-shaped. If a firm’s average variable cost of production is lower than the market price, then the firm would be earning profits if fixed costs are left out of the picture.
The WipeOut Ski Company manufactures skis for beginners. Fixed costs are $30. Fill in (Figure) for total cost, average variable cost, average total cost, and marginal cost.
|Quantity||Variable Cost||Fixed Cost||Total Cost||Average Variable Cost||Average Total Cost||Marginal Cost|
|Quantity||Variable Cost||Fixed Cost||Total Cost||Average Variable Cost||Average Total Cost||Marginal Cost|
Based on your answers to the WipeOut Ski Company in (Figure), now imagine a situation where the firm produces a quantity of 5 units that it sells for a price of $25 each.
- What will be the company’s profits or losses?
- How can you tell at a glance whether the company is making or losing money at this price by looking at average cost?
- At the given quantity and price, is the marginal unit produced adding to profits?
- Total revenues in this example will be a quantity of five units multiplied by the price of $25/unit, which equals $125. Total costs when producing five units are $130. Thus, at this level of quantity and output the firm experiences losses (or negative profits) of $5.
- If price is less than average cost, the firm is not making a profit. At an output of five units, the average cost is $26/unit. Thus, at a glance you can see the firm is making losses. At a second glance, you can see that it must be losing $1 for each unit produced (that is, average cost of $26/unit minus the price of $25/unit). With five units produced, this observation implies total losses of $5.
- When producing five units, marginal costs are $30/unit. Price is $25/unit. Thus, the marginal unit is not adding to profits, but is actually subtracting from profits, which suggests that the firm should reduce its quantity produced (a quick numerical check of these figures follows below).
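A minimal Python check of these answers (added here for illustration; the cost figures are taken from the answer above, since the filled-in table is not shown in this excerpt):

```python
price, quantity = 25.0, 5
total_cost_at_5 = 130.0      # from the answer above
marginal_cost_at_5 = 30.0    # from the answer above

revenue = price * quantity                       # 125.0
profit = revenue - total_cost_at_5               # -5.0 -> a $5 loss
average_total_cost = total_cost_at_5 / quantity  # 26.0 -> $1 above the price
print(profit, average_total_cost, marginal_cost_at_5 > price)
# -5.0 26.0 True -> the marginal unit costs more than it brings in
```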
How do we calculate marginal product?
What shapes would you generally expect a total product curve and a marginal product curve to have?
What are the factor payments for land, labor, and capital?
What is the difference between fixed costs and variable costs?
Critical Thinking Questions
A common name for fixed cost is “overhead.” If you divide fixed cost by the quantity of output produced, you get average fixed cost. Suppose fixed cost is $1,000. What does the average fixed cost curve look like? Use your response to explain what “spreading the overhead” means.
How does fixed cost affect marginal cost? Why is this relationship important?
Average cost curves (except for average fixed cost) tend to be U-shaped, decreasing and then increasing. Marginal cost curves have the same shape, though this may be harder to see since most of the marginal cost curve is increasing. Why do you think that average and marginal cost curves have the same general shape?
Return to (Figure). What is the marginal gain in output from increasing the number of barbers from 4 to 5 and from 5 to 6? Does it continue the pattern of diminishing marginal returns?
Compute the average total cost, average variable cost, and marginal cost of producing 60 and 72 haircuts. Draw the graph of the three curves between 60 and 72 haircuts.
- average profit
- profit divided by the quantity of output produced; also known as profit margin
- average total cost
- total cost divided by the quantity of output
- average variable cost
- variable cost divided by the quantity of output
- fixed cost
- cost of the fixed inputs; expenditure that a firm must make before production starts and that does not change regardless of the production level
- marginal cost
- the additional cost of producing one more unit; mathematically, [latex]MC=\Delta TC/\Delta Q[/latex]
- total cost
- the sum of fixed and variable costs of production
- variable cost
- cost of production that increases with the quantity produced; the cost of the variable inputs | https://opentextbc.ca/principlesofeconomics2eopenstax/chapter/costs-in-the-short-run/ | 21 |
85 | Nominal Interest Rate Definition
In finance and economics, the nominal interest rate refers to the interest rate without any adjustment for inflation. It is basically the rate “as stated” or “as advertised”, which does not take inflation, the compounding effect of interest, tax, or any fees into account.
It is also known as the annualized percentage rate, i.e. the rate at which interest is compounded or calculated once a year.
Mathematically, it can be calculated using the formula below (consistent with the worked example that follows):

Nominal Interest Rate = [(1 + Real Interest Rate) × (1 + Inflation Rate)] – 1

where:
- Real Interest Rate is the interest rate that takes inflation, compounding effect, and other charges into account.
- Inflation is the most important factor that impacts the nominal interest rate. The nominal rate increases with inflation and decreases with deflation (a decrease in the prices of goods and services caused by negative inflation, i.e. inflation below 0%, which usually increases consumer purchasing power).
Nominal Interest Rate Example
Let us assume that the real interest rate of investment is 3% and the inflation rate is 2%. Calculate the Nominal Interest Rate.
Therefore, it can be calculated using the formula below:
Nominal interest rate formula = [(1 + 3%) * (1 + 2%)] – 1
So, the Nominal rate will be –
Nominal rate = 5.06%
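A minimal Python sketch of the same calculation (added for illustration), using the relationship (1 + nominal) = (1 + real) × (1 + inflation):

```python
def nominal_rate(real_rate: float, inflation: float) -> float:
    """Nominal rate implied by a real rate and an inflation rate (as decimals)."""
    return (1 + real_rate) * (1 + inflation) - 1

print(f"{nominal_rate(0.03, 0.02):.2%}")  # 5.06%
```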
- It is widely used in banks to describe interest on various loans.
- It is widely used in the investment field to point investors toward the various investment avenues available in the market.
- For example, a car loan may be available at a 10% interest rate. This face rate of 10% is the nominal rate; it does not take fees or other charges into account.
- A bond offered at 8% carries a coupon rate that does not consider current inflation. This face rate of 8% is the nominal rate.
Calculate Effective Interest Rate from Nominal Rate
The effective interest rate, also called the annual equivalent rate, is the actual rate of interest that a person pays or earns on a financial instrument once the compounding of interest over a given period is considered; it caters to the compounding periods during a loan payment plan. The effective interest rate is calculated as if interest is compounded annually, half-yearly, monthly, or daily, as applicable. The stated or nominal rate, by contrast, is less than the effective interest rate whenever interest is compounded more than once a year; the nominal rate is the interest rate at which interest is calculated only once a year.
The formula for the effective interest rate is:

Effective Interest Rate = (1 + r/m)^m – 1

where:
- r is the nominal rate (as a decimal), and
- m is the number of compounding periods per year.
A company XYZ made an investment of Rs. 250,000 at 12% interest compounded quarterly; calculate the annual effective interest rate.
In this example, the investment carries a nominal rate of 12% compounded quarterly; a short code sketch of this calculation follows the working below.
- r = 0.12
- m= 4
Effective Interest Rate = (1 + r/m)^m – 1
- =(1+0.12/4)^4 – 1
- =12.55 %
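A short Python sketch of the effective-rate formula (added for illustration); the first call reproduces the 12.55% figure above, and the second reproduces the credit-card case discussed later (20% p.a. compounded daily, roughly 22.13%):

```python
def effective_rate(nominal: float, periods_per_year: int) -> float:
    """Effective annual rate for a nominal rate compounded m times per year."""
    return (1 + nominal / periods_per_year) ** periods_per_year - 1

print(f"{effective_rate(0.12, 4):.2%}")    # 12.55%
print(f"{effective_rate(0.20, 365):.2%}")  # 22.13%
```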
- The nominal rate does not consider inflation and hence cannot be treated as a true indicator of the cost of borrowing or investment.
- It is not a lucrative option in this regard as inflation is inevitable.
- Now, we know that the nominal rate does not consider inflation. So, to avoid the erosion of purchasing power through inflation, investors must not rely on the nominal interest rate stated by bankers or others; rather, they must keep the real interest rate in mind to properly value an investment and its return.
- By considering the real interest rate, they will come to know if they are gaining or losing over the time period. It helps an investor to decide whether to choose saving instruments like fixed deposits, pension funds, or investment instruments like shares, mutual funds, etc.
- Also, when assessing the cost of borrowing, a borrower must not consider only the nominal rate levied by the lender; rather, they must consider the effective interest rate. The effective interest rate gives a clearer picture when interest is compounded multiple times a year. If a person owes $20,000 at 20% p.a. compounded annually, he will pay $4,000 as interest. If he owes the same $20,000 on a credit card on which interest is compounded daily, the effective rate of interest will be about 22.13%, and he will have to pay roughly $4,426 as interest.
After reading about the nominal interest rate, we can conclude that it is simply a stated, headline rate; it can therefore mislead a borrower or investor, as it does not give a true picture of the cost of borrowing or the net return from an investment.
As it does not consider inflation, tax, investment fees, or the compounding effect of interest, we should use an alternative rate, such as the real interest rate or the effective interest rate, for an accurate assessment of our cost of borrowing or return on investment, as and where suited.
This has been a guide to the nominal interest rate: its definition, significance, and applications, along with how to calculate it using the formula and examples. | https://www.wallstreetmojo.com/nominal-interest-rate/ | 21
388 | In economics, deflation is a decrease in the general price level of goods and services. Deflation occurs when the inflation rate falls below 0% (a negative inflation rate). Inflation reduces the value of currency over time, but sudden deflation increases it. This allows more goods and services to be bought than before with the same amount of currency. Deflation is distinct from disinflation, a slow-down in the inflation rate, i.e. when inflation declines to a lower rate but is still positive.
Economists generally believe that a sudden deflationary shock is a problem in a modern economy because it increases the real value of debt, especially if the deflation is unexpected. Deflation may also aggravate recessions and lead to a deflationary spiral.
Deflation usually happens when supply is high (when excess production occurs), when demand is low (when consumption decreases), or when the money supply decreases (sometimes in response to a contraction created from careless investment or a credit crunch) or because of a net capital outflow from the economy. It can also occur due to too much competition and too little market concentration.
Causes and corresponding types
In the IS–LM model (investment and saving equilibrium – liquidity preference and money supply equilibrium model), deflation is caused by a shift in the supply and demand curve for goods and services. This in turn can be caused by an increase in supply, a fall in demand, or both.
When prices are falling, consumers have an incentive to delay purchases and consumption until prices fall further, which in turn reduces overall economic activity. When purchases are delayed, productive capacity is idled and investment falls, leading to further reductions in aggregate demand. This is the deflationary spiral. The way to reverse this quickly would be to introduce an economic stimulus. The government could increase productive spending on things like infrastructure or the central bank could start expanding the money supply.
Deflation is also related to risk aversion, where investors and buyers will start hoarding money because its value is now increasing over time. This can produce a liquidity trap or it may lead to shortages that entice investments yielding more jobs and commodity production. A central bank cannot, normally, charge negative interest for money, and even charging zero interest often produces less stimulative effect than slightly higher rates of interest. In a closed economy, this is because charging zero interest also means having zero return on government securities, or even negative return on short maturities. In an open economy it creates a carry trade, and devalues the currency. A devalued currency produces higher prices for imports without necessarily stimulating exports to a like degree.
Deflation is the natural condition of economies when the supply of money is fixed, or does not grow as quickly as population and the economy. When this happens, the available amount of hard currency per person falls, in effect making money more scarce, and consequently, the purchasing power of each unit of currency increases. Deflation also occurs when improvements in production efficiency lower the overall price of goods. Competition in the marketplace often prompts those producers to apply at least some portion of these cost savings into reducing the asking price for their goods. When this happens, consumers pay less for those goods, and consequently, deflation has occurred, since purchasing power has increased.
Rising productivity and reduced transportation cost created structural deflation during the accelerated productivity era from 1870–1900, but there was mild inflation for about a decade before the establishment of the Federal Reserve in 1913. There was inflation during World War I, but deflation returned again after the war and during the 1930s depression. Most nations abandoned the gold standard in the 1930s so that there is less reason to expect deflation, aside from the collapse of speculative asset classes, under a fiat monetary system with low productivity growth.
In mainstream economics, deflation may be caused by a combination of the supply and demand for goods and the supply and demand for money, specifically the supply of money going down and the supply of goods going up. Historic episodes of deflation have often been associated with the supply of goods going up (due to increased productivity) without an increase in the supply of money, or (as with the Great Depression and possibly Japan in the early 1990s) the demand for goods going down combined with a decrease in the money supply. Studies of the Great Depression by Ben Bernanke have indicated that, in response to decreased demand, the Federal Reserve of the time decreased the money supply, hence contributing to deflation.
Demand-side causes are:
- Growth deflation: an enduring decrease in the real cost of goods and services as the result of technological progress, accompanied by competitive price cuts, resulting in an increase in aggregate demand.
A structural deflation existed from the 1870s until the cycle upswing that started in 1895. The deflation was caused by the decrease in the production and distribution costs of goods. It resulted in competitive price cuts when markets were oversupplied. The mild inflation after 1895 was attributed to the increase in gold supply that had been occurring for decades. There was a sharp rise in prices during World War I, but deflation returned at the war's end. By contrast, under a fiat monetary system, there was high productivity growth from the end of World War II until the 1960s, but no deflation.
Historically not all episodes of deflation correspond with periods of poor economic growth.
Productivity and deflation are discussed in a 1940 study by the Brookings Institution that gives productivity by major US industries from 1919 to 1939, along with real and nominal wages. Persistent deflation was clearly understood as being the result of the enormous gains in productivity of the period. By the late 1920s, most goods were over supplied, which contributed to high unemployment during the Great Depression.
- Cash building (hoarding) deflation: attempts to save more cash by a reduction in consumption leading to a decrease in velocity of money.
Supply-side causes are:
- Bank credit deflation: a decrease in the bank credit supply due to bank failures or increased perceived risk of defaults by private entities or a contraction of the money supply by the central bank.
Debt deflation is a complicated phenomenon associated with the end of long-term credit cycles. It was proposed as a theory by Irving Fisher (1933) to explain the deflation of the Great Depression.
Money supply side deflation
A historical analysis of money velocity and the monetary base shows an inverse correlation: for a given percentage decrease in the monetary base, the result is a nearly equal percentage increase in money velocity. This is to be expected, because the monetary base (MB), the velocity of base money (VB), the price level (P) and real output (Y) are related by definition: MB × VB = P × Y. However, it is important to note that the monetary base is a much narrower definition of money than M2 money supply. Additionally, the velocity of the monetary base is interest rate sensitive, the highest velocity being at the highest interest rates.
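As a purely illustrative sketch of this identity (the numbers below are arbitrary, not from the article): holding nominal output P × Y fixed, a small percentage fall in the monetary base must be offset by a nearly equal percentage rise in the velocity of base money.

```python
# Equation of exchange for the monetary base: MB * VB = P * Y.
nominal_output = 1000.0                    # P * Y (arbitrary units)
monetary_base = 200.0                      # MB
velocity = nominal_output / monetary_base  # VB = 5.0

smaller_base = monetary_base * 0.98        # a 2% fall in the base...
new_velocity = nominal_output / smaller_base
print(f"{new_velocity / velocity - 1:.2%}")  # ...about a 2.04% rise in velocity
```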
In the early history of the United States, there was no national currency and an insufficient supply of coinage. Banknotes were the majority of the money in circulation. During financial crises, many banks failed and their notes became worthless. Also, banknotes were discounted relative to gold and silver, the discount depending on the financial strength of the bank.
In recent years changes in the money supply have historically taken a long time to show up in the price level, with a rule of thumb lag of at least 18 months. More recently Alan Greenspan cited the time lag as taking between 12 and 13 quarters. Bonds, equities and commodities have been suggested as reservoirs for buffering changes in money supply.
In modern credit-based economies, deflation may be caused by the central bank initiating higher interest rates (i.e., to 'control' inflation), thereby possibly popping an asset bubble. In a credit-based economy, a slow-down or fall in lending leads to less money in circulation, with a further sharp fall in money supply as confidence reduces and velocity weakens, with a consequent sharp fall-off in demand for employment or goods. The fall in demand causes a fall in prices as a supply glut develops. This becomes a deflationary spiral when prices fall below the costs of financing production, or repaying debt levels incurred at the prior price level. Businesses, unable to make enough profit no matter how low they set prices, are then liquidated. Banks get assets that have fallen dramatically in value since their mortgage loan was made, and if they sell those assets, they further glut supply, which only exacerbates the situation. To slow or halt the deflationary spiral, banks will often withhold collecting on non-performing loans (as in Japan, and most recently America and Spain). This is often no more than a stop-gap measure, because they must then restrict credit, since they do not have money to lend, which further reduces demand, and so on.
Historical examples of credit deflation
In the early economic history of the United States, cycles of inflation and deflation correlated with capital flows between regions, with money being loaned from the financial center in the Northeast to the commodity producing regions of the [mid]-West and South. In a procyclical manner, prices of commodities rose when capital was flowing in, that is, when banks were willing to lend, and fell in the depression years of 1818 and 1839 when banks called in loans. Also, there was no national paper currency at the time and there was a scarcity of coins. Most money circulated as banknotes, which typically sold at a discount according to distance from the issuing bank and the bank's perceived financial strength.
When banks failed their notes were redeemed for bank reserves, which often did not result in payment at par value, and sometimes the notes became worthless. Notes of weak surviving banks traded at steep discounts. During the Great Depression, people who owed money to a bank whose deposits had been frozen would sometimes buy bank books (deposits of other people at the bank) at a discount and use them to pay off their debt at par value.
Deflation occurred periodically in the U.S. during the 19th century (the most important exception was during the Civil War). This deflation was at times caused by technological progress that created significant economic growth, but at other times it was triggered by financial crises – notably the Panic of 1837 which caused deflation through 1844, and the Panic of 1873 which triggered the Long Depression that lasted until 1879. These deflationary periods preceded the establishment of the U.S. Federal Reserve System and its active management of monetary matters. Episodes of deflation have been rare and brief since the Federal Reserve was created (a notable exception being the Great Depression) while U.S. economic progress has been unprecedented.
A financial crisis in England in 1818 caused banks to call in loans and curtail new lending, draining specie out of the U.S. The Bank of the United States also reduced its lending. Prices for cotton and tobacco fell. The price of agricultural commodities also was pressured by a return of normal harvests following 1816, the year without a summer, that caused large scale famine and high agricultural prices.
There were several causes of the deflation of the severe depression of 1839–1843, which included an oversupply of agricultural commodities (importantly cotton) as new cropland came into production following large federal land sales a few years earlier, banks requiring payment in gold or silver, the failure of several banks, default by several states on their bonds and British banks cutting back on specie flow to the U.S.
This cycle has been traced out on a broad scale during the Great Depression. Partly because of overcapacity and market saturation and partly as a result of the Smoot-Hawley Tariff Act, international trade contracted sharply, severely reducing demand for goods, thereby idling a great deal of capacity, and setting off a string of bank failures. A similar situation in Japan, beginning with the stock and real estate market collapse in the early 1990s, was arrested by the Japanese government preventing the collapse of most banks and taking over direct control of several in the worst condition.
Scarcity of official money
The United States had no national paper money until 1862 (greenbacks used to fund the Civil War), but these notes were discounted to gold until 1877. There was also a shortage of U.S. minted coins. Foreign coins, such as Mexican silver, were commonly used. At times banknotes were as much as 80% of currency in circulation before the Civil War. In the financial crises of 1818–19 and 1837–41, many banks failed, leaving their money to be redeemed below par value from reserves. Sometimes the notes became worthless, and the notes of weak surviving banks were heavily discounted. The Jackson administration opened branch mints, which over time increased the supply of coins. Following the 1848 finding of gold in the Sierra Nevada, enough gold came to market to devalue gold relative to silver. To equalize the value of the two metals in coinage, the US mint slightly reduced the silver content of new coinage in 1853.
When structural deflation appeared in the years following 1870, a common explanation given by various government inquiry committees was a scarcity of gold and silver, although they usually mentioned the changes in industry and trade we now call productivity. However, David A. Wells (1890) notes that the U.S. money supply during the period 1879-1889 actually rose 60%, the increase being in gold and silver, which rose against the percentage of national bank and legal tender notes. Furthermore, Wells argued that the deflation only lowered the cost of goods that benefited from recent improved methods of manufacturing and transportation. Goods produced by craftsmen did not decrease in price, nor did many services, and the cost of labor actually increased. Also, deflation did not occur in countries that did not have modern manufacturing, transportation and communications.
By the end of the 19th century, deflation ended and turned to mild inflation. William Stanley Jevons predicted rising gold supply would cause inflation decades before it actually did. Irving Fisher blamed the worldwide inflation of the pre-WWI years on rising gold supply.
In economies with an unstable currency, barter and other alternate currency arrangements such as dollarization are common, and therefore when the 'official' money becomes scarce (or unusually unreliable), commerce can still continue (e.g., most recently in Zimbabwe). Since in such economies the central government is often unable, even if it were willing, to adequately control the internal economy, there is no pressing need for individuals to acquire official currency except to pay for imported goods. In effect, barter acts as a protective tariff in such economies, encouraging local consumption of local production. It also acts as a spur to mining and exploration, because one easy way to make money in such an economy is to dig it out of the ground.
Increasing competition through internal or external economic liberalisation generally has a price-cutting effect. Measures of deregulation such as the abolition of (e.g. state-owned) monopolies or the elimination of price maintenance, as well as increased free trade, can therefore cause deflation insofar as a multitude of sectors is affected.
Currency pegs and monetary unions
If a country pegs its currency to the one of another country that features a higher productivity growth or a more favourable unit cost development, it must – to maintain its competitiveness – either become equally more productive or lower its factor prices (e.g. wages). Cutting factor prices fosters deflation. Monetary unions have a similar effect to currency pegs.
Deflation was present during most economic depressions in US history. Deflation is generally regarded negatively, as it causes a transfer of wealth from borrowers and holders of illiquid assets, to the benefit of savers and of holders of liquid assets and currency, and because confused pricing signals cause malinvestment, in the form of under-investment.
In this sense, it is the opposite of the more usual scenario of inflation, whose effect is to tax currency holders and lenders (savers) and use the proceeds to subsidize borrowers, including governments, and to cause malinvestment as overinvestment. Thus inflation encourages short term consumption and can similarly over-stimulate investment in projects that may not be worthwhile in real terms (for example the housing or Dot-com bubbles), while deflation retards investment even when there is a real-world demand not being met. In modern economies, deflation is usually caused by a drop in aggregate demand, and is associated with economic depression, as occurred in the Great Depression and the Long Depression.
I agree with Milton Friedman that once the Crash had occurred, the Federal Reserve System pursued a silly deflationary policy. I am not only against inflation but I am also against deflation. So, once again, a badly programmed monetary policy prolonged the depression.— Interview with Diego Pizano (1979)
While an increase in the purchasing power of one's money benefits some, it amplifies the sting of debt for others: after a period of deflation, the payments to service a debt represent a larger amount of purchasing power than they did when the debt was first incurred. Consequently, deflation can be thought of as an effective increase in a loan's interest rate. If, as during the Great Depression in the United States, deflation averages 10% per year, even an interest-free loan is unattractive as it must be repaid with money worth 10% more each year.
Under normal conditions, the Fed and most other central banks implement policy by setting a target for a short-term interest rate – the overnight federal funds rate in the U.S. – and enforcing that target by buying and selling securities in open capital markets. When the short-term interest rate hits zero, the central bank can no longer ease policy by lowering its usual interest-rate target. With interest rates near zero, debt relief becomes an increasingly important tool in managing deflation.
In recent times, as loan terms have grown in length and loan financing (or leveraging) has become common among many types of investments, the cost of deflation to borrowers has grown larger. Deflation can discourage private investment, because there are reduced expectations of future profits when future prices are lower. Consequently, with reduced private investment, spiraling deflation can cause a collapse in aggregate demand. Without the "hidden risk of inflation", it may become more prudent for institutions to hold on to money, and not to spend or invest it (burying money). They are therefore rewarded by holding money. This "hoarding" behavior is seen as undesirable by most economists, as Hayek points out:
It is agreed that hoarding money, whether in cash or in idle balances, is deflationary in its effects. No one thinks that deflation is in itself desirable.— Hayek (1932)
Since deflationary periods disfavor debtors (including most farmers), they are often periods of rising populist backlash. For example, in the late 19th century, populists in the US wanted debt relief or to move off the new gold standard and onto a silver standard (the supply of silver was increasing relatively faster than the supply of gold, making silver less deflationary than gold), bimetal standard, or paper money like the recently ended Greenbacks.
A deflationary spiral is a situation where decreases in the price level lead to lower production, which in turn leads to lower wages and demand, which leads to further decreases in the price level. Since reductions in general price level are called deflation, a deflationary spiral occurs when reductions in price lead to a vicious circle, where a problem exacerbates its own cause. In science, this effect is also known as a positive feedback loop. Another economic example of this situation in economics is the bank run.
The Great Depression was regarded by some as a deflationary spiral. A deflationary spiral is the modern macroeconomic version of the general glut controversy of the 19th century. Another related idea is Irving Fisher's theory that excess debt can cause a continuing deflation.
During severe deflation, targeting an interest rate (the usual method of determining how much currency to create) may be ineffective, because even lowering the short-term interest rate to zero may result in a real interest rate which is too high to attract credit-worthy borrowers. In the 21st century, negative interest rates have been tried, but they cannot be made very negative, since people might withdraw cash from bank accounts rather than accept a negative rate. Thus the central bank must directly set a target for the quantity of money (called "quantitative easing") and may use extraordinary methods to increase the supply of money, e.g. purchasing financial assets of a type not usually used by the central bank as reserves (such as mortgage-backed securities). Before he was Chairman of the United States Federal Reserve, Ben Bernanke claimed in 2002, "...sufficient injections of money will ultimately always reverse a deflation", although Japan's deflationary spiral was not broken by the amount of quantitative easing provided by the Bank of Japan.
Until the 1930s, it was commonly believed by economists that deflation would cure itself. As prices decreased, demand would naturally increase and the economic system would correct itself without outside intervention.
This view was challenged in the 1930s during the Great Depression. Keynesian economists argued that the economic system was not self-correcting with respect to deflation and that governments and central banks had to take active measures to boost demand through tax cuts or increases in government spending. Reserve requirements set by the central bank were high compared to recent times. Had currency not been redeemable for gold (in accordance with the gold standard), the central bank could have effectively increased the money supply simply by reducing reserve requirements and conducting open market operations (e.g., buying treasury bonds for cash) to offset the contraction of the money supply in the private sector caused by the collapse of credit (credit is a form of money).
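A textbook simple-multiplier sketch can illustrate the mechanism described above. The reserve ratios and base figures are invented, and the model ignores currency drain and excess reserves, so it is not a description of actual 1930s policy.

# Simple deposit-multiplier sketch: maximum deposit expansion from a given monetary base.
def max_deposits(monetary_base, reserve_ratio):
    return monetary_base / reserve_ratio

print(max_deposits(100, 0.20))  # 500.0
print(max_deposits(100, 0.10))  # 1000.0 -> halving the reserve ratio doubles the ceiling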
With the rise of monetarist ideas, the focus in fighting deflation was put on expanding demand by lowering interest rates (i.e., reducing the "cost" of money). This view has received a setback in light of the failure of accommodative policies in both Japan and the US to spur demand after stock market shocks in the early 1990s and in 2000–02, respectively. Austrian economists worry about the inflationary impact of monetary policies on asset prices. Sustained low real rates can cause higher asset prices and excessive debt accumulation. Therefore, lowering rates may prove to be only a temporary palliative, aggravating an eventual debt deflation crisis.
With interest rates near zero, debt relief becomes an increasingly important tool in managing deflation.
Special borrowing arrangements
When the central bank has lowered nominal interest rates to zero, it can no longer further stimulate demand by lowering interest rates. This is the famous liquidity trap. When deflation takes hold, it requires "special arrangements" to lend money at a zero nominal rate of interest (which could still be a very high real rate of interest, due to the negative inflation rate) in order to artificially increase the money supply.
Although the values of capital assets are often casually said to deflate when they decline, this usage is not consistent with the usual definition of deflation; a more accurate description for a decrease in the value of a capital asset is economic depreciation. A related but distinct concept, the accounting conventions of depreciation, comprises standards for recognizing a decrease in the value of capital assets when market values are not readily available or practical.
Greece's inflation rate was negative for the three years from 2013 to 2015. The same applies to Bulgaria, Cyprus, Spain and Slovakia from 2014 to 2016. Greece, Cyprus, Spain and Slovakia are members of the European monetary union, while the Bulgarian currency, the lev, is pegged to the euro at a fixed exchange rate. Across the entire European Union and the Eurozone, a disinflationary trend was observed in the years 2011 to 2015.
Following the Asian financial crisis in late 1997, Hong Kong experienced a long period of deflation which did not end until the 4th quarter of 2004. Many East Asian currencies devalued following the crisis. The Hong Kong dollar, however, was pegged to the US dollar, so the adjustment came instead through a deflation of consumer prices. The situation was worsened by increasingly cheap exports from Mainland China and weak consumer confidence in Hong Kong. This deflation was accompanied by an economic slump that was more severe and prolonged than those of the surrounding countries that devalued their currencies in the wake of the Asian financial crisis.
In February 2009, Ireland's Central Statistics Office announced that during January 2009 the country had experienced deflation, with prices falling by 0.1% from the same time in 2008. This was the first time deflation had hit the Irish economy since 1960. Overall consumer prices decreased by 1.7% in the month.
Brian Lenihan, Ireland's Minister for Finance, mentioned deflation in an interview with RTÉ Radio. According to RTÉ's account, "Minister for Finance Brian Lenihan has said that deflation must be taken into account when Budget cuts in child benefit, public sector pay and professional fees are being considered. Mr Lenihan said month-on-month there has been a 6.6% decline in the cost of living this year."
The interview is notable in that the Minister does not discernibly regard the deflation negatively. He mentions the deflation as an item of data helpful to the arguments for a cut in certain benefits. The alleged economic harm caused by deflation is not alluded to or mentioned by this member of government. This is a notable example of deflation in the modern era being discussed by a senior finance minister without any mention of how it might be avoided, or whether it should be.
In Japan, deflation started in the early 1990s. The Bank of Japan and the government tried to eliminate it by reducing interest rates and through 'quantitative easing', but these measures did not create a sustained increase in broad money, and deflation persisted. In July 2006, the zero-rate policy was ended.
Systemic reasons for deflation in Japan can be said to include:
- Tight monetary conditions. The Bank of Japan kept monetary policy loose only when inflation was below zero, tightening whenever deflation ended.
- Unfavorable demographics. Japan has an aging population (22.6% over age 65) which has been declining since 2011, as the death rate exceeds the birth rate.
- Fallen asset prices. In the case of Japan asset price deflation was a mean reversion or correction back to the price level that prevailed before the asset bubble. There was a rather large price bubble in stocks and especially real estate in Japan in the 1980s (peaking in late 1989).
- Insolvent companies: Banks lent to companies and individuals that invested in real estate. When real estate values dropped, these loans could not be repaid. The banks could try to collect on the collateral (land), but doing so would not pay off the loans. Banks delayed that decision, hoping asset prices would improve, and national banking regulators allowed these delays. Some banks made even more loans to these companies, which were used to service the debt they already had. This continuing process is known as maintaining an "unrealized loss", and until the assets are completely revalued and/or sold off (and the loss realized), it will continue to be a deflationary force in the economy. Improving bankruptcy law, land transfer law, and tax law has been suggested as a way to speed this process and thus end the deflation.
- Insolvent banks: Banks with a large percentage of non-performing loans – loans on which they are not receiving payments but which they have not yet written off – cannot lend more money; they must increase their cash reserves to cover the bad loans.
- Fear of insolvent banks: Japanese people are afraid that banks will collapse, so they prefer to buy (United States or Japanese) Treasury bonds instead of keeping their money in a bank account. This likewise means the money is not available for lending, and therefore not for economic growth. The savings rate thus depresses consumption but does not appear in the economy in an efficient form to spur new investment. People also save by owning real estate, further slowing growth, since this inflates land prices.
- Imported deflation: Japan imports Chinese and other countries' inexpensive consumable goods (due to lower wages and fast growth in those countries) and inexpensive raw materials, many of which reached all time real price minimums in the early 2000s. Thus, prices of imported products are decreasing. Domestic producers must match these prices in order to remain competitive. This decreases prices for many things in the economy, and thus is deflationary.
- Stimulus spending: According to both Austrian and monetarist economic theory, Keynesian stimulus spending actually has a depressing effect. This is because the government is competing against private industry, and usurping private investment dollars. In 1998, for example, Japan produced a stimulus package of more than 16 trillion yen, over half of it public works that would have a quashing effect on an equivalent amount of private, wealth-creating economic activity. Overall, Japan's stimulus packages added up to over one hundred trillion yen, and yet they failed. According to these economic schools, that stimulus money actually perpetuated the problem it was intended to cure.
In November 2009, Japan returned to deflation, according to The Wall Street Journal. Bloomberg L.P. reported that consumer prices fell in October 2009 by a near-record 2.2%. It was not until 2014 that new economic policies laid out by Prime Minister Shinzo Abe finally allowed significant levels of inflation to return. However, the COVID-19 recession once again led to deflation in 2020, with consumer goods prices quickly falling, prompting heavy government stimulus worth over 20% of GDP.
As a result, it is likely that deflation will remain as a long term economic issue for Japan.
During World War I the British pound sterling was removed from the gold standard. The motivation for this policy change was to finance the war; one of the results was inflation and a rise in the price of gold, along with the corresponding drop in the pound's international exchange rates. When the pound was returned to the gold standard after the war, it was done at the pre-war gold parity. Since that parity was higher than the pound's prevailing value, prices had to fall to realign with the higher target value of the pound.
The UK experienced deflation of approximately 10% in 1921, 14% in 1922, and 3 to 5% in the early 1930s.
Major deflations in the United States
There have been four significant periods of deflation in the United States.
The first and most severe was during the depression in 1818–1821 when prices of agricultural commodities declined by almost 50%. A credit contraction caused by a financial crisis in England drained specie out of the U.S. The Bank of the United States also contracted its lending. The price of agricultural commodities fell by almost 50% from the high in 1815 to the low in 1821, and did not recover until the late 1830s, although to a significantly lower price level. Most damaging was the price of cotton, the U.S.'s main export. Food crop prices, which had been high because of the famine of 1816 that was caused by the year without a summer, fell after the return of normal harvests in 1818. Improved transportation, mainly from turnpikes, and to a minor extent the introduction of steamboats, significantly lowered transportation costs.
The second was the depression of the late 1830s to 1843, following the Panic of 1837, when the currency in the United States contracted by about 34% with prices falling by 33%. The magnitude of this contraction is only matched by the Great Depression. (See: Historical examples of credit deflation) This "deflation" satisfies both definitions, that of a decrease in prices and a decrease in the available quantity of money. Despite the deflation and depression, GDP rose 16% from 1839 to 1843.
The third, the "Great Sag" of 1873–96, could be near the top of the list in severity. Its scope was global. It featured cost-cutting and productivity-enhancing technologies. It flummoxed the experts with its persistence, and it resisted attempts by politicians to understand it, let alone reverse it. It delivered a generation's worth of rising bond prices, as well as the usual losses to unwary creditors via defaults and early calls. Between 1875 and 1896, according to Milton Friedman, prices fell in the United States by 1.7% a year, and in Britain by 0.8% a year.
The fourth was in 1930–1933 when the rate of deflation was approximately 10 percent/year, part of the United States' slide into the Great Depression, where banks failed and unemployment peaked at 25%.
The deflation of the Great Depression occurred partly because of an enormous contraction of credit (money), with bankruptcies creating an environment in which cash was in frantic demand. When the Federal Reserve was supposed to accommodate that demand, it instead contracted the money supply by 30% in enforcement of its new real bills doctrine, so banks toppled one by one (because they were unable to meet the sudden demand for cash – see fractional-reserve banking). From the standpoint of the Fisher equation (see above), there was a concomitant drop in both the money supply (credit) and the velocity of money which was so profound that price deflation took hold despite the increases in money supply spurred by the Federal Reserve.
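The "Fisher equation" referred to here is the quantity-theory equation of exchange, M*V = P*Q. The sketch below rearranges it to show how a simultaneous fall in the money supply M and the velocity V forces the price level P down when real output Q falls by less; all numbers are invented for illustration and are not Depression-era data.

# Equation of exchange M*V = P*Q, rearranged to P = M*V/Q (illustrative numbers only).
def price_level(M, V, Q):
    return M * V / Q

p_before = price_level(M=100, V=4.0, Q=400)  # 1.0
p_after = price_level(M=70, V=3.0, Q=300)    # 0.7, a 30% fall in the price level
print(p_before, p_after)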
Minor deflations in the United States
Throughout the history of the United States, inflation has approached zero and dipped below it for short periods of time. This was quite common in the 19th century, and in the 20th century until the permanent abandonment of the gold standard for the Bretton Woods system in 1948. In the past 60 years, the United States has experienced deflation only twice: in 2009, during the Great Recession, and in 2015, when the CPI barely dipped below 0%, at -0.1%.
Some economists believe the United States may have experienced deflation as part of the financial crisis of 2007–10; compare the theory of debt deflation. Year-on-year, consumer prices dropped for six months in a row to end-August 2009, largely due to a steep decline in energy prices. Consumer prices dropped 1 percent in October 2008. This was the largest one-month fall in prices in the US since at least 1947. That record was again broken in November 2008 with a 1.7% decline. In response, the Federal Reserve decided to continue cutting interest rates, down to a near-zero range as of December 16, 2008.
In late 2008 and early 2009, some economists feared the US could enter a deflationary spiral. Economist Nouriel Roubini predicted that the United States would enter a deflationary recession, and coined the term "stag-deflation" to describe it. It is the opposite of stagflation, which was the main fear during the spring and summer of 2008. The United States then began experiencing measurable deflation, steadily decreasing from the first measured deflation of -0.38% in March to July's deflation rate of -2.10%. On the wage front, in October 2009 the state of Colorado announced that its state minimum wage, which is indexed to inflation, was set to be cut, which would have been the first time a state had cut its minimum wage since 1938.
- Robert J. Barro and Vittorio Grilli (1994), European Macroeconomics, chap. 8, p. 142. ISBN 0-333-57764-7
- O'Sullivan, Arthur; Sheffrin, Steven M. (2003). Economics: Principles in Action. Upper Saddle River, New Jersey: Pearson Prentice Hall. p. 343. ISBN 0-13-063085-3.
- Wallop, Harry (18 November 2008). "Deflation: why it is dangerous". The Daily Telegraph. Telegraph Media Group. Retrieved 20 September 2016.
- "The Economist explains: Why deflation is bad". Economist. Economist magazine. 7 Jan 2015. Retrieved 20 September 2016.
- Krugman, Paul. "Why is Deflation Bad?". The New York Times. Retrieved 20 September 2016.
- Walker, Andrew (29 January 2016). "Is deflation such a bad thing?". BBC. Retrieved 20 September 2016.
- Thoma, Mark. "Explainer: Why is deflation so harmful?". Moneywatch. CBS. Retrieved 20 September 2016.
- Hummel, Jeffrey Rogers. "Death and Taxes, Including Inflation: the Public versus Economists" (January 2007).
- Blanchard, O.; Dell'Ariccia, G.; Mauro, P. (18 August 2010). "Rethinking macroeconomic policy". Journal of Money, Credit and Banking. 42 (1): 199–215. CiteSeerX 10.1.1.153.7293. doi:10.1111/j.1538-4616.2010.00334.x. S2CID 14824203.
- Hussman, Ph.D., John, O. (2010). "Bernanke Leaps into a Liquidity Trap".
- Wells, David A. (1890). Recent Economic Changes and Their Effect on Production and Distribution of Wealth and Well-Being of Society. New York: D. Appleton and Co. ISBN 0-543-72474-3. Money supply, p. 222.
- Beckworth, David. "Aggregate Supply-Driven Deflation and Its Implications for Macroeconomic Stability" (PDF). Cato Journal. Cato Institute. 28 (3). Archived from the original (PDF) on 2011-10-09.
- Stapleford, Thomas (2009). The Cost of Living in America: A Political History of Economic Statistics, 1880-2000. Cambridge University Press. pp. 69–73.
- Kendrick, John (1991). "U.S. Productivity Performance in Perspective". Business Economics, October 1, 1991.
- Andrew Atkeson and Patrick J. Kehoe of the Federal Reserve Bank of Minneapolis Deflation and Depression: Is There an Empirical Link?
- Bell, Spurgeon (1940). Productivity, Wages and National Income, The Institute of Economics of the Brookings Institution. Waverly press.
- Beaudreau, Bernard C. (1996). Mass Production, the Stock Market Crash and the Great Depression. New York, Lincoln, Shanghai: Authors Choice Press.
- Carapella, Francesca (2015). "Banking panics and deflation in dynamic general equilibrium". Finance and Economics Discussion Series 2015-018. Washington: Board of Governors of the Federal Reserve System, http://dx.doi.org/10.17016/FEDS.2015.018.
- "The Debt-Deflation Theory of Great Depressions - FRASER".
- Friedman, Milton (1994). Money Mischief: Episodes in Monetary History. Houghton Mifflin Harcourt. p. 38. ISBN 9780547542225.
- Ginsburg, David (2006). Gold Coins of the New Orleans Mint - How Gold Coins Circulated in 19th Century America. pp. 25–33. ISBN 9780974237169.
Taylor, George Rogers (1951). The Transportation Revolution, 1815–1860. The Economic History of the United States. Volume IV. New York: Rinehart & Co. pp. 133, 331–4. ISBN 978-0-87332-101-3.
- Greenspan interview on CNBC 3 Dec. 2010
- Browne, Harry (1981). You Can Profit from a Monetary Crisis. ISBN 4-87187-322-6.
- North, Douglas C. (1966). The Economic Growth of the United States 1790-1860. New York, London: W. W. Norton & Company. ISBN 978-0-393-00346-8.
- Benjamin Roth, ed. James Ledbetter and Daniel B. Roth, "The Great Depression: A Diary". Perseus Books, 2009, p. 36. "A market for buying bank 'passbooks' also cropped up in places like Youngstown. If you were desperate enough in 1931 for money to buy basic necessities, you could get 60 to 70 cents on the dollar for your passbooks' value. Local newspapers even printed the weekly rates for buying and selling these passbooks as they became a commodity; Roth pasted one such rate chart into his diary."
- Taylor 1951, pp. 336
- Wallis, John Joseph; National Bureau of Economic Research. "The Depression of 1839 to 1843" (PDF).
- Stapleford, Thomas (2009). The Cost of Living in America: A Political History of Economic Statistics, 1880-2000. Cambridge University Press. pp. 69–73.
- "The History of Economic Downturns in the US". But Now You Know.
- F. A. Hayek, interviewed by Diego Pizano July, 1979 published in: Diego Pizano, Conversations with Great Economists: Friedrich A. Hayek, John Hicks, Nicholas Kaldor, Leonid V. Kantorovich, Joan Robinson, Paul A.Samuelson, Jan Tinbergen (Jorge Pinto Books, 2009).
- "Hayek's 1932 Letter on the Great Depression". But Now You Know.
- Selgin, George (1997). "Less Than Zero: The Case for a Falling Price Level in a Growing Economy" (PDF). IEA Hobart Paper. London: Institute of Economic Affairs. 32: 87. ISSN 0073-2818. Retrieved 4 December 2014.
- "DEFLATIONARY SPIRALS".
- Grinin, L. E., & Korotayev, A. V. (2018). The future of the global economy in the light of inflationary and deflationary trends and long cycles theory. World Futures, 74(2), 84-103.
- Kagan, Julia. "Deflationary Spiral". Investopedia. Retrieved 20 March 2021.
- "Economics A-Z terms beginning with D". The Economist.
- Deflation: Making Sure "It" Doesn't Happen Here Remarks by Governor Ben S. Bernanke Before the National Economists Club, Washington, D.C. November 21, 2002
- "HICP - inflation rate". Eurostat. Retrieved 8 February 2021.
- Jao, Y C (2001). "Why Was Hong Kong a Laggard in Economic Recovery". The Asian Financial Crisis and the Ordeal of Hong Kong. Quorum Books. pp. 155–170. ISBN 978-1-56720-447-6.
- Liu, Henry C K (2003-07-04). "Why Hong Kong is in crisis". Asia Times. Archived from the original on 2003-07-08. Retrieved 27 April 2010.
- "First annual negative inflation in 49 years". RTE.ie. 12 February 2009.
- Deflation a factor in Budget cuts - Lenihan, RTE News, 9 December 2009
- RTÉ News - Deflation a factor in Budget cuts - Lenihan Archived February 26, 2010, at the Wayback Machine
- "Meet the new BOJ, same as the old BOJ". TheMoneyIllusion. 2010-10-05. Retrieved 2013-02-14.
- Dooley, Ben (2019-12-24). "Japan Shrinks by 500,000 People as Births Fall to Lowest Number Since 1874 (Published 2019)". The New York Times. ISSN 0362-4331. Retrieved 2021-02-04.
- "Statistics Bureau Home Page/Population Estimates Monthly Report". 2019-06-06. Archived from the original on 2019-06-06. Retrieved 2021-02-04.
- Nielsen, Barry. "The Lost Decade: Lessons From Japan's Real Estate Crisis". Investopedia. Retrieved 2021-02-04.
- Post, The Blah (2019-11-17). "Japanese Asset Price Bubble". Medium. Retrieved 2021-02-04.
- Group, Global Legal. "International Comparative Legal Guides". International Comparative Legal Guides International Business Reports. Retrieved 2021-02-04.
- "Practical Law UK Signon". signon.thomsonreuters.com. Retrieved 2021-02-04.
- "Japan's 2020 corporate bankruptcies fall to 31-year low with government aid". The Japan Times. 2021-01-13. Retrieved 2021-02-04.
- "Prize possessions". The Economist. 2002-05-09. ISSN 0013-0613. Retrieved 2021-02-04.
- "What to do about zombie firms". The Economist. 2020-09-24. ISSN 0013-0613. Retrieved 2021-02-04.
- "Is the Bank of Japan Technically Insolvent? Dangers Involved in Long-Term Deterioration of BoJ Financial Position | Discuss Japan-Japan Foreign Policy Forum". www.japanpolicyforum.jp. Retrieved 2021-02-04.
- "Nippon Credit Bank declared insolvent and nationalised". The Irish Times. Retrieved 2021-02-04.
- (PDF) https://www.frbsf.org/economic-research/files/wp08-29bk.pdf.
- "New Japanese Import! Deflation". www.bullionvault.com. Retrieved 2021-02-04.
- "Why Stimulus Spending Depresses the Economy". But Now You Know.
- "Explaining Japan's Recession, Benjamin Powell". Mises Institute.
- Ponciano, Jonathan. "World Bank Warns Stimulus Spending And 'Dangerous' Debt Crisis Could Trigger Recession And Wipe Out A Decade Of Income Gains". Forbes. Retrieved 2021-02-04.
- Salsman, Richard. "Japan's Three Decades of Depressive Stimulus Schemes – AIER". www.aier.org. Retrieved 2021-02-04.
- "Japan Releases Stimulus Package as Recovery Weakens (Update3)".
- "Japan inflation rate hits 23-year high". BBC News. 2014-05-30. Retrieved 2021-02-04.
- "Abe unveils 'massive' coronavirus stimulus worth 20% of GDP". The Japan Times. 2020-04-06. Retrieved 2021-02-04.
- Kaneko, Kaori; Kihara, Leika (2020-12-18). "Japan's consumer prices fall at fastest pace in decade, stoke deflation fears". Reuters. Retrieved 2021-02-04.
- "Deflation fears reignited as pandemic hits consumer prices in Japan". The Japan Times. 2020-05-01. Retrieved 2021-02-04.
- FocusEconomics. "Japan Inflation Rate (CPI) - Japan Economy Forecast & Outlook". FocusEconomics | Economic Forecasts from the World's Leading Economists. Retrieved 2021-02-04.
- Bank of England Quarterly inflation report Feb 2009 p. 33 chart A
- Atack, Jeremy; Passell, Peter (1994). A New Economic View of American History. New York: W.W. Norton and Co. p. 102. ISBN 0-393-96315-2.
- "Inflation, ho! (a primer on deflation)". 23 May 2003. Archived from the original on 28 February 2006.
- Wells, David A. (1890). Recent Economic Changes and Their Effect on Production and Distribution of Wealth and Well-Being of Society. New York: D. Appleton and Co. ISBN 0-543-72474-3.
- Rothbard, Murray (2002). History of Money and Banking in the United States. Ludwig Von Mises Inst. pp. 164–8. ISBN 0-945466-33-1.
- Rosenberg, Yuval (26 February 2015). "America Is In Deflation. So What?". The Fiscal Times.
- "FOMC statement" (Press release). Board of Governors of the Federal Reserve System. 16 December 2008.
- Roubini, Nouriel (30 October 2008). "Get Ready For 'Stag-Deflation'". Forbes.
- Svaldi, Aldo (13 October 2009). "Colorado minimum wage set to fall". The Denver Post.
- Nicola Acocella, The deflationary bias of exit strategies in the EMU countries, in: Review of economic conditions in Italy, 2-3: 471–93, (2011).
- Ben S. Bernanke. Deflation: Making Sure "It" Doesn't Happen Here. USA Federal Reserve Board. 2002-11-21. Accessed: 2008-10-17. (Archived by WebCite at https://www.webcitation.org/5bdTTiZhU?url=http://www.federalreserve.gov/BOARDDOCS/SPEECHES/2002/20021121/default.htm)
- Michael Bordo & Andrew Filardo, Deflation and monetary policy in a historical perspective: Remembering the past or being condemned to repeat it?, In: Economic Policy, October 2005, pp. 799–844.
- Georg Erber, The Risk of Deflation in Germany and the Monetary Policy of the ECB. In: Cesifo Forum 4 (2003), 3, pp. 24–29
- Charles Goodhart and Boris Hofmann, Deflation, credit and asset prices, In: Deflation - Current and Historical Perspectives, eds. Richard C. K. Burdekin & Pierre L. Siklos, Cambridge University Press, Cambridge, 2004.
- International Monetary Fund, Deflation: Determinants, Risks, and Policy Options - Findings of an Independent Task Force, Washington D. C., April 30, 2003.
- International Monetary Fund, World Economic Outlook 2006 – Globalization and Inflation, Washington D. C., April 2006.
- Otmar Issing, The euro after four years: is there a risk of deflation?, 16th European Finance Convention, 2 December 2002, London, Europäische Zentralbank, Frankfurt am Main
- Steven B. Kamin, Mario Marazzi & John W. Schindler, Is China "Exporting Deflation"?, International Finance Discussion Papers No. 791, Board of Governors of the Federal Reserve System, Washington D. C. January 2004.
- Krugman, Paul (1998). "Its Baaaaack: Japan's Slump and the Return of the Liquidity Trap" (PDF). Brookings Papers on Economic Activity. 1998 (2): 137–205. doi:10.2307/2534694. JSTOR 2534694.
- Cato Policy Report – A Plea for (Mild) Deflation
- Deflation (EH.Net economic history encyclopedia)
- What is deflation and how can it be prevented? (About.com)
- Deflation, Free or Compulsory from Making Economic Sense by Murray N. Rothbard
- "Annual Inflation Rate – Japan". Archived from the original on 2008-04-10.
- Why Are Japanese Wages So Sluggish? IMF Working paper
Inflation is too much money chasing too few goods. It is a measure of the rate of rising prices of goods and services in an economy.
Although inflation can occur in any product or service, it generally refers to the increase in prices across the economy as a whole. In the United States, the most commonly used metric of inflation is the Consumer Price Index (CPI). The CPI measures the price of a basket of goods, including education, cars, food, recreation, and many others. As its name suggests, it represents changes in the prices of goods and services purchased for consumption by urban households.
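A small sketch can show how a fixed-basket index such as the CPI turns many individual price changes into a single inflation figure. The basket weights and prices below are invented; the real CPI uses far more categories and survey-based expenditure weights.

# Hypothetical fixed basket (quantities) and prices; all numbers are invented.
basket = {"food": 30, "education": 5, "recreation": 10}
prices_last_year = {"food": 3.00, "education": 200.0, "recreation": 15.0}
prices_this_year = {"food": 3.10, "education": 210.0, "recreation": 15.5}

def basket_cost(prices):
    return sum(qty * prices[item] for item, qty in basket.items())

inflation = basket_cost(prices_this_year) / basket_cost(prices_last_year) - 1
print(f"{inflation:.2%}")  # about 4.7% with these made-up numbers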
Because of inflation, prices a year from now will be higher than prices today, broadly speaking. It’s why a McDonald’s cheeseburger costs $2.99 today and used to cost 10 cents. This erosion in purchasing power is essentially a “tax” of sorts on savings. $100 in the bank today will not be worth $100 in 2022 – it will be closer to $98. This fact forces savers to make a decision: pay the tax or seek out returns that at least match the cost of inflation.
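A back-of-the-envelope check of the figure above, assuming roughly 2% annual inflation:

# $100 of cash held for one year at ~2% inflation buys about what $98 buys today.
inflation = 0.02
real_value = 100 / (1 + inflation)
print(f"${real_value:.2f}")  # $98.04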
What Causes Inflation?
There are 3 primary causes/drivers of inflation: costs increasing, demand increasing, and fiscal and monetary policy. Remember: inflation is just the measure of the rate of rising prices.
- Costs Increasing – When production costs increase and demand for the product/service stays constant, the business is likely to pass its higher costs on to the consumer by raising prices. An increase in wages – typically the largest cost for an employer – is a perfect example and is highly correlated with a rise in the price of the finished good.
- Demand Increasing – When supply remains constant and demand increases, higher prices are likely to follow. Generally speaking, in times of economic expansion, consumer spending increases, leading to higher prices.
- Fiscal and Monetary Policy – Fiscal Policy refers to government spending and taxes whereas Monetary Policy is the actions taken by central banks to achieve macroeconomic goals. Policies taken to increase the supply of money will have inflationary effects. For example, lowering interest rates and distributing stimulus checks both increase the supply of money.
Why does the Federal Reserve Want Inflation?
Why does the Fed target 2% inflation? Why not zero? Or a negative number (deflation)?
There’s a lot of discussion around this topic and a general agreement has yet to be reached by economists. However, there are 3 main arguments made in support of the 2% target.
- Measurement Bias – Inflation itself is extremely hard to measure, leading many to believe it is often overstated. This means, when the CPI shows 2% inflation, we may actually be closer to 0%.
- Room to Cut Interest Rates – It’s believed that interest rates and inflation run in tandem with each other. Therefore, inflation needs to be positive and at least high enough to give the Fed room to lower interest rates in the event of a recession.
- Avoiding Deflation – Deflation is the downward movement of prices. Some believe that deflation is worse than inflation, as consumers will put off spending while they wait for prices to fall. This lack of spending would undermine the economy, putting downward pressure on everything and likely leading to lower GDP growth.
Towards the end of 2020 and in the beginning of 2021, the Federal Reserve made a major adjustment to its inflation policy: instead of targeting a fixed 2% goal, it is now pursuing a 2% average. Given the low levels of inflation over the past several years, expectations are for the Fed to target greater than 2% inflation over the next few years, thus hitting its new goal of a 2% average.
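The arithmetic of average targeting can be sketched as follows. The Fed has not published a fixed averaging window or formula, so the six-year window and the 1.5% history below are assumptions made purely for illustration.

# Illustrative make-up-the-shortfall arithmetic for average inflation targeting.
past = [0.015, 0.015, 0.015]      # three assumed years of 1.5% inflation
target_avg, window = 0.02, 6      # assumed 2% average over an assumed 6-year window
required = (target_avg * window - sum(past)) / (window - len(past))
print(f"{required:.2%}")  # 2.50% per year over the next three years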
What To Do About It in 2021
Whether the Fed’s policies on inflation are right or wrong is up for debate, but the likelihood of higher inflation in 2021 and beyond has been increasing. The increase in Treasury yields signals that more and more investors are betting on higher inflation (Treasuries no longer look attractive, as higher returns will be necessary to combat inflation). Where should investors turn?
Assets, in nearly all forms, typically fare much better than cash. Tangible assets, like real estate and commodities, have long been regarded as safe havens during periods of inflation. Want to buy more stocks? Try to locate companies with increasing profit margins. This means their pricing power has increased more than their production costs, meaning more earnings for shareholders.
Talk to Price Wealth Management today about how you can best prepare for 2021 and beyond.
When and how did colonization begin?
"Colonialism is the political-administrative, mostly military-supported subjugation of other countries and areas (colonies), which served to consolidate one's own power, economic exploitation (raw materials) and the expansion of sales markets (territorial expansion)."
Beginnings of colonization
In the 16th century, in the Age of Discovery, Spain and Portugal became the first great colonial powers in Europe. The Spaniards had occupied the Pacific coast off Panama, central Mexico and Peru, Guatemala, New Granada and Chile. They exploited indigenous (Indian) society and introduced forced labor there, but did not try to build up trade and industry in the colonies. Goods such as tobacco, gold, silver, cotton and cocoa were used to promote Spain's own economy. The Portuguese took possession of Brazil. Sugar-growing areas were created and slaves were used for plantation farming. Islands and coastal lands of the Indian Ocean also belonged to their possessions; gold was mined there and the Portuguese took control of the spice trade.
Emergence of further colonial powers
At the beginning of the 17th century, England, France, the Netherlands and Russia also became colonial powers. The European powers acquired 16.8 million km² of new land between 1800 and 1878, and as a result 67% of the earth's land surface was under their influence.
Colonialism ties in with the age of imperialism (1880–1914). The colonial race for unexplored areas began. A further 27.4 million km² had been conquered by 1914.
The European nation states needed markets for industrial production, raw material suppliers, investment opportunities for their capital, settlement areas for their populations, and colonial goods. This was made possible by the more intensive exploration of Africa. In the colonial interest, traders, settlers and missionaries were sent to unexplored areas. Nationalism and racism shaped the colonial powers. Through missionary activity, the foreign population was to be Europeanized.
At the International Conference in Berlin in 1884/85 the powers planned to secure their interests in Africa. Without African participation, the governments of Europe and the USA decided on freedom of trade and navigation. In the race for Africa, borders were drawn arbitrarily. Goods such as cotton, rubber, wood, ores, minerals, precious metals, coffee, tea, spices, raw sugar and tobacco were indispensable for the European economy.
Semi-colonies emerged, which were economically and financially dependent and thus fell into debt.
Before the outbreak of the First World War, the European colonial powers ruled 84.4% of the land surface with around 450 million inhabitants.
Even after most of the former colonies gained their independence between the 1960s and 80s, almost nothing changed in terms of economic and political dependency. Many formerly colonized countries are still heavily in debt today. Their worldwide coveted natural resources are being exploited, and agricultural autonomy has been and is being destroyed in the process. Plantation economy was introduced, tribal cultures suppressed. The consequences were Europeanization and economic backwardness.
The African population, for example, became increasingly impoverished. There were executions, deportations, concentration in camps, forced labor and lawlessness. The African population often tried to defend itself through resistance, for example against the Italian invasion of Ethiopia, in the Sudan against the British, in the battle of the Hereros against German troops in South West Africa, and in the Boer War from 1899 to 1902.
Reading tips and links
http://www.zeit.de/2001/51/Die_Schuld_des_Westen?page=4 (accessed on 7.1.2018)
Austrian Study Center for Peace and Conflict Resolution (ed.) From cold energy strategies to hot raw material wars? LitVerlag, Vienna 2008
Otto Zierer: New World History - Volume II (1966). Stuttgart / Salzburg; Fackelverlag Olten, pp. 315-349, pp. 225-246, pp. 377-383, pp. 485-497.
Otto Zierer: New World History - Volume III (1967). Stuttgart / Salzburg: Fackelverlag Olten, pp. 77-91, pp. 146-169, pp. 226-229, pp. 309-320
The Ravensburger Lexicon of World History - Volume 2 (1995). Ravensburg: Buchverlag Otto Maier GmbH, pp. 222-223, p. 261.
Wikipedia: Colonialism (accessed on 7.1.2018)
Wikipedia: Imperialism (accessed on 7.1.2018)
Hans Georg Schachtschabel (1979). Lexicon of economic policy. Munich, Wilhelm Goldmann
Medico international e.V. and DGB Bildungswerk / Nord-Süd Netz (ed.) (2005). Raw materials trade and war in Africa - on the causes and consequences of armed war. Frankfurt: Medico International e.V., p. 4
Image source: http://commons.wikimedia.org (accessed on 7.1.2018)
The involvement of the Belgian Congo (the modern-day Democratic Republic of Congo) in World War II began with the German invasion of Belgium in May 1940. Despite Belgium's surrender, the Congo remained in the conflict on the Allied side, administered by the Belgian government in exile, and provided much-needed raw materials, most notably gold and uranium, to Britain and the United States. Congolese troops of the Force Publique fought alongside British forces in the East African Campaign, and a Congolese medical unit served in Madagascar and in the Burma Campaign. Congolese formations also acted as garrisons in Egypt, Nigeria and Palestine.
The increasing demands placed on the Congolese population by the colonial authorities during the war, however, provoked strikes, riots and other forms of resistance, particularly from the indigenous Congolese. These were repressed, often violently, by the Belgian colonial authorities. The Congo's comparative prosperity during the conflict led to a wave of post-war immigration from Belgium, bringing the white population to 100,000 by 1950, as well as a period of industrialisation that continued throughout the 1950s. The role played by Congolese uranium during the hostilities caused the country to be of interest to the Soviet Union during the Cold War.
Background
Following World War I, Belgium possessed two colonies in Africa—the Belgian Congo, which it had controlled since its annexation of the Congo Free State in 1908, and Ruanda-Urundi, a former German colony that had been mandated to Belgium in 1924 by the League of Nations. The Belgian colonial military numbered 18,000 soldiers, making it one of the largest standing colonial armies in Africa at the time.
The Belgian government followed a policy of neutrality during the interwar years. Nazi Germany invaded on 10 May 1940 and, after 18 days of fighting, Belgium surrendered on 28 May; it was thereafter occupied by German forces. King Leopold III, who had surrendered to the Germans, was kept a prisoner for the rest of the war. Just before the fall of Belgium, its government, including the Minister of the Colonies Albert de Vleeschauwer, fled first to Bordeaux in France, then to London, where it formed an official Belgian government in exile in October 1940.
The Governor-General of the Congo, Pierre Ryckmans, decided on the day of Belgium's surrender that the colony would remain loyal to the Allies, in stark contrast to the French colonies that later pledged allegiance to the pro-German Vichy government. The Congo was therefore administered from London by the Belgian government in exile during the war.
Economic contribution
With the Congolese declaration of support for the Allies, the economy of the Congo and in particular its production of important raw materials, was placed at the disposal of other governments, particularly Britain and the United States:
“The Belgian Congo has entered the service of the Allies. Its economic doctrine and practices have been rapidly adapted to the new conditions and, whilst everything is being done to maintain the potentiality of the Congo wealth, there is no hesitation whatsoever when it comes to sacrificing any riches in favor of the war effort.”
The Congo had become increasingly centralised economically during the Great Depression of the 1930s, as the Belgian government encouraged the production there of cotton, which had value on the international market. The greatest economic demands on the Congo were related to raw materials. Between 1938 and 1944, the number of workers employed in the mines of the Union Minière du Haut Katanga (UMHK) rose from 25,000 to 49,000 to cope with the increased demand. The vast majority of the Congolese-produced raw resources were exported to other Allied countries. By 1942, the entire colony's output of copper, palm oil and industrial diamonds was being exported to the United Kingdom, while almost all the colony's lumber was sent to South Africa. Exports to the United States also rose from US$600,000 in early 1940 to $2,700,000 by 1942.
The Congo possessed major uranium deposits and was one of the few sources of the material available to the Allies. Uranium extracted from the disused Shinkolobwe uranium mine, owned by the UMHK in Katanga in the southern Congo, was instrumental in the development of an atomic bomb during the American Manhattan Project. The director of UMHK, Edgar Sengier, secretly despatched half of its uranium stock to New York in 1940; in September 1942, he sold it to the United States Army. Sengier himself moved to New York, from where he directed the UMHK's operations for the rest of the war. The U.S. government sent soldiers from the Army Corps of Engineers to Shinkolobwe in 1942 to restore the mine and improve its transport links by renovating the local aerodromes and port facilities. In 1944, the Americans acquired a further 1,720 long tons (1,750 t) of uranium ore from the newly reopened mine.
Tax revenue from the Belgian Congo enabled the Belgian government in exile and Free Belgian Forces to fund themselves, unlike most other states in exile, which operated through subsidies and donations from sympathetic governments. It also meant that the Belgian gold reserves, which had been moved to London in 1940, were not needed to fund the war effort, and therefore were still available at the end of the war.
Military involvement
The Force Publique (or "Public Force") was the combined police and military force of both the Congo and Ruanda-Urundi. During World War II, it constituted the bulk of the Free Belgian Forces, numbering some 40,000 men. Like other colonial armies of the time, the Force Publique was racially segregated; it was led by 280 white officers and NCOs, but otherwise comprised indigenous black Africans. The Force Publique had never received the more modern equipment supplied to the Belgian Armed Forces before the war, and so had to use outdated weapons and equipment like the Stokes mortar and the Saint Chamond 75 mm gun.
East African Campaign
Three brigades of the Force Publique were sent to Abyssinia alongside British forces to fight the Italians in June 1940. This was done, despite the government in exile's reservations, to demonstrate Belgian allegiance to the Allied cause. The Belgian 1st Colonial Brigade operated in the Galla-Sidamo area in the South-West sector. In May 1941, Force Publique elements under Major-General Auguste-Éduard Gilliaert successfully cut off the retreat of General Pietro Gazzera's Italians at Saio, in the Ethiopian Highlands. Gilliaert subsequently accepted the surrender of Gazzera and 7,000 Italian troops. Over the course of the campaign in Abyssinia, the Force Publique received the surrender of nine Italian generals, 370 high-ranking officers and 15,000 Italian colonial troops before the end of 1941. The Congolese forces in Abyssinia suffered about 500 fatalities.
After the Allied victory in Abyssinia, the Force Publique was redesignated the 1st Belgian Colonial Motorised Brigade Group. It garrisoned Egypt and British Mandatory Palestine during 1943 and 1944. The British colony of Nigeria was also garrisoned by 13,000 Congolese troops.
10 (Belgian Congo) Casualty Clearing Station
A medical unit from the Congo, the 10th (Belgian Congo) Casualty Clearing Station, was formed in 1943, and served alongside British forces during the invasion of Madagascar and in the Far East during the Burma Campaign. The unit included 350 black and 20 white personnel, and continued to serve with the British until 1945.
Life in the Belgian Congo during the war
At the start of the war, the population of the Congo numbered approximately 12 million black people and 30,000 whites. The colonial government segregated the population along racial lines and there was very little mixing between the colours. The white population of Léopoldville lived in a quarter of the city separated from the black majority, and all blacks in the city had to adhere to a curfew.
Education was overwhelmingly controlled by Protestant and Catholic missions, which were also responsible for providing limited medical and welfare support to the rural Congolese. Food remained unrationed during the war, with only the sales of tyres and automobiles restricted by the government. One of the consequences of the Congo's economic mobilisation during the war, particularly for the black population, was significant urbanisation. Just 9% of the indigenous population lived in cities in 1938; by 1950, the figure stood at close to 20%. The colonial authorities arrested enemy aliens in the Congo and confiscated their property in 1940.
Unrest during the war
Strikes
The demands made by the colonial government on Congolese workers during the war provoked strikes and riots from the workforce. Whites in the colony were allowed to form trade unions for the first time during the war, and their demands for better pay and working conditions were often emulated by black workers. In October 1941, white workers in the colony unsuccessfully attempted a general strike across the colony.
In December 1941, mine workers at various sites, including Jadotville and Élizabethville, went on strike, demanding that their pay be increased from 1.50 francs to 2 francs to compensate for rising living costs. The strike started on 3 December, and by the next day 1,400 workers had downed tools. All UMHK sites were affected by 9 December. The strike was also fuelled by other grievances against the colonial order and segregation:
“Why should a white man be paid more than a black, when all the white man does is stand there, giving orders, his arms behind his back and with his pipe in his mouth? We should take our rights, or we won't work tomorrow.”
— Léonard Mpoyi, December 1941
From the start, the colonial authorities attempted to persuade the strikers to disperse and go back to work. When they refused, they were fired on. In Jadotville, 15 strikers were shot dead by the military. In Élizabethville, the strikers were invited to negotiations at the town's stadium, where they were offered various concessions, including a 30% pay rise. When the workers refused, the Governor of Katanga, Amour Maron, shot Mpoyi, killing him. The Governor then ordered his soldiers to fire on the other strikers in the stadium. Between 60 and 70 strikers were killed during the protest, although the official estimate was around 30. The miners returned to work on 10 December.
Numerous smaller strikes occurred in the Congo later in the war, though not on the same scale as in 1941. In 1944 strikes broke out in Katanga and Kasaï, provoked by the conscription of workers for the mines and deteriorating working conditions. In 1945, riots and strikes occurred among the black dockworkers in the port city of Matadi.
Luluabourg mutiny
The colonial government in the Congo depended on its military to maintain civil order and, above all, it depended on the loyalty of the native troops who made up the bulk of the Force Publique. Black non-commissioned officers led by First Sergeant-Major Ngoie Mukalabushi, a veteran of the East Africa Campaign, mutinied at Luluabourg in the central Congolese province of Kasaï in February 1944; the trigger was a plan to vaccinate troops who had served at the front, though the soldiers were also unhappy about the demands placed on them and the way their white officers treated them.
The mutineers broke into the base's armoury on the morning of 20 February and pillaged the white quarter of the town. The town's inhabitants fled, and a Belgian officer and two white civilians were killed. The mutineers attacked visible signs of the colonial authorities and proclaimed their desire for independence. The mutineers then dispersed to their home villages, pillaging on the way; they failed to spread the insurrection to neighbouring garrisons. Two mutineers, including Mukalabushi, were executed for their part in the insurrection.
Legacy
As a result of the Congo's comparative prosperity during the conflict, the post-war period saw a wave of immigration to the country from Belgium. By 1950, 100,000 whites were living in the Congo. Nevertheless, the war highlighted the precarious nature of the colonial administration, leading Governor Ryckmans to remark that "the days of colonialism are over" in 1946. In the years after the war, the colonial government underwent extensive reform. Black people were granted significantly more rights and freedoms, leading to the growth of a so-called Évolué ("evolved") class.
Following the industrial unrest, trade unions for black workers were instituted in 1946, though they lacked power and influence. Workers at the UMHK continued to demand higher wages, and strikes were common in the colony for the next decade. Nevertheless, both wages and living conditions improved significantly in the years after the war. The war began a second wave of industrialisation that lasted right up to Congolese independence in 1960.
The 1941 Élizabethville massacre is a recurrent theme in Congolese art and folklore, and was later incorporated into the popular Congolese anti-colonial narrative. The importance of Congolese uranium during the war caused the Soviet Union to become interested in the territory; it was subsequently an area of Soviet interest during the Cold War.
See also
References
- "Empire Colonial Belge". Online Encyclopedia. Éditions Larousse. http://www.larousse.fr/encyclopedie/autre-region/Empire_colonial_belge/182725. Retrieved 17 July 2013.
- Killingray, David (2010). Fighting for Britain: African soldiers in the Second World War. Woodbridge, Suffolk: James Currey. p. 7. ISBN 978-1-84701-015-5.
- Yapou, Eliezer (1998). "4: Belgium: Disintegration and Resurection". Governments in Exile, 1939–1945. Jerusalem. http://governmentsinexile.com/yapoubelgium.html.
- "L'Histoire du Congo vue par les Coloniaux" (PDF). Urome.be. http://www.urome.be/pdf/fhisconbe.pdf. Retrieved 19 July 2013.
- Various authors (1942). The Belgian Congo at War. New York: Belgian Information Center. p. 7. http://www.ibiblio.org/hyperwar/UN/Belgium/Congo/.
- Various authors (1942). The Belgian Congo at War. New York: Belgian Information Center. p. 3. http://www.ibiblio.org/hyperwar/UN/Belgium/Congo/.
- Various authors (1942). The Belgian Congo at War. New York: Belgian Information Center. p. 8. http://www.ibiblio.org/hyperwar/UN/Belgium/Congo/.
- Zeilig, Leo, Renton, David; Seddon, David (2007). The Congo: Plunder and Resistance. London: Zed Books. p. 66. ISBN 1-84277-485-9.
- Various authors (1942). The Belgian Congo at War. New York: Belgian Information Center. p. 11. http://www.ibiblio.org/hyperwar/UN/Belgium/Congo/.
- Einstein, Albert (8 February 1939). "Letter from Albert Einstein to President Franklin D. Roosevelt". http://research.archives.gov/description/593374. Retrieved 9 September 2013.
- Broad, William J. (30 October 2007). "Why They Called It the Manhattan Project". New York Times. http://www.nytimes.com/2007/10/30/science/30manh.html. Retrieved 9 September 2013.
- Pollack, Michael (25 March 2011). "Answers to Questions About New York City". New York Times. http://www.nytimes.com/2011/03/27/nyregion/27fyi.html. Retrieved 3 September 2013.
- Hunt, Margaret R.. "Manhattan-Uranium Connection sidenotes". Amherst College. http://www3.amherst.edu/~mrhunt/uranium/notes8.htm. Retrieved 9 August 2013.
Most dialects of modern English have two close back vowels: the near-close near-back rounded vowel /ʊ/ found in words like foot, and the close back rounded vowel /u:/ (realized as central [ʉ:] in many dialects) found in words like goose. The STRUT vowel /ʌ/, which historically was back, is often central [ɐ] as well. This article discusses the history of these vowels in various dialects of English, focusing in particular on phonemic splits and mergers involving these sounds.
The Old English vowels included a pair of short and long close back vowels, /u/ and /u:/, both written ⟨u⟩ (the longer vowel is often distinguished as ⟨ū⟩ in modern editions of Old English texts). There was also a pair of back vowels of mid-height, /o/ and /o:/, both of which were written ⟨o⟩ (the longer vowel is often ⟨ō⟩ in modern editions).
The same four vowels existed in the Middle English system. The short vowels were still written ⟨u⟩ and ⟨o⟩, but long /u:/ came to be spelt as ⟨ou⟩, and /o:/ as ⟨oo⟩. Generally, the Middle English vowels descended from the corresponding Old English ones, but there were certain alternative developments: see Phonological history of Old English#Changes leading up to Middle and Modern English.
The Middle English open syllable lengthening caused short /o/ to be mostly lengthened to /ɔ:/ (an opener back vowel) in open syllables; this development can be seen in words like nose. During the Great Vowel Shift, Middle English long /o:/ was raised to /u:/ in words like moon; Middle English long /u:/ was diphthongised, becoming the present-day /aʊ/, as in mouse; and Middle English /ɔ:/ of nose was raised and later diphthongized, leading to present-day /oʊ ~ əʊ/.
At some point, short /u/ developed into a lax, near-close near-back rounded vowel, /ʊ/, as found in words like put. (Similarly, short /i/ has become /ɪ/.) According to Roger Lass, the laxing occurred in the 17th century, but other linguists have suggested that it may have taken place much earlier. The short /o/ remaining in words like lot has also been lowered and, in some accents, unrounded (see open back vowels).
In a handful of words, some of which are very common, the vowel /u:/ was shortened to /ʊ/. In a few of those words, notably blood and flood, the shortening happened early enough that the resulting /ʊ/ underwent the "foot-strut split" (see next section), so those words are now pronounced with /ʌ/. Other words that underwent shortening later consistently have /ʊ/, such as good, book, and wool. Still other words, such as roof, hoof, and root, are still in the process of the shift, with some speakers preferring /u:/ and others preferring /ʊ/ in such words, as in Texan English. For some speakers in Northern England, words ending in -ook, such as book and cook, still have the long /u:/ vowel.
The FOOT-STRUT split is the split of Middle English short /u/ into two distinct phonemes: /ʊ/ (as in foot) and /ʌ/ (as in strut). The split occurs in most varieties of English, the most notable exceptions being most of Northern England and the English Midlands and some varieties of Hiberno-English. In Welsh English, the split is also absent in parts of North Wales, under influence from Merseyside and Cheshire accents, and south Pembrokeshire, where English replaced Welsh long before it occurred in the rest of Wales.
The origin of the split is the unrounding of /ʊ/ in Early Modern English, resulting in the phoneme /ʌ/. Usually, unrounding to /ʌ/ did not occur if /ʊ/ was preceded by a labial consonant, such as /p/, /f/, /b/, and was followed by /l/, /ʃ/, or /tʃ/, leaving the modern /ʊ/. Because of the inconsistency of the split, put and putt became a minimal pair, distinguished as /pʊt/ and /pʌt/. The first clear description of the split dates from 1644.
In non-splitting accents, cut and put rhyme, putt and put are homophonous as /pʊt/, and pudding and budding rhyme. However, luck and look may not necessarily be homophones, since many accents in the area concerned have look as /lu:k/, with the vowel of goose. In the Coventry area, a schwa is often hypercorrected to /ʊ/, as in 'button'.
The absence of the split is a less common feature of educated Northern English speech than the absence of the trap-bath split. The absence of the foot-strut split is sometimes stigmatized, and speakers of non-splitting accents may try to introduce it into their speech, which sometimes results in hypercorrection, such as by pronouncing butcher /ˈbʌtʃə/.
The name "FOOT-STRUT split" refers to the lexical sets introduced by Wells (1982) and identifies the vowel phonemes in the words. From a historical point of view, however, the name is inappropriate because the word foot did not have short /?/ when the split happened, but it underwent shortening only later.
|Great Vowel Shift||u:||u:||u:||u||u|
In modern standard varieties of English, such as Received Pronunciation (RP) and General American (GA), spelling is a reasonably good guide to whether a word is in the FOOT or STRUT lexical sets. The spellings o (apart from the regular LOT /ɒ/) and u nearly always indicate the STRUT set (common exceptions are wolf, woman, pull, bull, full, push, bush, cushion, puss, put, pudding and butcher), and the spellings oo and ould usually indicate the FOOT set (common exceptions are blood and flood). The spellings of some words changed in accordance with that pattern: wull became wool, and wud became wood. In some recent loanwords such as Muslim, both pronunciations are found.
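As a rough illustration only, the spelling rule of thumb just described can be sketched as a short Python function. The exception word lists are taken from the passage above; the function name, coverage and output are invented for this example and are not part of any published analysis.

```python
# Rough sketch of the spelling heuristic described above for RP/GA:
# <oo>/<ould> spellings usually signal FOOT, <u>/<o> usually signal STRUT,
# with the exception words quoted in the text. Purely illustrative.

FOOT_EXCEPTIONS = {   # spelled with u/o but belonging to the FOOT set
    "wolf", "woman", "pull", "bull", "full", "push", "bush",
    "cushion", "puss", "put", "pudding", "butcher",
}
STRUT_EXCEPTIONS = {"blood", "flood"}  # spelled with oo but in the STRUT set

def guess_lexical_set(word: str) -> str:
    """Guess FOOT or STRUT membership from spelling (very rough heuristic)."""
    w = word.lower()
    if w in FOOT_EXCEPTIONS:
        return "FOOT"
    if w in STRUT_EXCEPTIONS:
        return "STRUT"
    if "oo" in w or "ould" in w:
        return "FOOT"
    if "u" in w or "o" in w:
        return "STRUT"
    return "unknown"

for w in ["strut", "foot", "blood", "put", "luck", "look"]:
    print(w, "->", guess_lexical_set(w))
# In a non-splitting Northern accent both sets surface with the same vowel,
# so for such speakers "put" and "putt" come out identical.
```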
The STRUT-commA merger or the STRUT-schwa merger is a merger of /ʌ/ with /ə/ that occurs in Welsh English and some higher-prestige Northern England English. Also usual in General American, the merger causes minimal pairs such as unorthodoxy and an orthodoxy to be merged. The phonetic quality of the merged vowel depends on the accent, with merging General American accents, for instance, using different stressed and word-final variants. That can cause words such as hubbub (/'hʌbʌb/ in RP) to have two different vowels (['hʌbəb]) even though both syllables contain the same phoneme in both merging and non-merging accents (General American also features the weak vowel merger). On the other hand, in Birmingham, Swansea and Miami, at least the non-final variant of the merged vowel is consistently realized as mid-central [ə], with no noticeable difference between the stressed and the unstressed allophones.
The merged vowel is typically written with ⟨ə⟩, regardless of its phonetic realization. That largely matches an older canonical phonetic range of the IPA symbol ⟨ə⟩, which used to be described as covering a vast central area from near-close to near-open.
Because of the unstressed nature of /ə/, the merger occurs only in unstressed syllables. Word-finally, the two vowels do not contrast in any accent of English (Middle English /u/, the vowel from which /ʌ/ was split, could not occur in that position), and the vowel that occurs in that position approaches the main allophone of STRUT in many accents. However, there is some dialectal variation, with varieties such as broad Cockney using variants that are strikingly more open than in other dialects. It is usually identified as belonging to the /ʌ/ phoneme, even in accents without the /ʌ/-/ə/ merger, but native speakers may perceive the phonemic makeup of words such as comma to be /'kɒmə/, rather than /'kɒmʌ/. The open variety of /ə/ occurs even in Northern English dialects (such as Geordie) that have not undergone the foot-strut split, but in Geordie, it can be generalised to other positions, and so not only comma but also commas can be pronounced with the open variety in the second syllable, which is rare in other accents. In contemporary General British the final /ə/ is often mid, rather than open.
All speakers of General American neutralise /ʌ/, /ə/ and /ɜ:/ (the NURSE vowel) before /r/, which results in an r-colored vowel [ɚ]. GA lacks a truly contrastive /ɜ:/ phoneme (furry, hurry, letters and transfer (n.), distinguished in RP as /ɜ:/, /ʌ/, /ə/ and /ɜ:/, all have the same r-colored [ɚ] in GA), and the symbol is used only to facilitate comparisons with other accents. See hurry-furry merger for more information.
Some other minimal pairs apart from unorthodoxy-an orthodoxy include unequal vs. an equal, as well as a large untidy room vs. a large and tidy room. However, there are few minimal pairs like that, and their use as such has been criticized by scholars such as Geoff Lindsey because the members of such minimal pairs are structurally different. There are also words for which RP always used /ʌ/ in the unstressed syllable, such as pick-up or sawbuck, for which merging accents use the same /ə/ as the second vowel of balance. In RP, there is a consistent difference in vowel height; the unstressed vowel in the first two words is near-open (traditionally written with ⟨ʌ⟩), but in balance, it is mid [ə].
|a large untidy room||a large and tidy room||/ə 'lɑ:(r)dʒ ən'taɪdi 'ru:m/|
Earlier Middle English distinguished the close front rounded vowel /y/ (occurring in loanwords from Anglo-Norman like duke) and the diphthongs /iu/ (occurring in words like new), /eu/ (occurring in words like few) and /ɛu/ (occurring in words like dew).
In Late Middle English, /y/, /eu/, and /iu/ had merged as /ɪu/. In Early Modern English, /ɛu/ merged into /ɪu/ as well.
/ɪu/ has remained as such in some Welsh, some northern English and a few American accents. Thus, those varieties of Welsh English keep threw /θrɪu/ distinct from through /θru:/. In most accents, however, the falling diphthong /ɪu/ turned into a rising diphthong, which became the sequence /ju:/. The change had taken place in London by the late 17th century. Depending on the preceding consonant and on the dialect, it either remained as /ju:/ or developed into /u:/ by the processes of yod-dropping or yod-coalescence. That has caused the standard pronunciations of duke /d(j)u:k/ (or /dʒu:k/), new /n(j)u:/, few /fju:/ and rude /ru:d/.
The FOOT-GOOSE merger is a phenomenon that helps define Scottish English, Northern Irish English, Malaysian English, and Singapore English, in which the modern English phonemes /ʊ/ and /u:/ have merged into a single phoneme. As a result, word pairs like look and Luke are homophones, and pairs like good and food, and foot and boot, rhyme.
The history of the merger dates back to two Middle English phonemes: the long vowel /o:/ (which shoot traces back to) and the short vowel /u/ (which put traces back to). As a result of the Great Vowel Shift, /o:/ raised to /u:/, which continues to be the pronunciation of shoot today. Meanwhile, the Middle English /u/ later adjusted to /ʊ/, as put is pronounced today. However, the /u:/ of shoot next underwent a phonemic split in which some words retained /u:/ (like mood) while the vowel of other words shortened to /ʊ/ (like good). Therefore, the two processes (/o:/->/u:/->/ʊ/ and /u/->/ʊ/) resulted in a merger of the vowels in certain words, like good and put, to /ʊ/, which is now typical of how all English dialects pronounce those two words. (See the table in the section "FOOT-STRUT split" above for more information about these early shifts.) The final step, however, was for certain English dialects under the influence of foreign languages (the Scots language influencing Scottish English, for example) to merge the newly united /ʊ/ vowel with the /u:/ vowel (of mood and shoot): the FOOT-GOOSE merger. Again, this is not an internally motivated phonemic merger but the application of different languages' vowel systems to English lexical incidence. The quality of this final merged vowel is usually a central or front rounded vowel such as [ʉ] or [y] in Scotland and North Ireland but [u] in Singapore.
The full-fool merger is a conditioned merger of the same two vowels specifically before /l/, which causes pairs like pull/pool and full/fool to be homophones; it appears in many other dialects of English and is particularly gaining attention in several American English varieties.
|Great Vowel Shift||u:||u:||u||u|
In Geordie, the GOOSE vowel undergoes an allophonic split, with the monophthong [u: ~ ʉ:] being used in morphologically-closed syllables (as in bruise [bru:z ~ brʉ:z]) and a diphthong being used in morphologically-open syllables word-finally (as in brew) but also word-internally at the end of a morpheme (as in brews).
Most dialects of English turn /u:/ into a diphthong, and monophthongal realizations such as [u: ~ ʉ:] are in free variation with diphthongal ones, particularly word-internally. Word-finally, diphthongs are more usual.
The change of /u:.ə/ to [ʊə] is a process that occurs in many varieties of British English in which bisyllabic /u:.ə/ has become the diphthong [ʊə] in certain words. As a result, "ruin" is pronounced as monosyllabic ['rʊən] and "fluid" is pronounced ['flʊɪd].
Megalodon (/ˈmɛɡələdɒn, -loʊ-/ MEG-ə-lə-don or /ˈmeɪɡələdɒn, -loʊ-/ MAY-gə-lə-don, meaning "big tooth", from Ancient Greek: μέγας (megas) "big, mighty" and ὀδούς (odoús), "tooth", whose stem is odont-, as seen in the genitive case form ὀδόντος, odóntos) is an extinct species of shark that lived approximately 23 to 2.6 million years ago, during the Cenozoic Era (early Miocene to end of Pliocene).
The taxonomic assignment of C. megalodon has been debated for nearly a century, and is still under dispute. The two major interpretations are Carcharodon megalodon (under family Lamnidae) or Carcharocles megalodon (under the family Otodontidae). Consequently, the scientific name of this species is commonly abbreviated C. megalodon in the literature.
Regarded as one of the largest and most powerful predators in vertebrate history, C. megalodon probably had a profound impact on the structure of marine communities. Fossil remains suggest that this giant shark reached a length of 18 metres (59 ft), and also indicate that it had a cosmopolitan distribution. Scientists suggest that C. megalodon looked like a stockier version of the great white shark, Carcharodon carcharias.
According to Renaissance accounts, gigantic, triangular fossil teeth often found embedded in rocky formations were once believed to be the petrified tongues, or glossopetrae, of dragons and snakes. This interpretation was corrected in 1667 by Danish naturalist Nicolaus Steno, who recognized them as shark teeth, and famously produced a depiction of a shark’s head bearing such teeth. He described his findings in the book The Head of a Shark Dissected, which also contained an illustration of a C. megalodon tooth.
Swiss naturalist Louis Agassiz gave the shark its initial scientific name, Carcharodon megalodon, in 1835, in his research work Recherches sur les poissons fossiles (Research on fossil fish), which he completed in 1843. C. megalodon teeth are morphologically similar to the teeth of the great white shark, and on the basis of this observation, Agassiz assigned C. megalodon to the genus Carcharodon. While the scientific name is C. megalodon, it is often informally dubbed the “megatooth shark”, “giant white shark” or “monster shark”.
C. megalodon is represented in the fossil record primarily by teeth and vertebral centra. As with all sharks, C. megalodon’s skeleton was formed of cartilage rather than bone; this means that most fossil specimens are poorly preserved. While the earliest C. megalodon remains were reported from late Oligocene strata, around 28 million years old, a more reliable date for the origin of the species is the early Miocene, about 23 million years ago. Although fossils are mostly absent in strata extending beyond the Tertiary boundary, they have been reported from subsequent Pleistocene strata. It is believed that C. megalodon became extinct around the end of the Pliocene, probably about 2.6 million years ago; reported post-Pliocene C. megalodon teeth are thought to be reworked fossils. C. megalodon had a cosmopolitan distribution; its fossils have been excavated from many parts of the world, including Europe, Africa and both North and South America, as well as Puerto Rico, Cuba, Jamaica, the Canary Islands, Australia, New Zealand, Japan, Malta, the Grenadines and India. C. megalodon teeth have been excavated from regions far away from continental lands, such as the Mariana Trench in the Pacific Ocean.
The most common fossils of C. megalodon are its teeth. Diagnostic characteristics include: triangular shape, robust structure, large size, fine serrations, and visible V-shaped neck. C. megalodon teeth can measure over 180 millimetres (7.1 in) in slant height or diagonal length, and are the largest of any known shark species.
Some fossil vertebrae have been found. The most notable example is a partially preserved vertebral column of a single specimen, excavated in the Antwerp basin, Belgium, by M. Leriche in 1926. It comprises 150 vertebral centra, with the centra ranging from 55 millimetres (2.2 in) to 155 millimetres (6.1 in) in diameter. However, scientists have claimed that considerably larger vertebral centra can be expected. A partially preserved vertebral column of another C. megalodon specimen was excavated from Gram clay in Denmark by Bendix-Almgreen in 1983. This specimen comprises 20 vertebral centra, with the centra ranging from 100 millimetres (3.9 in) to 230 millimetres (9.1 in) in diameter.
Taxonomy and evolution
Even after decades of research and scrutiny, controversy over C. megalodon phylogeny persists. Several shark researchers (e.g. J. E. Randall, A. P. Klimley, D. G. Ainley, M. D. Gottfried, L. J. V. Compagno, S. C. Bowman, and R. W. Purdy) insist that C. megalodon is a close relative of the great white shark. However, others (e.g. D. S. Jordan, H. Hannibal, E. Casier, C. DeMuizon, T. J. DeVries, D. Ward, and H. Cappetta) cite convergent evolution as the reason for the dental similarity. Such Carcharocles advocates have gained noticeable support. However, the original taxonomic assignment still has wide acceptance.
C. megalodon within Carcharodon
The traditional view is that C. megalodon should be classified within the genus Carcharodon along with the great white shark. The main reasons cited for this phylogeny are: (1) an ontogenetic gradation, whereby the teeth shift from coarse serrations as a juvenile to fine serrations as an adult, the latter resembling C. megalodon’s; (2) morphological similarity of teeth of young C. megalodon to those of C. carcharias; (3) a symmetrical second anterior tooth; (4) a large intermediate tooth that is inclined mesially; and (5) upper anterior teeth that have a chevron-shaped neck area on the lingual surface. Carcharodon supporters suggest that C. megalodon and C. carcharias share a common ancestor, Palaeocarcharodon orientalis.
C. megalodon within Carcharocles
Around 1923, the genus Carcharocles was proposed by D. S. Jordan and H. Hannibal, to classify the shark C. auriculatus. Later on, Carcharocles proponents assigned C. megalodon to Carcharocles. Carcharocles proponents also suggest that the direct ancestor of the sharks belonging to Carcharocles is an ancient giant shark called Otodus obliquus, which lived during the Paleocene and Eocene epochs. According to Carcharocles supporters, Otodus obliquus evolved into Otodus aksuaticus, which evolved into Carcharocles auriculatus, and then into Carcharocles angustidens, and then into Carcharocles chubutensis, and then into C. megalodon. Hence, the immediate ancestor of C. megalodon is C. chubutensis, because it serves as the missing link between C. angustidens and C. megalodon and it bridges the loss of the "lateral cusps" that characterize C. megalodon.
Reconsideration of megatooth lineage from Carcharocles to Otodus
Shark researchers are apparently reconsidering the genus of the entire Carcharocles lineage back to Otodus.
Megalodon as a chronospecies
Shark researcher David Ward elaborated on the evolution of Carcharocles by implying that this lineage, stretching from the Paleocene to the Pliocene, is of a single giant shark which gradually changed through time, suggesting a case of chronospecies. This assessment may be credible.
Mako sharks as closest relatives of great white sharks
Carcharocles proponents point out that the great white shark is closely related to the ancient shark Isurus hastalis, the “broad tooth mako”, rather than to C. megalodon. One reason cited by paleontologist Chuck Ciampaglio is that the dental morphometrics (variations and changes in the physical form of objects) of I. hastalis and C. carcharias are remarkably similar. Another reason cited is that C. megalodon teeth have much finer serrations than C. carcharias teeth. Further evidence linking the great white shark more closely to ancient mako sharks, rather than to C. megalodon, was provided in 2009 – the fossilized remains of a form of the great white shark about 4 million years old were excavated from southwestern Peru in 1988. These remains demonstrate a likely shared ancestor of modern mako and great white sharks.
Ciampaglio asserted that dental similarities between C. megalodon and the great white are superficial with noticeable morphometric differences between them, and that these findings are sufficient to warrant a separate genus. However, some Carcharodon proponents (i.e., M. D. Gottfried, and R. E. Fordyce) provided more arguments for a close relationship between the megatooth and the great white. With respect to the recent controversy regarding fossil lamnid shark relationships, overall morphology – particularly the internal calcification patterns – of the great white shark vertebral centra have been compared to well-preserved fossil centra from the megatooth, including C. megalodon and C. angustidens. The morphological similarity of these comparisons supports a close relationship of the giant fossil megatooth species to extant whites.
Gottfried and Fordyce pointed out that some great white shark fossils are about 16 million years old and predate the transitional Pliocene fossils. In addition, the Oligocene C. megalodon records contradict the suggestion that C. chubutensis is the immediate ancestor of C. megalodon. These records also indicate that C. megalodon co-existed with C. angustidens.
Some paleontologists argue that the genus Otodus should be used for sharks within the Carcharocles lineage and that the genus Carcharocles should be discarded.
Several Carcharocles proponents (i.e. C. Pimiento, D. J. Ehret, B. J. MacFadden, and G. Hubbell) claim that both species belong to the order Lamniformes, and in the absence of living members of the family Otodontidae, the great white shark is the species most ecologically analogous to C. megalodon.
Due to fragmentary remains, estimating the size of C. megalodon has been challenging. However, the scientific community has concluded that C. megalodon was larger than the whale shark, Rhincodon typus. Scientists focused on two aspects of size: total length and body mass.
The first attempt to reconstruct the jaw of C. megalodon was made by Bashford Dean in 1909. From the dimensions of this jaw reconstruction, it was hypothesized that C. megalodon could have approached 30 metres (98 ft). Better knowledge of dentition and more accurate muscle structures led to a rectified version of Dean's jaw model, about 70 percent of its original size, and to a size consistent with modern findings. To resolve such errors, scientists, aided by new fossil discoveries of C. megalodon and improved knowledge of its closest living analogue's anatomy, introduced more quantitative methods for estimating its size based on the statistical relationships between tooth sizes and body lengths. Some methods are mentioned below.
In 1973, Hawaiian ichthyologist John E. Randall used a plotted graph to demonstrate a relationship between the enamel height (the vertical distance of the blade from the base of the enamel portion of the tooth to its tip) of the largest tooth in the upper jaw of the great white shark and the shark’s total length. Randall extrapolated this method to estimate C. megalodon’s total length. Randall cited two C. megalodon teeth in his work, specimen number 10356 at the American Museum of Natural History and specimen number 25730 at the United States National Museum, which had enamel heights of 115 millimetres (4.5 in) and 117.5 millimetres (4.63 in), respectively. These teeth yielded a corresponding total length of about 13 metres (43 ft). In 1991, Richard Ellis and John E. McCosker claimed that tooth enamel height does not necessarily increase in proportion to the animal’s total length.
Largest anterior tooth height
In 1996, after scrutinizing 73 great white shark specimens, Michael D. Gottfried, Leonard Compagno and S. Curtis Bowman proposed a linear relationship between the shark's total length and the height of the largest upper anterior tooth. The proposed relationship is: total length in metres = (0.096 × UA maximum height in mm) - 0.22. Gottfried and colleagues then extrapolated their technique to C. megalodon. The biggest C. megalodon tooth in the possession of this team, one discovered by Compagno in 1993, was an upper second anterior specimen, the maximum height of which was 168 millimetres (6.6 in). It yielded an estimated total length for C. megalodon of 15.9 metres (52 ft). Rumors of larger C. megalodon teeth persisted at the time. The maximum tooth height for this method is measured as a vertical line from the tip of the crown to the bottom of the lobes of the root, parallel to the long axis of the tooth. In layman's terms, the maximum height of the tooth is its slant height.
In 2002, shark researcher Clifford Jeremiah proposed that total length was proportional to the root width of an upper anterior tooth. He claimed that for every 1 centimetre (0.39 in) of root width, there are approximately 1.4 metres (4.6 ft) of shark length. Jeremiah pointed out that the jaw perimeter of a shark is directly proportional to its total length, with the width of the roots of the largest teeth being a tool for estimating jaw perimeter. The largest tooth in Jeremiah’s possession had a root width of about 12 centimetres (4.7 in), which yielded 16.5 metres (54 ft) in total length. Ward asserted that this method is based on a sound principle that works well with most large sharks.
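For concreteness, the two tooth-based regressions quoted above can be applied directly. The short sketch below (Python; the function names and example inputs are illustrative) reproduces the 15.9 m estimate from the 168 mm tooth and shows what the 1.4 m-per-centimetre rule gives for a 12 cm root.

```python
# Quick check of the two tooth-based length estimates quoted above.
# Both are simple linear rules; names and example inputs are illustrative.

def length_from_tooth_height_m(ua_height_mm: float) -> float:
    """Gottfried et al. (1996): total length (m) from upper anterior tooth height (mm)."""
    return 0.096 * ua_height_mm - 0.22

def length_from_root_width_m(root_width_cm: float) -> float:
    """Jeremiah (2002): roughly 1.4 m of shark per 1 cm of anterior root width."""
    return 1.4 * root_width_cm

print(round(length_from_tooth_height_m(168), 1))  # 15.9 m, matching the quoted figure
print(round(length_from_root_width_m(12), 1))     # 16.8 m; the text quotes ~16.5 m for that tooth
```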
In 2002, paleontologist Kenshu Shimada of DePaul University proposed a linear relationship between tooth crown height and total length in great white sharks after conducting anatomical analysis of several specimens. This relationship is expressed as: total length in centimetres = a + bx, where a is a constant, b is the slope of the line and x is the crown height of tooth in millimetres. This relationship allowed any tooth to be used for the estimate. The crown height was measured as maximum vertical enameloid height on the labial side. Shimada pointed out that previously proposed methods were based on weaker evaluation of dental homology, and that the growth rate between the crown and root is not isometric, which he considered in his model. Furthermore, this relationship could be used to predict the total length of sharks that are morphologically similar to the great white shark, such as C. megalodon. Using this model, the upper anterior tooth (with maximum height of 168 millimetres (6.6 in)) possessed by Gottfried and colleagues corresponded to a total length of 15.1 metres (50 ft). In 2010, shark researchers Catalina Pimiento, Dana J. Ehret, Bruce J. MacFadden and Gordon Hubbell estimated the total length of C. megalodon on the basis of Shimada’s method. Among the specimens found in the Gatun Formation of Panama, specimen number 237956 yielded a total length of 16.8 metres (55 ft). Later on, shark researchers (including Pimiento, Ehret and MacFadden) revisited the Gatun Formation and recovered additional specimens. Specimen number 257579 yielded a total length of 17.9 metres (59 ft) on the basis of Shimada’s method.
In the 1990s, marine biologists such as Patrick J. Schembri and Staphon Papson opined that C. megalodon may have approached a maximum of around 24 to 25 metres (79 to 82 ft) in total length, while Gottfried and colleagues asserted that C. megalodon could have reached a maximum of 20.3 metres (67 ft). The commonly acknowledged maximum total length of C. megalodon, however, is 18 metres (59 ft).
Largest known specimens
Gordon Hubbell from Gainesville, Florida, possesses an upper anterior C. megalodon tooth whose maximum height is 184.1 millimetres (7.25 in). In addition, a C. megalodon jaw reconstruction contains a tooth whose maximum height is reportedly 193.67 millimetres (7.625 in). This jaw reconstruction was developed by fossil hunter Vito Bertucci, who was known as “Megalodon Man”.
Body mass estimates
Gottfried and colleagues introduced a method to determine the mass of the great white after studying the length-mass relationship data of 175 specimens at various growth stages and extrapolated it to estimate C. megalodon’s mass. According to their model, a 15.9 metres (52 ft) long C. megalodon would have a mass of about 48 metric tons (53 short tons), a 17 metres (56 ft) long C. megalodon would have a mass of about 59 metric tons (65 short tons), and a 20.3 metres (67 ft) long C. megalodon would have a mass of 103 metric tons (114 short tons).
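The three length-mass pairs quoted above imply a roughly cubic relationship between length and mass. Purely as an illustration, the sketch below fits a power law to the two extreme pairs; this is not the published Gottfried et al. model, and the fitted coefficients are only indicative.

```python
import math

# Fit mass = a * length**b to the two extreme (length, mass) pairs quoted above,
# only to illustrate the near-cubic length-mass scaling.
L1, M1 = 15.9, 48.0    # metres, metric tons
L2, M2 = 20.3, 103.0

b = math.log(M2 / M1) / math.log(L2 / L1)
a = M1 / L1 ** b
print(f"exponent b = {b:.2f}")              # close to 3, i.e. near-isometric scaling
print(f"mass at 17 m = {a * 17**b:.0f} t")  # about 59 t, matching the quoted figure
```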
Dentition and jaw mechanics
A team of Japanese scientists, T. Uyeno, O. Sakamoto, and H. Sekine, discovered and excavated partial remains of a C. megalodon, with its nearly complete associated set of teeth, from Saitama, Japan, in 1989. Another nearly complete associated C. megalodon dentition was excavated from the Yorktown Formations of Lee Creek, North Carolina, in the United States and served as the basis of a jaw reconstruction of C. megalodon at the American Museum of Natural History in New York City. These associated tooth sets solved the mystery of how many teeth would be in each row of the jaws of C. megalodon. As a result, highly accurate jaw reconstructions became possible. More associated C. megalodon dentitions were found in later years. Based on these discoveries, scientists S. Applegate and L. Espinosa published an artificial dental formula (representation of dentition of an animal with respect to types of teeth and their arrangement within the animal’s jaw) for C. megalodon in 1996. Most accurate modern C. megalodon jaw reconstructions are based on this dental formula.
The dental formula of C. megalodon is: 220.127.116.11.0.8.4.
As evident from the formula, C. megalodon had four kinds of teeth in its jaws.
- Anterior – A
- Intermediate – I (C. megalodon’s tooth technically appears to be an upper anterior and is termed as “A3” because it is fairly symmetrical and does not point mesially (side of the tooth toward the midline of the jaws where the left and right jaws meet), but this tooth is still designated as an intermediate tooth. However, the great white shark’s intermediate tooth does point mesially. This point was raised in the Carcharodon vs. Carcharocles debate regarding the megalodon and favors the case of Carcharocles proponents.)
- Lateral – L
- Posterior – P
C. megalodon had a very robust dentition, and had a total of about 276 teeth in its jaws, spanning 5 rows. Paleontologists suggest that a very large C. megalodon had jaws over 2 metres (6.6 ft) across.
In 2008, a team of scientists led by S. Wroe conducted an experiment to determine the bite force of the great white shark, using a 2.5 metres (8.2 ft) long specimen, and then isometrically scaling the results for its maximum confirmed size and the conservative minimum and maximum body mass of C. megalodon. This placed the bite force of the latter between 108,514 N (24,400 lbf) and 182,201 N (41,000 lbf) in a posterior bite, compared to 18,216 N (4,095 lbf) for the largest confirmed great white shark and 5,300 N (1,200 lbf) for the placoderm fish Dunkleosteus.
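The two quoted bite-force figures can be checked against the quoted body-mass range: under isometric scaling, bite force is expected to grow roughly with mass to the two-thirds power. The sketch below performs that consistency check using only the numbers given in the text.

```python
# Consistency check: do the two quoted megalodon bite-force values scale with
# the quoted body masses roughly as mass**(2/3), as isometric scaling predicts?
low_mass_t, high_mass_t = 48.0, 103.0            # conservative min/max body mass
low_force_n, high_force_n = 108_514.0, 182_201.0

force_ratio = high_force_n / low_force_n
mass_ratio_scaled = (high_mass_t / low_mass_t) ** (2 / 3)
print(f"force ratio: {force_ratio:.2f}")                # ~1.68
print(f"(mass ratio)**(2/3): {mass_ratio_scaled:.2f}")  # ~1.66, a close match
```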
In addition, Wroe and colleagues pointed out that sharks shake sideways while feeding, amplifying the post-cranial generated forces. Therefore, the total force experienced by prey is probably higher than the estimate. The extraordinary bite forces in C. megalodon must be considered in the context of its great size and of paleontological evidence suggesting that C. megalodon was an active predator of large whales.
Functional parameters of teeth
The teeth of C. megalodon were exceptionally robust and serrated, which would have improved efficiency in slicing its prey’s flesh. Paleontologist B. K. Kent suggested that these teeth are comparatively thicker for their size with much lower slenderness and bending strength ratios. Their roots are substantially larger relative to total tooth heights, and so have a greater mechanical advantage. Teeth with these traits are good cutting tools and are well suited for grasping powerful prey and would seldom crack even when slicing through bones.
Gottfried and colleagues further estimated the schematics of C. megalodon’s entire skeleton. To support the beast’s dentition, its jaws would have been massive, stouter, and more strongly developed than those of the great white, which possesses a comparatively gracile dentition. The jaws would have given it a “pig-eyed” profile. Its chondrocranium would have had a blockier and more robust appearance than that of the great white. Its fins were proportional to its larger size. Scrutiny of the partially preserved vertebral C. megalodon specimen from Belgium revealed that C. megalodon had a higher vertebral count than specimens of any known shark. Only the great white approached it.
Using the above characteristics, Gottfried and colleagues reconstructed the entire skeleton of C. megalodon, which was later put on display at the Calvert Marine Museum at Solomon’s Island, Maryland, in the United States. This reconstruction is 11.5 metres (38 ft) long and represents a young individual. The team stresses that relative and proportional changes in the skeletal features of C. megalodon are ontogenetic in nature in comparison to those of the great white, as they occur in great white sharks while growing. Fossil remains of C. megalodon confirm that it had a heavily calcified skeleton while alive.
Range and habitat
Sharks, especially large species, are highly mobile and experience a complex life history amid wide distribution. Fossil records indicate that C. megalodon was cosmopolitan, and commonly occurred in subtropical to temperate latitudes. It has been found at latitudes up to 55° N; its inferred tolerated temperature range goes down to an annual mean of 12 °C (an annual range of 1-24 °C). It arguably had the capacity to endure such low temperatures by virtue of mesothermy, the physiological capability of large sharks to conserve metabolic heat by maintaining a higher body temperature than the surrounding water.
C. megalodon had enough adaptability to inhabit a wide range of marine environments (i.e., shallow coastal waters, areas of coastal upwelling, swampy coastal lagoons, sandy littorals, and offshore deep water environments), and exhibited a transient lifestyle. Adult C. megalodon were not abundant in shallow water environments, and mostly lurked offshore. C. megalodon may have moved between coastal and oceanic waters, particularly in different stages of its life cycle.
Sharks generally are opportunistic predators, but scientists propose that C. megalodon was “arguably the most formidable carnivore ever to have existed”. Its great size, high-speed swimming capability, and powerful jaws, coupled with a formidable killing apparatus, made it a super-predator capable of consuming a broad spectrum of fauna. A study about calcium isotopes of extinct and extant elasmobranchs revealed that C. megalodon fed at a higher trophic level than the contemporaneous great white shark.
Fossil evidence indicates that C. megalodon preyed upon cetaceans (i.e., dolphins), small whales (including cetotheriids, squalodontids, and Odobenocetops), large whales (including sperm whales, bowhead whales, and rorquals), pinnipeds, porpoises, sirenians, and giant sea turtles. Marine mammals were regular prey targets for C. megalodon. Many whale bones have been found with clear signs of large bite marks (deep gashes) made by teeth that match the teeth of C. megalodon. Various excavations have revealed C. megalodon teeth lying close to the chewed remains of whales, and sometimes in direct association with them. Fossil evidence of interactions between C. megalodon and pinnipeds also exists. In one interesting observation, a 127 millimetres (5.0 in) C. megalodon tooth was found lying very close to a bitten earbone of a sea lion.
Competition and impact on marine communities
C. megalodon faced a highly competitive environment. However, its position at the top of the food chain probably had a profound impact on the structuring of marine communities. Fossil evidence indicates a correlation between C. megalodon emergence and extensive diversification of cetaceans. Juvenile C. megalodon preferred habitats where small cetaceans were abundant, and adult C. megalodon preferred habitats where large cetaceans were abundant. Such preferences may have developed shortly after they appeared in the Oligocene.
C. megalodon were contemporaneous with macro-predatory odontocetes (particularly raptorial sperm whales and squalodontids), which were also probably among the era’s apex predators, and provided competition. In response to competition from giant macro-predatory sharks, macro-predatory odontocetes may have evolved defensive adaptations; some species became pack predators, and some attained gigantic sizes, such as Livyatan melvillei. By late Miocene, raptorial sperm whales experienced a significant decline in abundance and diversity. However, raptorial delphinids began to emerge during the Pliocene, to fill this ecological void.
Like other sharks, C. megalodon also would have been piscivorous. Fossil evidence indicates that other notable species of macro-predatory sharks (e.g., great white sharks) responded to competitive pressure from C. megalodon by avoiding regions it inhabited. C. megalodon probably also had a tendency for cannibalism.
Sharks often employ complex hunting strategies to engage large prey animals. Some paleontologists suggest that great white shark hunting strategies may offer clues as to how C. megalodon hunted its unusually large prey. However, fossil evidence suggests that C. megalodon employed even more effective hunting strategies against large prey than the great white shark.
Paleontologists surveyed fossils to determine attacking patterns. One particular specimen – the remains of a 9 metres (30 ft) long prehistoric baleen whale (of an unknown Miocene taxon) – provided the first opportunity to quantitatively analyze its attack behavior. The predator primarily focused on the tough bony portions (i.e., shoulders, flippers, rib cage, and upper spine) of the prey, which great white sharks generally avoid. Dr. B. Kent elaborated that C. megalodon attempted to crush the bones and damage delicate organs (i.e., heart and lungs) harbored within the rib cage. Such an attack would have immobilized the prey, which would have died quickly from injuries to these vital organs. These findings also clarify why the ancient shark needed more robust dentition than that of the great white shark. Furthermore, attack patterns could differ for prey of different sizes. Fossil remains of some small cetaceans (e.g. cetotheriids) suggest that they were rammed with great force from below before being killed and eaten.
During the Pliocene, larger and more advanced cetaceans appeared. C. megalodon apparently further refined its hunting strategies to cope with these large whales. Numerous fossilized flipper bones (i.e., segments of the pectoral fins) and caudal vertebrae of large whales from the Pliocene have been found with C. megalodon bite marks. This paleontological evidence suggests that C. megalodon would immobilize a large whale by ripping apart or biting off its locomotive structures before killing and feeding on it.
Fossil evidence suggests that the preferred nursery sites of C. megalodon were warm water coastal environments, where threats were minor and food plentiful. Nursery sites were identified in the Gatun Formation of Panama, the Calvert Formation of Maryland, Banco de Concepción in the Canary Islands, and the Bone Valley Formation of Florida. As is the case with most sharks, C. megalodon gave birth to live young. The size of neonate C. megalodon teeth indicate that pups were around 2 to 4 metres (6.6 to 13.1 ft) in total length at birth. Their dietary preferences display an ontogenetic shift. Young C. megalodon commonly preyed on fish, giant sea turtles, dugongs and small cetaceans; mature C. megalodon moved to off-shore cetacean high-use areas and consumed large cetaceans.
However, an exceptional case in the fossil record suggests that juvenile C. megalodon may occasionally have attacked much larger balaenopterid whales. Three tooth marks apparently from a 4-7-metre (13.1-23.0 ft) long Pliocene macro-predatory shark were found on a rib from an ancestral great blue or humpback whale that showed evidence of subsequent healing. Scientists suspect that this shark was a juvenile C. megalodon.
Oceanic cooling and sea level drops
The Earth has been in a long term cooling trend since the Miocene Climatic Optimum, 15-17 Ma ago. This trend may have been accelerated by changes in global ocean circulation caused by the closure of the Central American Seaway and/or other factors (see Pliocene climate), setting the stage for glaciation in the northern hemisphere. Consequently, during the late Pliocene and Pleistocene, there were ice ages, which cooled the oceans significantly. Expansion of glaciation during the Pliocene tied up huge volumes of water in continental ice sheets, resulting in significant sea level drops. It has been argued that this cooling trend adversely impacted C. megalodon, as it preferred warmer waters, causing it to decline in abundance until its ultimate extinction at the end of the Pliocene. Fossil evidence confirms the absence of C. megalodon in regions around the world where water temperatures had significantly declined during the Pliocene. Furthermore, these oceanographic changes may have restricted many of the suitable warm water nursery sites for C. megalodon, hindering reproduction. Nursery areas are pivotal for the survival of many shark species, in part because they protect juveniles from predation.
Decline in food supply
Baleen whales attained their greatest diversity during the Miocene, with over 20 recognized genera in comparison to only six extant genera. Such diversity presented an ideal setting to support a gigantic macropredator such as C. megalodon. However, by the end of the Miocene many species of mysticetes had gone extinct; surviving species may have been faster swimmers and thus more elusive prey. Furthermore, after the closure of the Central American Seaway, additional extinctions occurred in the marine environment, and faunal redistribution took place; tropical great whales decreased in diversity and abundance. Whale migratory patterns during the Pliocene have been reconstructed from the fossil record, suggesting that most surviving species of whales showed a trend towards polar regions. The cooling of the oceans during the Pliocene might have restricted the access of C. megalodon to polar regions, depriving it of its main food source of large whales. As a result of these developments, the food supply for C. megalodon in regions it inhabited during the Pliocene, primarily in low-to-mid latitudes, was no longer sufficient to sustain it worldwide. C. megalodon was adapted to a specialized lifestyle, and this lifestyle was disturbed by these developments. Paleontologist Albert Sanders suggests that C. megalodon was too large to sustain itself on the declining tropical food supply. The resulting shortage of food sources in the tropics during Plio-Pleistocene times may have fueled cannibalism by C. megalodon. Juveniles were at increased risk from attacks by adults during times of starvation.
Large raptorial delphinids (members of genus Orcinus) evolved during the Pliocene, and probably filled the ecological void left by the disappearance of raptorial sperm whales at the end of the Miocene. A minority view is that competition from ancestral killer whales may have contributed to the shark’s decline (another source suggests more generally that “competition with large odontocetes” may have been a factor). Fossil records indicate that these delphinids commonly occurred at high latitudes during the Pliocene, indicating that they could cope with the increasingly prevalent cold water temperatures. They also occurred in the tropics (e.g., Orcinus sp. in South Africa).
Expert consensus has pointed to factors such as a cooling trend in the oceans and a shortage of food sources during Plio-Pleistocene times having played a significant role in the demise of C. megalodon.
However, a recent analysis of the distribution, abundance and climatic range of C. megalodon over geologic time suggests that biotic factors, i.e. dwindling numbers of prey species combined with competition from new macro-predators (raptorial sperm whales, great white sharks and killer whales), were the primary drivers of its extinction. The distribution of C. megalodon during the Miocene and Pliocene did not correlate with warming and cooling trends; while the abundance and distribution of C. megalodon declined during the Pliocene, C. megalodon did show a capacity to inhabit anti-tropical latitudes. C. megalodon was found in locations with a mean temperature ranging from 12 to 27 °C (with a total range of from 1 to 33 °C), indicating that the global extent of suitable habitat for C. megalodon should not have been greatly affected by the temperature changes that occurred.
The extinction of C. megalodon set the stage for further changes in marine communities. Average body size of baleen whales increased significantly after its disappearance. Other apex predators gained from the loss of this formidable species, in some cases spreading to regions where C. megalodon became absent.
C. megalodon has been portrayed in several works of fiction, including films and novels, and continues to hold its place among the most popular subjects for fiction involving sea monsters. Many of these works posit that at least a relict population of C. megalodon survived extinction and lurk in the vast depths of the ocean, and that individuals may manage to surface, either by human intervention or by natural means. Jim Shepard’s story “Tedford and the Megalodon” is an example of this. Such beliefs are usually inspired by the discovery of a C. megalodon tooth by members of HMS Challenger in 1872, which some believed to be only 10,000 years old.
Some works of fiction (such as Shark Attack 3: Megalodon and Steve Alten’s Meg series) incorrectly depict C. megalodon as being a species over 70 million years old, and to have lived during the time of the dinosaurs. The writers of the movie Shark Attack 3: Megalodon depicted this assumption by including an altered copy of Great White Shark by shark researcher Richard Ellis. The copy shown in the film had several pages that do not exist in the book. The author sued the film’s distributor, Lions Gate Entertainment, asking for a halt to the film’s distribution along with $150,000 in damages. Steve Alten’s Meg: A Novel of Deep Terror is probably best known for portraying this inaccuracy with its prologue and cover artwork depicting C. megalodon killing a tyrannosaur in the sea.
The Animal Planet fictional documentary, Mermaids: The Body Found, included an encounter 1.6 million years ago between a pod of mermaids and a C. megalodon. Later, in August 2013, the Discovery Channel opened its annual Shark Week series with another film for television Megalodon: The Monster Shark Lives, a controversial docufiction about the creature that presented alleged evidence in order to suggest that C. megalodon was still alive. This program received criticism for being completely fictional; for example, all of the supposed “scientists” depicted were paid actors. In 2014 Discovery re-aired “The Monster Shark Lives”, along with a new one-hour program, “Megalodon: The New Evidence”, and an additional fictionalized program entitled “Shark of Darkness: Wrath of Submarine”, resulting in further backlash from media sources and the scientific community.
- Bretton W. Kent (1994). Fossil Sharks of the Chesapeake Bay Region. Egan Rees & Boyer, Inc.; 146 pages. ISBN 1-881620-01-8
- Dickson, K. A.; Graham, J. B. (2004). “Evolution and consequences of endothermy in fishes”. Physiological and Biochemical Zoology 77 (6): 998-1018. doi:10.1086/423743. PMID 15674772.
- The rise of super predatory sharks
- Extinct Megalodon, the largest shark ever, may have grown too big
- Carcharocles: Extinct Megatoothed shark
- Dykens, M.; Gillette, L. “SDNHM Fossil Field Guide: Carcharodon megalodon, Giant “Mega-Tooth” Shark”. Archived from the original on 13 June 2011. Retrieved 29 April 2012.
- Jurassic Shark
- Megalodon article on prehistoric-wildlife.com
The hydrogen economy is an envisioned future in which hydrogen is used as a fuel for heat and hydrogen vehicles, for energy storage, and for long distance transport of energy. In order to phase out fossil fuels and limit global warming, hydrogen can be created from water using intermittent renewable sources such as wind and solar, and its combustion only releases water vapor to the atmosphere.
Hydrogen is a powerful fuel, and a frequent component in rocket fuel, but numerous technical challenges prevent the creation of a large-scale hydrogen economy. These include the difficulty of developing long-term storage, pipelines and engine equipment; a relative lack of off-the-shelf engine technology that can currently run safely on hydrogen; safety concerns due to the high reactivity of hydrogen fuel with environmental oxygen in the air; the expense of producing it by electrolysis; and a lack of efficient photochemical water splitting technology. Hydrogen can also be the fuel in a fuel cell, which produces electricity with high efficiency in a process which is the reverse of the electrolysis of water. The hydrogen economy is nevertheless slowly developing as a small part of the low-carbon economy.
As of 2019, hydrogen is mainly used as an industrial feedstock, primarily for the production of ammonia and methanol, and in petroleum refining. Although initially hydrogen gas was thought not to occur naturally in convenient reservoirs, it is now demonstrated that this is not the case; a hydrogen system is currently being exploited in the region of Bourakebougou, Mali, producing electricity for the surrounding villages. More discoveries of naturally occurring hydrogen in continental, on-shore geological environments have been made in recent years and open the way to the novel field of natural or native hydrogen, supporting energy transition efforts. As of 2019, almost all (95%) of the world's 70 million tons of hydrogen consumed yearly in industrial processing are produced by steam methane reforming (SMR) that also releases the greenhouse gas carbon dioxide.
A possible less-polluting alternative is the newer technology methane pyrolysis, though SMR with carbon capture also has much reduced carbon emissions. Small amounts of hydrogen (5%) are produced by the dedicated production of hydrogen from water, usually as a byproduct of the process of generating chlorine from seawater. As of 2018 there is not enough cheap clean electricity (renewable and nuclear) for this hydrogen to become a significant part of the low-carbon economy, and carbon dioxide is a by-product of the SMR process, but it can be captured and stored.
In the current hydrocarbon economy, heating is fueled primarily by natural gas and transportation by petroleum. Burning of hydrocarbon fuels emits carbon dioxide and other pollutants. The demand for energy is increasing, particularly in China, India, and other developing countries. Hydrogen can be an environmentally cleaner source of energy to end-users, without release of pollutants such as particulates or carbon dioxide.
Hydrogen has a high energy density by weight but has a low energy density by volume. Even when highly compressed, stored in solids, or liquified, the energy density by volume is only 1/4 that of gasoline, although the energy density by weight is approximately three times that of gasoline or natural gas. Hydrogen can help to decarbonize long-haul transport, chemicals, and iron and steel and has the potential to transport renewable energy long distance and store it long term, for example from wind power or solar electricity.
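A back-of-the-envelope calculation makes the weight/volume trade-off concrete. The figures used below (heating values of roughly 120 MJ/kg for hydrogen and 44 MJ/kg for gasoline, and densities of about 71 kg/m3 for liquid hydrogen and 750 kg/m3 for gasoline) are approximate textbook values, not numbers taken from this article.

```python
# Back-of-the-envelope check of the weight/volume comparison above, using
# approximate textbook figures (not taken from this article).
H2_LHV_MJ_PER_KG = 120.0        # lower heating value of hydrogen, ~120 MJ/kg
GASOLINE_LHV_MJ_PER_KG = 44.0   # gasoline, ~44 MJ/kg
LIQUID_H2_DENSITY = 71.0        # kg/m^3 at about 20 K
GASOLINE_DENSITY = 750.0        # kg/m^3, typical

by_weight = H2_LHV_MJ_PER_KG / GASOLINE_LHV_MJ_PER_KG
by_volume = (H2_LHV_MJ_PER_KG * LIQUID_H2_DENSITY) / (GASOLINE_LHV_MJ_PER_KG * GASOLINE_DENSITY)
print(f"per kg, hydrogen holds about {by_weight:.1f}x the energy of gasoline")     # ~2.7x
print(f"per litre, liquid hydrogen holds about {by_volume:.2f}x that of gasoline") # ~0.26x
```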
A hydrogen economy was proposed by the University of Michigan to solve some of the negative effects of using hydrocarbon fuels where the carbon is released to the atmosphere (as carbon dioxide, carbon monoxide, unburnt hydrocarbons, etc.). Modern interest in the hydrogen economy can generally be traced to a 1970 technical report by Lawrence W. Jones of the University of Michigan.
A spike in attention for the concept during the 2000s was repeatedly described as hype by some critics and proponents of alternative technologies. Interest in the energy carrier resurged in the 2010s, notably by the forming of the Hydrogen Council in 2017. Several manufacturers released hydrogen fuel cell cars commercially, with manufacturers such as Toyota and industry groups in China planning to increase numbers of the cars into the hundreds of thousands over the next decade.
Current hydrogen market
As of 2019[update] fertiliser production and oil refining are the main uses. About half is used in the Haber process to produce ammonia (NH3), which is then used directly or indirectly as fertilizer. Because both the world population and the intensive agriculture used to support it are growing, ammonia demand is growing. Ammonia can be used as a safer and easier indirect method of transporting hydrogen. Transported ammonia can then be converted back to hydrogen at the point of dispensing (the bowser) by a membrane technology.
The other half of current hydrogen production is used to convert heavy petroleum sources into lighter fractions suitable for use as fuels. This latter process is known as hydrocracking. Hydrocracking represents an even larger growth area, since rising oil prices encourage oil companies to extract poorer source material, such as oil sands and oil shale. The scale economies inherent in large-scale oil refining and fertilizer manufacture make possible on-site production and "captive" use. Smaller quantities of "merchant" hydrogen are manufactured and delivered to end users as well.
As of 2019[update] almost all hydrogen production is from fossil fuels, and emits 830 million tonnes of carbon dioxide per year. The distribution of production reflects the effects of thermodynamic constraints on economic choices: of the four methods for obtaining hydrogen, partial combustion of natural gas in a NGCC (natural gas combined cycle) power plant offers the most efficient chemical pathway and the greatest off-take of usable heat energy.
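As a rough sanity check on these figures, the implied average emissions intensity of today's hydrogen supply can be computed directly from the two global totals quoted above. The short Python sketch below does this; the totals are taken from the text, and the result is an average across all production routes, not a value for any particular plant.

```python
# Rough implied emissions intensity of current hydrogen production,
# using the global totals quoted above (illustrative average only).
annual_h2_production_t = 70e6    # tonnes of hydrogen produced per year (2019)
annual_co2_emissions_t = 830e6   # tonnes of CO2 emitted by that production per year

co2_per_kg_h2 = annual_co2_emissions_t / annual_h2_production_t
print(f"Implied average intensity: {co2_per_kg_h2:.1f} kg CO2 per kg H2")
# -> roughly 11.9 kg of CO2 per kilogram of hydrogen
```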
The large market and sharply rising prices of fossil fuels have also stimulated great interest in alternative, cheaper means of hydrogen production. As of 2002, most hydrogen is produced on site at a cost of approximately $0.70/kg; if not produced on site, liquid hydrogen costs about $2.20/kg to $3.08/kg.[needs update]
Production, storage, infrastructure
Hydrogen is often referred to by various colors to indicate its origin. As shown below, some production sources have more than one label, with the more common listed first. The usage of color codes is neither standardized nor unambiguous.
| Color | Energy or feedstock source | Process / notes |
|---|---|---|
| green | renewable electricity | via electrolysis of water |
| turquoise | thermal splitting of methane | via methane pyrolysis |
| blue | fossil hydrocarbons with carbon capture and storage | CCS networks required |
| gray | fossil hydrocarbons | often via steam reforming of natural gas |
| brown or black | fossil coal | |
| purple or pink or red | nuclear power | via electrolysis of water |
| white | — | refers to naturally occurring hydrogen |
Methods of production
Molecular hydrogen was discovered in the Kola Superdeep Borehole. It is unclear how much molecular hydrogen is available in natural reservoirs, but at least one company specializes in drilling wells to extract hydrogen. Most hydrogen in the lithosphere is bonded to oxygen in water. Manufacturing elemental hydrogen requires the consumption of a hydrogen carrier such as a fossil fuel or water. The former consumes the fossil resource and, in the steam methane reforming (SMR) process, produces the greenhouse gas carbon dioxide; in the newer methane pyrolysis process, however, no carbon dioxide is produced. These processes typically require no further energy input beyond the fossil fuel.
Decomposing water, the latter carrier, requires electrical or heat input generated from some primary energy source (fossil fuel, nuclear power or renewable energy). Hydrogen can also be produced by refining the effluent from geothermal sources in the lithosphere. Hydrogen produced from zero-emission energy sources, such as electrolysis of water powered by wind, solar, nuclear, hydro, wave or tidal power, is referred to as green hydrogen. When derived from natural gas by zero-greenhouse-emission methane pyrolysis, it is referred to as turquoise hydrogen. When derived from fossil fuels with greenhouse gas emissions, it is generally referred to as grey hydrogen; if most of the carbon dioxide emissions are captured, it is referred to as blue hydrogen. Hydrogen produced from coal may be referred to as brown hydrogen.
Current production methods
Steam reforming – gray or blue
Hydrogen is industrially produced by steam methane reforming (SMR), which uses natural gas. The energy content of the produced hydrogen is less than the energy content of the original fuel, some of it being lost as excess heat during production. Steam reforming emits carbon dioxide, a greenhouse gas.
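For reference, the overall chemistry is usually summarised by two steps: reforming of methane with steam, followed by the water-gas shift reaction (standard reaction enthalpies shown; the first step is strongly endothermic, so additional fuel is typically burned to supply the heat):

CH4(g) + H2O(g) → CO(g) + 3 H2(g)   ΔH° ≈ +206 kJ/mol
CO(g) + H2O(g) → CO2(g) + H2(g)   ΔH° ≈ −41 kJ/mol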
Methane pyrolysis – turquoise
Pyrolysis of methane (natural gas), in a one-step process that bubbles methane through a molten metal catalyst, is a "no greenhouse gas" approach to producing hydrogen that was perfected in 2017 and is now being tested at scale. The process is conducted at high temperature (1065 °C). Producing 1 kg of hydrogen requires about 5 kWh of electricity for process heat.
CH4(g) → C(s) + 2 H2(g)   ΔH° = 74 kJ/mol
The industrial quality solid carbon may be sold as manufacturing feedstock or landfilled (no pollution).
Electrolysis of water – green or purple
Hydrogen can be made via high pressure electrolysis, low pressure electrolysis of water, or a range of other emerging electrochemical processes such as high temperature electrolysis or carbon assisted electrolysis. However, current best processes for water electrolysis have an effective electrical efficiency of 70-80%, so that producing 1 kg of hydrogen (which has a specific energy of 143 MJ/kg or about 40 kWh/kg) requires 50–55 kWh of electricity.
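A minimal sketch of that arithmetic, using the figures just quoted (the three efficiency values are assumptions spanning the stated 70-80% range):

```python
# Electricity required to make 1 kg of hydrogen by electrolysis,
# derived from the specific energy and efficiency figures above.
specific_energy_mj_per_kg = 143                        # energy content of hydrogen, MJ/kg
energy_kwh_per_kg = specific_energy_mj_per_kg / 3.6    # ~39.7 kWh/kg

for efficiency in (0.70, 0.75, 0.80):                  # assumed electrolyser efficiencies
    print(f"{efficiency:.0%} efficient: {energy_kwh_per_kg / efficiency:.0f} kWh per kg H2")
# -> roughly 50-57 kWh of electricity per kilogram, matching the range above
```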
In parts of the world, hydrogen from steam methane reforming costs between $1–3/kg on average, excluding the cost of pressurizing the gas. This makes production of hydrogen via electrolysis cost-competitive in many regions already, as outlined by Nel Hydrogen and others, including an article by the IEA examining the conditions which could lead to a competitive advantage for electrolysis.
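As an illustration of why cheap electricity matters, the sketch below multiplies the electricity requirement estimated above by a few assumed electricity prices. It deliberately ignores electrolyser capital, maintenance and compression costs, so it is a lower bound rather than a full levelised cost.

```python
# Electricity-only cost of electrolytic hydrogen at assumed power prices.
electricity_kwh_per_kg = 52                   # from the estimate above

for price_usd_per_kwh in (0.02, 0.04, 0.06):  # assumed electricity prices
    cost = price_usd_per_kwh * electricity_kwh_per_kg
    print(f"${price_usd_per_kwh:.2f}/kWh -> ${cost:.2f} per kg H2 (electricity only)")
# -> about $1.0, $2.1 and $3.1 per kg, bracketing the $1-3/kg quoted for SMR
```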
The Kværner process or Kvaerner carbon black and hydrogen process (CB&H) is a method, developed in the 1980s by a Norwegian company of the same name, for the production of hydrogen from hydrocarbons (CnHm), such as methane, natural gas and biogas. Of the available energy of the feed, approximately 48% is contained in the hydrogen, 40% is contained in activated carbon and 10% in superheated steam.
Experimental production methods
Fermentative hydrogen production is the fermentative conversion of organic substrate to biohydrogen, carried out by a diverse group of bacteria using multi-enzyme systems in three steps similar to anaerobic conversion. Dark fermentation reactions do not require light energy, so they are capable of constantly producing hydrogen from organic compounds throughout the day and night. Photofermentation differs from dark fermentation because it only proceeds in the presence of light. For example, photo-fermentation with Rhodobacter sphaeroides SH2C can be employed to convert small-molecule fatty acids into hydrogen. Electrohydrogenesis is used in microbial fuel cells, where hydrogen is produced from organic matter (e.g. from sewage, or solid matter) while 0.2–0.8 V is applied.
Biological hydrogen can be produced in an algae bioreactor. In the late 1990s it was discovered that if the algae is deprived of sulfur it will switch from the production of oxygen, i.e. normal photosynthesis, to the production of hydrogen.
Biological hydrogen can be produced in bioreactors that use feedstocks other than algae, the most common feedstock being waste streams. The process involves bacteria feeding on hydrocarbons and excreting hydrogen and CO2. The CO2 can be sequestered successfully by several methods, leaving hydrogen gas. In 2006-2007, NanoLogix first demonstrated a prototype hydrogen bioreactor using waste as a feedstock at Welch's grape juice factory in North East, Pennsylvania (U.S.).
Besides regular electrolysis, electrolysis using microbes is another possibility. With biocatalysed electrolysis, hydrogen is generated after running through the microbial fuel cell, and a variety of aquatic plants can be used. These include reed sweetgrass, cordgrass, rice, tomatoes, lupines, and algae.
High-pressure electrolysis is the electrolysis of water by decomposition of water (H2O) into oxygen (O2) and hydrogen gas (H2) by means of an electric current passed through the water. The difference from a standard electrolyser is that the hydrogen output is compressed, at around 120–200 bar (1,740–2,900 psi, 12–20 MPa). By pressurising the hydrogen in the electrolyser, through a process known as chemical compression, the need for an external hydrogen compressor is eliminated; the average energy consumption for internal compression is around 3%. Europe's largest hydrogen production plant of this type (1,400,000 kg/a, high-pressure alkaline electrolysis of water) is operating at Kokkola, Finland.
Hydrogen can be generated from energy supplied in the form of heat and electricity through high-temperature electrolysis (HTE). Because some of the energy in HTE is supplied in the form of heat, less of the energy must be converted twice (from heat to electricity, and then to chemical form), and so potentially far less energy is required per kilogram of hydrogen produced.
While nuclear-generated electricity could be used for electrolysis, nuclear heat can be directly applied to split hydrogen from water. High-temperature (950–1000 °C) gas-cooled nuclear reactors have the potential to split hydrogen from water by thermochemical means using nuclear heat. Research into high-temperature nuclear reactors may eventually lead to a hydrogen supply that is cost-competitive with natural gas steam reforming. General Atomics predicts that hydrogen produced in a High Temperature Gas Cooled Reactor (HTGR) would cost $1.53/kg. In 2003, steam reforming of natural gas yielded hydrogen at $1.40/kg. At 2005 natural gas prices, hydrogen cost $2.70/kg.
High-temperature electrolysis has been demonstrated in a laboratory, at 108 MJ (thermal) per kilogram of hydrogen produced, but not at a commercial scale. In addition, this is lower-quality "commercial" grade hydrogen, unsuitable for use in fuel cells.
Photoelectrochemical water splitting
Using electricity produced by photovoltaic systems offers the cleanest way to produce hydrogen. Water is broken into hydrogen and oxygen by electrolysis in a photoelectrochemical cell (PEC) process, which is also named artificial photosynthesis. William Ayers at Energy Conversion Devices demonstrated and patented the first multijunction high-efficiency photoelectrochemical system for direct splitting of water in 1983. This group demonstrated direct water splitting, now referred to as an "artificial leaf" or "wireless solar water splitting", with a low-cost thin-film amorphous silicon multijunction sheet immersed directly in water. Hydrogen evolved on the front amorphous silicon surface, decorated with various catalysts, while oxygen evolved off the back metal substrate. A Nafion membrane above the multijunction cell provided a path for ion transport. Their patent also lists a variety of other semiconductor multijunction materials for direct water splitting in addition to amorphous silicon and silicon-germanium alloys. Research continues towards developing high-efficiency multi-junction cell technology at universities and in the photovoltaic industry. If this process is assisted by photocatalysts suspended directly in water instead of using a photovoltaic and an electrolytic system, the reaction takes place in just one step, which can improve efficiency.
A method studied by Thomas Nann and his team at the University of East Anglia consists of a gold electrode covered in layers of indium phosphide (InP) nanoparticles. They introduced an iron-sulfur complex into the layered arrangement, which when submerged in water and irradiated with light under a small electric current, produced hydrogen with an efficiency of 60%.
In 2015, it was reported that Panasonic Corp. had developed a photocatalyst based on niobium nitride that can absorb 57% of sunlight to support the decomposition of water to produce hydrogen gas. The company planned to achieve commercial application "as early as possible", not before 2020.
Concentrating solar thermal
Very high temperatures are required to dissociate water into hydrogen and oxygen. A catalyst is required to make the process operate at feasible temperatures. Heating the water can be achieved through the use of concentrating solar power. Hydrosol-2 is a 100-kilowatt pilot plant at the Plataforma Solar de Almería in Spain which uses sunlight to obtain the required 800 to 1,200 °C to heat water. Hydrosol II has been in operation since 2008. The design of this 100-kilowatt pilot plant is based on a modular concept. As a result, it may be possible that this technology could be readily scaled up to the megawatt range by multiplying the available reactor units and by connecting the plant to heliostat fields (fields of sun-tracking mirrors) of a suitable size.
There are more than 352 thermochemical cycles which can be used for water splitting; around a dozen of these cycles, such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur–iodine cycle, copper–chlorine cycle, hybrid sulfur cycle and aluminium–aluminium oxide cycle, are under research and in the testing phase to produce hydrogen and oxygen from water and heat without using electricity. These processes can be more efficient than high-temperature electrolysis, typically in the range of 35–49% LHV efficiency. Thermochemical production of hydrogen using chemical energy from coal or natural gas is generally not considered, because the direct chemical path is more efficient.
None of the thermochemical hydrogen production processes have been demonstrated at production levels, although several have been demonstrated in laboratories.
Hydrogen as a byproduct of other chemical processes
The industrial production of chlorine and caustic soda by electrolysis generates a sizable amount of hydrogen as a byproduct. In the port of Antwerp, a 1 MW demonstration fuel-cell power plant is powered by this byproduct. This unit has been operational since late 2011. The excess hydrogen is often managed with a hydrogen pinch analysis.
Although molecular hydrogen has very high energy density on a mass basis, partly because of its low molecular weight, as a gas at ambient conditions it has very low energy density by volume. If it is to be used as fuel stored on board the vehicle, pure hydrogen gas must be stored in an energy-dense form to provide sufficient driving range.
Pressurized hydrogen gas
Increasing gas pressure improves the energy density by volume, making for smaller container tanks. The standard material for holding pressurised hydrogen in tube trailers is steel (there is no hydrogen embrittlement problem with hydrogen gas). Tanks made of carbon- and glass-fibre-reinforced plastic, as fitted in Toyota Mirai and Kenworth trucks, are required to meet safety standards. Few materials are suitable for tanks, as hydrogen, being a small molecule, tends to diffuse through many polymeric materials. The most common onboard hydrogen storage in today's (2020) vehicles is hydrogen at a pressure of 700 bar (70 MPa). The energy cost of compressing hydrogen to this pressure is significant.
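To get a feel for that energy cost, the sketch below estimates the thermodynamic minimum (single-stage, isothermal, ideal-gas) work of compressing hydrogen from 1 bar to 700 bar; real multi-stage compressors consume considerably more, and the ambient temperature is an assumption.

```python
from math import log

# Idealised isothermal compression work for hydrogen, 1 bar -> 700 bar.
R = 8.314          # J/(mol*K), universal gas constant
T = 298.0          # K, assumed ambient temperature
M_H2 = 2.016e-3    # kg/mol, molar mass of hydrogen

work_j_per_kg = (R * T / M_H2) * log(700 / 1)
print(f"Ideal isothermal work: {work_j_per_kg / 3.6e6:.1f} kWh per kg H2")
# -> about 2.2 kWh/kg as a theoretical floor, against ~33 kWh/kg of usable
#    energy in the hydrogen; practical figures are often several times higher.
```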
Pressurized gas pipelines are always made of steel and operate at much lower pressures than tube trailers.
Alternatively, higher volumetric energy density liquid hydrogen or slush hydrogen may be used. However, liquid hydrogen is cryogenic and boils at 20.268 K (–252.882 °C or –423.188 °F). Cryogenic storage cuts weight but requires large liquefaction energies. The liquefaction process, involving pressurizing and cooling steps, is energy intensive. The liquefied hydrogen has lower energy density by volume than gasoline by approximately a factor of four, because of the low density of liquid hydrogen – there is actually more hydrogen in a litre of gasoline (116 grams) than there is in a litre of pure liquid hydrogen (71 grams). Liquid hydrogen storage tanks must also be well insulated to minimize boil-off.
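The factor-of-four claim can be checked from the density quoted above together with assumed round values for hydrogen's lower heating value (about 120 MJ/kg) and gasoline's (about 34 MJ per litre):

```python
# Volumetric energy comparison: liquid hydrogen vs gasoline (approximate).
lh2_density_kg_per_l = 0.071    # density of liquid hydrogen, from the text
h2_lhv_mj_per_kg = 120          # assumed lower heating value of hydrogen
gasoline_mj_per_l = 34          # assumed lower heating value of gasoline per litre

lh2_mj_per_l = lh2_density_kg_per_l * h2_lhv_mj_per_kg   # ~8.5 MJ/L
print(f"Liquid H2: {lh2_mj_per_l:.1f} MJ/L, gasoline: {gasoline_mj_per_l} MJ/L, "
      f"ratio ~1:{gasoline_mj_per_l / lh2_mj_per_l:.0f}")
```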
Japan has a liquid hydrogen (LH2) storage facility at a terminal in Kobe, and was expected to receive the first shipment of liquid hydrogen via LH2 carrier in 2020. Hydrogen is liquefied by reducing its temperature to −253 °C, similar to liquefied natural gas (LNG), which is stored at −162 °C. Liquefaction incurs a potential efficiency loss of 12.79%, or 4.26 kWh/kg out of 33.3 kWh/kg.
Liquid organic hydrogen carriers (LOHC)
Storage as hydride
Distinct from storing molecular hydrogen, hydrogen can be stored as a chemical hydride or in some other hydrogen-containing compound. Hydrogen gas is reacted with some other material to produce the hydrogen storage material, which can be transported relatively easily. At the point of use the hydrogen storage material can be made to decompose, yielding hydrogen gas. As well as the mass and volume density problems associated with molecular hydrogen storage, current barriers to practical storage schemes stem from the high pressure and temperature conditions needed for hydride formation and hydrogen release. For many potential systems, hydriding and dehydriding kinetics and heat management are also issues that need to be overcome. The French company McPhy Energy is developing the first industrial product, based on magnesium hydride, already sold to some major clients such as Iwatani and ENEL. Emergent hydride hydrogen storage technologies have achieved a compressed volume of less than 1/500.
A third approach is to adsorb molecular hydrogen on the surface of a solid storage material. Unlike in the hydrides mentioned above, the hydrogen does not dissociate/recombine upon charging/discharging the storage system, and hence does not suffer from the kinetic limitations of many hydride storage systems. Hydrogen densities similar to liquefied hydrogen can be achieved with appropriate adsorbent materials. Some suggested adsorbents include activated carbon, nanostructured carbons (including CNTs), MOFs, and hydrogen clathrate hydrate.
Underground hydrogen storage
Underground hydrogen storage is the practice of hydrogen storage in caverns, salt domes and depleted oil and gas fields. Large quantities of gaseous hydrogen have been stored in caverns by ICI for many years without any difficulties. The storage of large quantities of liquid hydrogen underground can function as grid energy storage. The round-trip efficiency is approximately 40% (vs. 75-80% for pumped-hydro (PHES)), and the cost is slightly higher than pumped hydro. Another study referenced by a European staff working paper found that for large scale storage, the cheapest option is hydrogen at €140/MWh for 2,000 hours of storage using an electrolyser, salt cavern storage and combined-cycle power plant. The European project Hyunder indicated in 2013 that for the storage of wind and solar energy an additional 85 caverns are required as it cannot be covered by PHES and CAES systems. A German case study on storage of hydrogen in salt caverns found that if the German power surplus (7% of total variable renewable generation by 2025 and 20% by 2050) would be converted to hydrogen and stored underground, these quantities would require some 15 caverns of 500,000 cubic metres each by 2025 and some 60 caverns by 2050 – corresponding to approximately one third of the number of gas caverns currently operated in Germany. In the US, Sandia Labs are conducting research into the storage of hydrogen in depleted oil and gas fields, which could easily absorb large amounts of renewably produced hydrogen as there are some 2.7 million depleted wells in existence.
Power to gas
Power to gas is a technology which converts electrical power to a gas fuel. There are two methods: the first is to use the electricity for water splitting and inject the resulting hydrogen into the natural gas grid; the second, less efficient, method is to convert carbon dioxide and water to methane (see natural gas) using electrolysis and the Sabatier reaction. The excess power or off-peak power generated by wind generators or solar arrays is then used for load balancing in the energy grid.

Using the existing natural gas system for hydrogen

Fuel cell maker Hydrogenics and natural gas distributor Enbridge have teamed up to develop such a power-to-gas system in Canada.
A natural gas network may be used for the storage of hydrogen. Before switching to natural gas, the UK and German gas networks were operated using towngas, which for the most part consisted of hydrogen. The storage capacity of the German natural gas network is more than 200,000 GWh, which is enough for several months of energy requirement. By comparison, the capacity of all German pumped storage power plants amounts to only about 40 GWh. Similarly, UK pumped storage is far smaller than the gas network. The transport of energy through a gas network is done with much less loss (<0.1%) than in a power network (8%). The use of the existing natural gas pipelines for hydrogen was studied by NaturalHy. Ad van Wijk, a professor at Future Energy Systems TU Delft, also discusses the possibility of producing electricity in areas or countries with much sunlight (Sahara, Chile, Mexico, Namibia, Australia, New Zealand, ...) and transporting it (via ship, pipeline, ...) to the Netherlands; even accounting for transport, this would still be cheaper than producing it locally in the Netherlands. He also mentions that the energy transport capacity of gas lines is far higher than that of the electricity lines coming into private houses in the Netherlands (about 30 kW vs 3 kW).
The hydrogen infrastructure would consist mainly of industrial hydrogen pipeline transport and hydrogen-equipped filling stations like those found on a hydrogen highway. Hydrogen stations which were not situated near a hydrogen pipeline would get supply via hydrogen tanks, compressed hydrogen tube trailers, liquid hydrogen trailers, liquid hydrogen tank trucks or dedicated onsite production.
Over 700 miles of hydrogen pipeline currently exist in the United States. Although expensive, pipelines are the cheapest way to move hydrogen over long distances. Hydrogen gas piping is routine in large oil-refineries, because hydrogen is used to hydrocrack fuels from crude oil.
Hydrogen piping can in theory be avoided in distributed systems of hydrogen production, where hydrogen is routinely made on site using medium or small-sized generators which would produce enough hydrogen for personal use or perhaps a neighborhood. In the end, a combination of options for hydrogen gas distribution may succeed.
Hydrogen embrittlement is not a problem for hydrogen gas pipelines. Hydrogen embrittlement only happens with 'diffusible' hydrogen, i.e. atoms or ions. Hydrogen gas, however, is molecular (H2), and there is a very significant energy barrier to splitting it into atoms.
The IEA recommends that existing industrial ports be used for production and existing natural gas pipelines for transport, along with international co-operation and shipping.
South Korea and Japan, which as of 2019 lack international electrical interconnectors, are investing in the hydrogen economy. In March 2020, a production facility was opened in Namie, Fukushima Prefecture, claimed to be the world's largest.
A key tradeoff: centralized vs. distributed production
In a future full hydrogen economy, primary energy sources and feedstock would be used to produce hydrogen gas as stored energy for use in various sectors of the economy. Producing hydrogen from primary energy sources other than coal and oil would result in lower production of the greenhouse gases characteristic of the combustion of coal and oil fossil energy resources. Non-polluting methane pyrolysis of natural gas is becoming recognized as a method for using current natural gas infrastructure investment to produce hydrogen with no greenhouse gas emissions.
One key feature of a hydrogen economy would be that in mobile applications (primarily vehicular transport) energy generation and use could be decoupled. The primary energy source would no longer need to travel with the vehicle, as it currently does with hydrocarbon fuels. Instead of tailpipes creating dispersed emissions, the energy (and pollution) could be generated from point sources such as large-scale, centralized facilities with improved efficiency. This would allow the possibility of technologies such as carbon sequestration, which are otherwise impossible for mobile applications. Alternatively, distributed energy generation schemes (such as small scale renewable energy sources) could be used, possibly associated with hydrogen stations.
Aside from the energy generation, hydrogen production could be centralized, distributed or a mixture of both. While generating hydrogen at centralized primary energy plants promises higher hydrogen production efficiency, difficulties in high-volume, long-range hydrogen transportation (due to factors such as hydrogen damage and the ease of hydrogen diffusion through solid materials) make electrical energy distribution attractive within a hydrogen economy. In such a scenario, small regional plants or even local filling stations could generate hydrogen using energy provided through the electrical distribution grid or methane pyrolysis of natural gas. While hydrogen generation efficiency is likely to be lower than for centralized hydrogen generation, losses in hydrogen transport could make such a scheme more efficient in terms of the primary energy used per kilogram of hydrogen delivered to the end user.
The proper balance between hydrogen distribution, long-distance electrical distribution and pyrolysis of natural gas at the destination is one of the primary questions that arises about the hydrogen economy.
Again, the dilemmas of production sources and transportation of hydrogen can now be overcome using on-site (home, business, or fuel station) generation of hydrogen from off-grid renewable sources.
Distributed electrolysis would bypass the problems of distributing hydrogen by distributing electricity instead. It would use existing electrical networks to transport electricity to small, on-site electrolysers located at filling stations. However, accounting for the energy used to produce the electricity and transmission losses would reduce the overall efficiency.
For heating and cooking instead of natural gas
Fuel cells as alternative to internal combustion and electric batteries
One of the main offerings of a hydrogen economy is that the fuel can replace the fossil fuel burned in internal combustion engines and turbines as the primary way to convert chemical energy into kinetic or electrical energy, thereby eliminating greenhouse gas emissions and pollution from that engine. Ad van Wijk, a professor at Future Energy Systems TU Delft, also mentions that hydrogen is better for larger vehicles – such as trucks, buses and ships – than electric batteries. This is because a 1 kg battery, as of 2019[update], can store 0.1 kWh of energy whereas 1 kg of hydrogen has a usable capacity of 33 kWh.
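A back-of-the-envelope comparison of carrier mass for a heavy vehicle, using the round figures above; note that the 0.1 kWh/kg battery figure is conservative for modern lithium-ion packs, the 1,000 kWh requirement is an assumed example, and tank, fuel-cell and drivetrain mass are excluded.

```python
# Mass of energy carrier needed for an assumed 1,000 kWh of onboard energy.
battery_kwh_per_kg = 0.1     # usable energy per kg of battery (figure from the text)
hydrogen_kwh_per_kg = 33     # usable energy per kg of hydrogen (figure from the text)
onboard_energy_kwh = 1000    # assumed requirement for a heavy long-haul vehicle

print(f"Battery: {onboard_energy_kwh / battery_kwh_per_kg:,.0f} kg")    # ~10,000 kg
print(f"Hydrogen: {onboard_energy_kwh / hydrogen_kwh_per_kg:,.0f} kg")  # ~30 kg
```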
Although hydrogen can be used in conventional internal combustion engines, fuel cells, being electrochemical, have a theoretical efficiency advantage over heat engines. Fuel cells are more expensive to produce than common internal combustion engines.
Some types of fuel cells work with hydrocarbon fuels, while all can be operated on pure hydrogen. In the event that fuel cells become price-competitive with internal combustion engines and turbines, large gas-fired power plants could adopt this technology.
Hydrogen gas must be distinguished as "technical-grade" (five nines pure, 99.999%) produced by methane pyrolysis or electrolysis, which is suitable for applications such as fuel cells, and "commercial-grade", which has carbon- and sulfur-containing impurities, but which can be produced by the slightly cheaper steam-reformation process that releases carbon dioxide greenhouse gas. Fuel cells require high-purity hydrogen because the impurities would quickly degrade the life of the fuel cell stack.
Much of the interest in the hydrogen economy concept is focused on the use of fuel cells to power hydrogen vehicles, particularly large trucks. Hydrogen fuel cells suffer from a low power-to-weight ratio. Fuel cells are more efficient than internal combustion engines. If a practical method of hydrogen storage is introduced, and fuel cells become cheaper, they can be economically viable to power hybrid fuel cell/battery vehicles, or purely fuel cell-driven ones. The combination of the fuel cell and electric motor is 2-3 times more efficient than an internal-combustion engine. Capital costs of fuel cells have reduced significantly over recent years, with a modeled cost of $50/kW cited by the Department of Energy.
A 2019 video by Real Engineering noted that using hydrogen as a fuel for cars, as a practical matter, does not help to reduce carbon emissions from transportation. The 95% of hydrogen still produced from fossil fuels releases carbon dioxide, and producing hydrogen from water is an energy-consuming process. Storing hydrogen requires more energy either to cool it down to the liquid state or to put it into tanks under high pressure, and delivering the hydrogen to fueling stations requires more energy and may release more carbon. The hydrogen needed to move a fuel cell vehicle a kilometer costs approximately 8 times as much as the electricity needed to move a battery electric vehicle the same distance. Also in 2019, Katsushi Inoue, the president of Honda Europe, stated, "Our focus is on hybrid and electric vehicles now. Maybe hydrogen fuel cell cars will come, but that's a technology for the next era." A 2020 assessment concluded that hydrogen vehicles are still only 38% efficient, while battery EVs are 80% efficient.
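Taking the two pathway efficiencies quoted in the 2020 assessment at face value, the sketch below compares how much upstream electricity each drivetrain needs per unit of energy delivered at the wheels; the larger 8x figure above also reflects fuel prices and delivery costs, not just conversion losses.

```python
# Upstream electricity per kWh delivered at the wheels, from quoted efficiencies.
fcev_efficiency = 0.38   # electricity -> hydrogen -> fuel cell -> wheels
bev_efficiency = 0.80    # electricity -> battery -> wheels

fcev_input = 1 / fcev_efficiency   # ~2.6 kWh in per kWh out
bev_input = 1 / bev_efficiency     # ~1.25 kWh in per kWh out
print(f"FCEV: {fcev_input:.2f} kWh, BEV: {bev_input:.2f} kWh per kWh at the wheels")
print(f"Ratio: {fcev_input / bev_input:.1f}x more electricity for the hydrogen route")
```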
Other fuel cell technologies based on the exchange of metal ions (e.g. zinc–air fuel cells) are typically more efficient at energy conversion than hydrogen fuel cells, but the widespread use of any electrical energy → chemical energy → electrical energy systems would necessitate the production of electricity.
Use as a transport fuel and system efficiency
An accounting of the energy utilized during a thermodynamic process, known as an energy balance, can be applied to automotive fuels. With today's[when?] technology, the manufacture of hydrogen via methane pyrolysis or steam reforming can be accomplished with a thermal efficiency of 75 to 80 percent. Additional energy will be required to liquefy or compress the hydrogen, and to transport it to the filling station via truck or pipeline. The energy that must be utilized per kilogram to produce, transport and deliver hydrogen (i.e., its well-to-tank energy use) is approximately 50 MJ using technology available in 2004. Subtracting this energy from the enthalpy of one kilogram of hydrogen, which is 141 MJ, and dividing by the enthalpy, yields a thermal energy efficiency of roughly 60%. Gasoline, by comparison, requires less energy input per gallon at the refinery, and comparatively little energy is required to transport it and store it owing to its high energy density per gallon at ambient temperatures. Well-to-tank, the supply chain for gasoline is roughly 80% efficient (Wang, 2002). Another grid-based method of supplying hydrogen would be to use electricity to run electrolysers. Roughly 6% of electricity is lost during transmission along power lines, and the process of converting the fossil fuel to electricity in the first place is roughly 33 percent efficient. Thus, if efficiency is the key determinant, it would be unlikely that hydrogen vehicles would be fueled by such a method; viewed this way, electric vehicles would appear to be a better choice, except for large trucks, where the weight of batteries is a drawback. However, as noted above, hydrogen can be produced from a number of feedstocks, in centralized or distributed fashion, by methane pyrolysis with zero pollution, and these afford more efficient pathways to produce and distribute the fuel.
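The well-to-tank arithmetic described above can be reproduced in a few lines (all figures are the 2004-era values given in the text):

```python
# Well-to-tank efficiency of delivered hydrogen, per the calculation above.
h2_enthalpy_mj_per_kg = 141      # enthalpy of 1 kg of hydrogen, from the text
well_to_tank_input_mj = 50       # energy to produce, transport and deliver 1 kg

efficiency = (h2_enthalpy_mj_per_kg - well_to_tank_input_mj) / h2_enthalpy_mj_per_kg
print(f"Well-to-tank efficiency: {efficiency:.0%}")
# -> about 65%, which the text rounds to "roughly 60%"
```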
A 2006 study of the well-to-wheels efficiency of hydrogen vehicles compared to other vehicles in the Norwegian energy system indicated that hydrogen fuel-cell vehicles (FCV) tend to be about a third as efficient as EVs when electrolysis is used, with hydrogen internal combustion engines (ICE) being barely a sixth as efficient. Even in the case where hydrogen fuel cells get their hydrogen from natural gas reformation rather than electrolysis, and EVs get their power from a natural gas power plant, the EVs still come out ahead 35% to 25% (and only 13% for a H2 ICE). This compares to 14% for a gasoline ICE, 27% for a gasoline ICE hybrid, and 17% for a diesel ICE, also on a well-to-wheels basis.
In 2007, hydrogen was called one of the least efficient and most expensive possible replacements for gasoline (petrol) in terms of reducing greenhouse gases; other technologies may be less expensive and more quickly implemented. A 2010 comprehensive study of hydrogen in transportation applications found that "there are major hurdles on the path to achieving the vision of the hydrogen economy; the path will not be simple or straightforward". Although Ford Motor Company and French Renault-Nissan cancelled their hydrogen car R&D efforts in 2008 and 2009, respectively, they signed a 2009 letter of intent with the other manufacturers and Now GmbH in September 2009 supporting the commercial introduction of FCVs by 2015. A study by The Carbon Trust for the UK Department of Energy and Climate Change suggests that hydrogen technologies have the potential to deliver UK transport with near-zero emissions whilst reducing dependence on imported oil and curtailment of renewable generation. However, the technologies face very difficult challenges, in terms of cost, performance and policy. An Otto-cycle internal-combustion engine running on hydrogen is said to have a maximum efficiency of about 38%, 8% higher than a gasoline internal-combustion engine.
Hydrogen has one of the widest explosive/ignition mix ranges with air of all the gases, with few exceptions such as acetylene, silane, and ethylene oxide. This means that whatever the mix proportion between air and hydrogen, a hydrogen leak will most likely lead to an explosion, not a mere flame, when ignited in an enclosed space. This makes the use of hydrogen particularly dangerous in enclosed areas such as tunnels or underground parking. Pure hydrogen-oxygen flames burn in the ultraviolet range and are nearly invisible to the naked eye, so a flame detector is needed to detect whether a hydrogen leak is burning. Like natural gas, hydrogen is odorless and leaks cannot be detected by smell; this is why an odorant chemical is injected into natural gas to deliver the rotten-egg odor.
Hydrogen codes and standards are codes and standards for hydrogen fuel cell vehicles, stationary fuel cell applications and portable fuel cell applications. There are codes and standards for the safe handling and storage of hydrogen, for example the standard for the installation of stationary fuel cell power systems from the National Fire Protection Association.
Codes and standards have repeatedly been identified as a major institutional barrier to deploying hydrogen technologies and developing a hydrogen economy. As of 2019[update] international standards are needed for the transport, storage and traceability of environmental impact.
One of the measures on the roadmap is to implement higher safety standards like early leak detection with hydrogen sensors.[needs update] The Canadian Hydrogen Safety Program concluded that hydrogen fueling is as safe as, or safer than, compressed natural gas (CNG) fueling. The European Commission has funded the first higher educational program in the world in hydrogen safety engineering at the University of Ulster. It is expected that the general public will be able to use hydrogen technologies in everyday life with at least the same level of safety and comfort as with today's fossil fuels.
Although much of an existing natural gas grid could be reused with 100% hydrogen, eliminating natural gas from a large area such as Britain would require huge investment. Switching from natural gas to low-carbon heating is more costly if the carbon costs of natural gas are not reflected in its price.
Power plant capacity that now goes unused at night could be used to produce green hydrogen, but this would not be enough; therefore turquoise hydrogen from non-polluting methane pyrolysis or blue hydrogen with carbon capture and storage is needed, possibly after autothermal reforming of methane rather than steam methane reforming.
As of 2020[update] green hydrogen costs between $2.50–6.80 per kilogram, while turquoise hydrogen and blue hydrogen each cost $1.40–2.40/kg, compared with high-carbon grey hydrogen at $1–1.80/kg. Deployment of hydrogen can provide a cost-effective option to displace carbon-polluting fossil fuels in applications where emissions reductions would otherwise be impractical and/or expensive. These may include heat for buildings and industry, conversion of natural gas-fired power stations, and fuel for aviation and, importantly, heavy trucks.
In Australia, the Australian Renewable Energy Agency (ARENA) has invested $55 million in 28 hydrogen projects, from early-stage research and development to early-stage trials and deployments. The agency's stated goal is to produce hydrogen by electrolysis for $2 per kilogram, a target announced by Minister for Energy and Emissions Reduction Angus Taylor in a 2021 Low Emissions Technology Statement.
Examples and pilot programs
The distribution of hydrogen for the purpose of transportation is currently[when?] being tested around the world, particularly in the US (California, Massachusetts), Canada, Japan, the EU (Portugal, Norway, Denmark, Germany), and Iceland, but the cost is very high.
Several automobile manufacturers, such as GM and Toyota, have developed vehicles using hydrogen. However, as of February 2020, infrastructure for hydrogen was underdeveloped except in some parts of California. The United States has its own hydrogen policy. A joint venture between NREL and Xcel Energy combines wind power and hydrogen power in the same way in Colorado. Newfoundland and Labrador Hydro is converting the current wind-diesel power system on the remote island of Ramea into a wind-hydrogen hybrid power system facility. A similar pilot project on Stuart Island uses solar power, instead of wind power, to generate electricity. When excess electricity is available after the batteries are fully charged, hydrogen is generated by electrolysis and stored for later production of electricity by fuel cell. The US also has a large natural gas pipeline system already in place.
Countries in the EU which have a relatively large natural gas pipeline system already in place include Belgium, Germany, France, and the Netherlands. In 2020, The EU launched its European Clean Hydrogen Alliance (ECHA).
The UK started a fuel cell pilot program in January 2004; the program ran two fuel cell buses on route 25 in London until December 2005, and switched to route RV1 until January 2007. The Hydrogen Expedition is currently working to create a hydrogen fuel cell-powered ship and to use it to circumnavigate the globe, as a way to demonstrate the capability of hydrogen fuel cells.
Western Australia's Department of Planning and Infrastructure operated three Daimler Chrysler Citaro fuel cell buses as part of its Sustainable Transport Energy for Perth Fuel Cells Bus Trial in Perth. The buses were operated by Path Transit on regular Transperth public bus routes. The trial began in September 2004 and concluded in September 2007. The buses' fuel cells used a proton exchange membrane system and were supplied with raw hydrogen from a BP refinery in Kwinana, south of Perth. The hydrogen was a byproduct of the refinery's industrial process. The buses were refueled at a station in the northern Perth suburb of Malaga.
Iceland has committed to becoming the world's first hydrogen economy by the year 2050. Iceland is in a unique position. Presently,[when?] it imports all the petroleum products necessary to power its automobiles and fishing fleet. Iceland has large geothermal resources, so much so that the local price of electricity actually is lower than the price of the hydrocarbons that could be used to produce that electricity.
Iceland already converts its surplus electricity into exportable goods and hydrocarbon replacements. In 2002, it produced 2,000 tons of hydrogen gas by electrolysis, primarily for the production of ammonia (NH3) for fertilizer. Ammonia is produced, transported, and used throughout the world, and 90% of the cost of ammonia is the cost of the energy to produce it.
Neither industry directly replaces hydrocarbons. Reykjavík, Iceland, had a small pilot fleet of city buses running on compressed hydrogen, and research on powering the nation's fishing fleet with hydrogen is under way (for example by companies such as Icelandic New Energy). For more practical purposes, Iceland might process imported oil with hydrogen to extend it, rather than to replace it altogether.
The Reykjavík buses are part of a larger program, HyFLEET:CUTE, operating hydrogen fueled buses in eight European cities. HyFLEET:CUTE buses were also operated in Beijing, China and Perth, Australia (see below). A pilot project demonstrating a hydrogen economy is operational on the Norwegian island of Utsira. The installation combines wind power and hydrogen power. In periods when there is surplus wind energy, the excess power is used for generating hydrogen by electrolysis. The hydrogen is stored, and is available for power generation in periods when there is little wind.
India is said to be adopting hydrogen and H-CNG for several reasons, among them the fact that a national rollout of natural gas networks is already taking place and natural gas is already a major vehicle fuel. In addition, India suffers from extreme air pollution in urban areas.
Currently, however, hydrogen energy is just at the Research, Development and Demonstration (RD&D) stage. As a result, the number of hydrogen stations may still be low, although many more are expected to be introduced soon.
The Turkish Ministry of Energy and Natural Resources and the United Nations Industrial Development Organization signed a $40 million trust fund agreement in 2003 for the creation of the International Centre for Hydrogen Energy Technologies (UNIDO-ICHET) in Istanbul, which started operation in 2004. A hydrogen forklift, a hydrogen cart and a mobile house powered by renewable energies are being demonstrated on UNIDO-ICHET's premises. An uninterruptible power supply system has been working since April 2009 in the headquarters of the Istanbul Sea Buses company.
Another indicator of the presence of large natural gas infrastructures already in place in countries and in use by citizens is the number of natural gas vehicles present in the country. The countries with the largest number of natural gas vehicles are (in order of magnitude): Iran, China, Pakistan, Argentina, India, Brazil, Italy, Colombia, Thailand, Uzbekistan, Bolivia, Armenia, Bangladesh, Egypt, Peru, Ukraine, United States. Natural gas vehicles can also be converted to run on hydrogen.
Some hospitals have installed combined electrolyser-storage-fuel cell units for local emergency power. These are advantageous for emergency use because of their low maintenance requirement and ease of location compared to internal combustion driven generators.
Also, in some private homes, fuel cell micro-CHP plants can be found, which can operate on hydrogen or on other fuels such as natural gas or LPG. When running on natural gas, the system relies on steam reforming of natural gas to convert the natural gas to hydrogen prior to use in the fuel cell. This still emits CO2 (see reaction above), but temporarily running on natural gas can be a good solution until hydrogen starts to be distributed through the (natural gas) piping system.
Partial hydrogen economy
Hydrogen is simply a method of storing and transmitting energy. Various alternative energy transmission and storage scenarios which begin with hydrogen production, but do not use it for all parts of the storage and transmission infrastructure, may be more economic in both the near and far term. These include:
An alternative to gaseous hydrogen as an energy carrier is to bond it with nitrogen from the air to produce ammonia, which can be easily liquefied, transported, and used (directly or indirectly) as a clean and renewable fuel. For example, researchers at CSIRO in Australia in 2018 fuelled a Toyota Mirai and Hyundai Nexo with hydrogen separated from ammonia using a membrane technology.
Hybrid heat pumps
Hybrid heat pumps (not to be confused with air water hybrids) also include a boiler which could run on methane or hydrogen, and could be a pathway to full decarbonisation of residential heating as the boiler would be used to top up the heating when the weather was very cold.
As of 2019[update], although technically possible, production of syngas from hydrogen and carbon dioxide from bio-energy with carbon capture and storage (BECCS) via the Sabatier reaction is limited by the amount of sustainable bioenergy available; therefore any bio-SNG made may be reserved for production of aviation biofuel.
- United States Hydrogen Policy
- Hydrogen damage
- Hydrogen embrittlement
- Alternative fuel
- Energy development
- Fuel Cells and Hydrogen Joint Technology Initiative
- Formic acid
- Hydrogen internal combustion engine vehicle
- Hydrogen prize
- Hydrogen-powered aircraft
- International Journal of Hydrogen Energy
- Lolland Hydrogen Community
- Methane pyrolysis
- "Transitioning to hydrogen: Assessing the engineering risks and uncertainties". theiet.org. Archived from the original on 2020-06-19. Retrieved 2020-04-11.
- CCJ News. "How fuel cell trucks produce electric power and how they're fueled". CCJ News. Commercial Carrier Journal. Archived from the original on 19 October 2020. Retrieved 19 October 2020.
- "A portfolio of power-trains for Europe: a fact-based analysis" (PDF). International Partnership for Hydrogen and Fuel Cells in the Economy. Archived from the original (PDF) on 15 October 2017. Retrieved 9 September 2020.
- Toyota. "Hydrogen Fuel-Cell Class 8 Truck". Hydrogen-Powered Truck Will Offer Heavy-Duty Capability and Clean Emissions. Toyota. Archived from the original on 19 October 2020. Retrieved 19 October 2020.
- IEA H2 2019, p. 13
- "Hydrogen Insights: A perspective on hydrogen investment, market development and cost competitiveness" (PDF). Hydrogen Council. February 2021. Archived (PDF) from the original on 17 February 2021. Retrieved 21 February 2021.
- "Hydrogen isn't the fuel of the future. It's already here". World Economic Forum. Archived from the original on 2019-11-02. Retrieved 2019-11-29.
- Deign, Jason (2019-10-14). "10 Countries Moving Toward a Green Hydrogen Economy". greentechmedia.com. Archived from the original on 2019-12-09. Retrieved 2019-11-29.
- Prinzhofer, Alain; Tahara Cissé, Cheick Sidy; Diallo, Aliou Boubacar (October 2018). "Discovery of a large accumulation of natural hydrogen in Bourakebougou (Mali)". International Journal of Hydrogen Energy. 43 (42): 19315–19326. doi:10.1016/j.ijhydene.2018.08.193.
- Larin, Nikolay; Zgonnik, Viacheslav; Rodina, Svetlana; Deville, Eric; Prinzhofer, Alain; Larin, Vladimir N. (September 2015). "Natural Molecular Hydrogen Seepage Associated with Surficial, Rounded Depressions on the European Craton in Russia". Natural Resources Research. 24 (3): 369–383. doi:10.1007/s11053-014-9257-5. S2CID 128762620.
- Gaucher, Eric C. (1 February 2020). "New Perspectives in the Industrial Exploration for Native Hydrogen". Elements. 16 (1): 8–9. doi:10.2138/gselements.16.1.8.
- Truche, Laurent; Bazarkina, Elena F. (2019). "Natural hydrogen the fuel of the 21 st century". E3S Web of Conferences. 98: 03006. Bibcode:2019E3SWC..9803006T. doi:10.1051/e3sconf/20199803006.
- Snyder, John (2019-09-05). "Hydrogen fuel cells gain momentum in maritime sector". Riviera Maritime Media. Archived from the original on 2021-02-08. Retrieved 2020-11-29.
- "Global Hydrogen Generation Market ize | Industry Report, 2020-2027". Archived from the original on 2019-04-16. Retrieved 2019-03-05.
- Upham, D. Chester. "Catalytic molten metals for the direct conversion of methane to hydrogen and separable non-polluting carbon in a single reaction-step commercial process (at potentially low-cost). This would provide no-pollution hydrogen from natural gas with no GHG emission, essentially forever". ScienceMag.org. American Association for Advancement of Science. Retrieved 31 October 2020.
- Upham, D. Chester; Agarwal, Vishal; Khechfe, Alexander; Snodgrass, Zachary R.; Gordon, Michael J.; Metiu, Horia; McFarland, Eric W. (17 November 2017). "Catalytic molten metals for the direct conversion of methane to hydrogen and separable carbon". Science. 358 (6365): 917–921. Bibcode:2017Sci...358..917U. doi:10.1126/science.aao5023. PMID 29146810. S2CID 206663568.
- BASF. "BASF researchers working on fundamentally new, low-carbon production processes, Methane Pyrolysis". United States Sustainability. BASF. Archived from the original on 19 October 2020. Retrieved 19 October 2020.
- UKCCC H2 2018, p. 20
- "Hydrogen could help decarbonise the global economy". Financial Times. Archived from the original on 2019-09-17. Retrieved 2019-08-31.
- IEA H2 2019, p. 18
- National Hydrogen Association; United States Department of Energy. "The History of Hydrogen" (PDF). hydrogenassociation.org. National Hydrogen Association. p. 1. Archived from the original (PDF) on 14 July 2010. Retrieved 17 December 2010.
- "Daedalus or Science and the Future, A paper read to the Heretics, Cambridge, on February 4th, 1923 – Transcript 1993". Archived from the original on 2017-11-15. Retrieved 2016-01-16.
- Jones, Lawrence W (13 March 1970). Toward a liquid hydrogen fuel economy. University of Michigan Environmental Action for Survival Teach In. Ann Arbor, Michigan: University of Michigan. hdl:2027.42/5800.
- Bakker, Sjoerd (2010). "The car industry and the blow-out of the hydrogen hype" (PDF). Energy Policy. 38 (11): 6540–6544. doi:10.1016/j.enpol.2010.07.019. Archived (PDF) from the original on 2018-11-03. Retrieved 2019-12-11.
- Harrison, James. "Reactions: Hydrogen hype". Chemical Engineer. 58: 774–775. Archived from the original on 2021-02-08. Retrieved 2017-08-31.
- Rizzi, Francesco; Annunziata, Eleonora; Liberati, Guglielmo; Frey, Marco (2014). "Technological trajectories in the automotive industry: are hydrogen technologies still a possibility?". Journal of Cleaner Production. 66: 328–336. doi:10.1016/j.jclepro.2013.11.069.
- Murai, Shusuke (2018-03-05). "Japan's top auto and energy firms tie up to promote development of hydrogen stations". The Japan Times Online. Japan Times. Archived from the original on 2018-04-17. Retrieved 16 April 2018.
- Mishra, Ankit (2018-03-29). "Prospects of fuel-cell electric vehicles boosted with Chinese backing". Energy Post. Archived from the original on 2018-04-17. Retrieved 16 April 2018.
- IEA H2 2019, p. 17
- IEA H2 2019, p. 14
- Crabtree, George W.; Dresselhaus, Mildred S.; Buchanan, Michelle V. (2004). The Hydrogen Economy (PDF) (Technical report). Archived (PDF) from the original on 2020-04-10. Retrieved 2020-03-05.
- Mealey, Rachel. "Automotive hydrogen membranes-huge breakthrough for cars" Archived 2019-06-10 at the Wayback Machine, ABC, August 8, 2018
- "Archived copy". Argonne National Laboratory. Archived from the original on 2007-09-22. Retrieved 2007-06-15.CS1 maint: archived copy as title (link)
- Argonne National Laboratory. "Configuration and Technology Implications of Potential Nuclear Hydrogen System Applications" (PDF). Archived from the original (PDF) on 5 August 2013. Retrieved 29 May 2013.
- "Vehicle Technologies Program: Fact #205: February 25, 2002 Hydrogen Cost and Worldwide Production". .eere.energy.gov. Archived from the original on 2013-07-01. Retrieved 2009-09-19.
- "Bellona-HydrogenReport". Interstatetraveler.us. Archived from the original on 2016-06-03. Retrieved 2010-07-05.
- BMWi (June 2020). The national hydrogen strategy (PDF). Berlin, Germany: Federal Ministry for Economic Affairs and Energy (BMWi). Archived (PDF) from the original on 2020-12-13. Retrieved 2020-11-27.
- Van de Graaf, Thijs; Overland, Indra; Scholten, Daniel; Westphal, Kirsten (December 2020). "The new oil? The geopolitics and international governance of hydrogen". Energy Research & Social Science. 70: 101667. doi:10.1016/j.erss.2020.101667. PMC 7326412. PMID 32835007.
- Sansom, Robert; Baxter, Jenifer; Brown, Andy; Hawksworth, Stuart; McCluskey, Ian (2020). Transitioning to hydrogen: assessing the engineering risks and uncertainties (PDF). London, United Kingdom: The Institution of Engineering and Technology (IET). Archived (PDF) from the original on 2020-05-08. Retrieved 2020-03-22.
- Bruce, S; Temminghoff, M; Hayward, J; Schmidt, E; Munnings, C; Palfreyman, D; Hartley, P (2018). National hydrogen roadmap: pathways to an economically sustainable hydrogen industry in Australia (PDF). Australia: CSIRO. Archived (PDF) from the original on 2020-12-08. Retrieved 2020-11-28.
- Zgonnik, Viacheslav (April 2020). "The occurrence and geoscience of natural hydrogen: A comprehensive review". Earth-Science Reviews. 203: 103140. Bibcode:2020ESRv..20303140Z. doi:10.1016/j.earscirev.2020.103140.
- "Natural Hydrogen Energy LLC". Archived from the original on 2020-10-25. Retrieved 2020-09-29.
- "Definition of Green Hydrogen" (PDF). Clean Energy Partnership. Retrieved 2014-09-06.[permanent dead link]
- Schneider, Stefan; Bajohr, Siegfried; Graf, Frank; Kolb, Thomas (October 2020). "State of the Art of Hydrogen Production via Pyrolysis of Natural Gas". ChemBioEng Reviews. 7 (5): 150–158. doi:10.1002/cben.202000014.
- Sampson, Joanna (11 February 2019). "Blue hydrogen for a green future". gasworld. Archived from the original on 2019-05-09. Retrieved 2019-06-03.
- "Brown coal the hydrogen economy stepping stone | ECT". Archived from the original on 2019-04-08. Retrieved 2019-06-03.
- "Actual Worldwide Hydrogen Production from …". Arno A Evers. December 2008. Archived from the original on 2015-02-02. Retrieved 2008-05-09.
- Fernandez, Sonia. "Researchers develop potentially low-cost, low-emissions technology that can convert methane without forming CO2". Phys-Org. American Institute of Physics. Archived from the original on 19 October 2020. Retrieved 19 October 2020.
- Palmer, Clarke; Upham, D. Chester; Smart, Simon; Gordon, Michael J.; Metiu, Horia; McFarland, Eric W. (January 2020). "Dry reforming of methane catalysed by molten metal alloys". Nature Catalysis. 3 (1): 83–89. doi:10.1038/s41929-019-0416-2. S2CID 210862772.
- Cartwright, Jon. "The reaction that would give us clean fossil fuels forever". NewScientist. New Scientist Ltd. Archived from the original on 26 October 2020. Retrieved 30 October 2020.
- Karlsruhe Institute of Technology. "Hydrogen from methane without CO2 emissions". Phys.Org. Phys.Org. Archived from the original on 21 October 2020. Retrieved 30 October 2020.
- Badwal, Sukhvinder P. S.; Giddey, Sarbjit S.; Munnings, Christopher; Bhatt, Anand I.; Hollenkamp, Anthony F. (24 September 2014). "Emerging electrochemical energy conversion and storage technologies". Frontiers in Chemistry. 2: 79. Bibcode:2014FrCh....2...79B. doi:10.3389/fchem.2014.00079. PMC 4174133. PMID 25309898.
- Werner Zittel; Reinhold Wurster (1996-07-08). "Chapter 3: Production of Hydrogen. Part 4: Production from electricity by means of electrolysis". HyWeb: Knowledge - Hydrogen in the Energy Sector. Ludwig-Bölkow-Systemtechnik GmbH. Archived from the original on 2007-02-07. Retrieved 2010-10-01.
- Bjørnar Kruse; Sondre Grinna; Cato Buch (2002-02-13). "Hydrogen – Status and Possibilities". The Bellona Foundation. Archived from the original (PDF) on 2011-07-02. "Efficiency factors for PEM electrolysers up to 94% are predicted, but this is only theoretical at this time."
- "high-rate and high efficiency 3D water electrolysis". Grid-shift.com. Archived from the original on 2012-03-22. Retrieved 2011-12-13.
- "Wide Spread Adaption of Competitive Hydrogen Solution" (PDF). nelhydrogen.com. Nel ASA. Archived (PDF) from the original on 2018-04-22. Retrieved 22 April 2018.
- Philibert, Cédric. "Commentary: Producing industrial hydrogen from renewable energy". iea.org. International Energy Agency. Archived from the original on 22 April 2018. Retrieved 22 April 2018.
- IEA H2 2019, p. 37
- "How Much Electricity/Water Is Needed to Produce 1 kg of H2 by Electrolysis?". Archived from the original on 17 June 2020. Retrieved 17 June 2020.
- https://www.hfpeurope.org/infotools/energyinfos__e/hydrogen/main03.html[permanent dead link]
- Tao, Yongzhen; Chen, Yang; Wu, Yongqiang; He, Yanling; Zhou, Zhihua (1 February 2007). "High hydrogen yield from a two-step process of dark- and photo-fermentation of sucrose". International Journal of Hydrogen Energy. 32 (2): 200–206. doi:10.1016/j.ijhydene.2006.06.034. INIST:18477081.
- "Hydrogen production from organic solid matter". Biohydrogen.nl. Archived from the original on 2011-07-20. Retrieved 2010-07-05.
- Hemschemeier, Anja; Melis, Anastasios; Happe, Thomas (December 2009). "Analytical approaches to photobiological hydrogen production in unicellular green algae". Photosynthesis Research. 102 (2–3): 523–540. doi:10.1007/s11120-009-9415-5. PMC 2777220. PMID 19291418.
- "NanoLogix generates energy on-site with bioreactor-produced hydrogen". Solid State Technology. September 20, 2007. Archived from the original on 2018-05-15. Retrieved 14 May 2018.
- "Power from plants using microbial fuel cell" (in Dutch). Archived from the original on 2021-02-08. Retrieved 2010-07-05.
- "2001-High pressure electrolysis - The key technology for efficient H.2" (PDF). Retrieved 2010-07-05.[permanent dead link]
- Carmo, M; Fritz D; Mergel J; Stolten D (2013). "A comprehensive review on PEM water electrolysis". Journal of Hydrogen Energy. 38 (12): 4901–4934. doi:10.1016/j.ijhydene.2013.01.151.
- "2003-PHOEBUS-Pag.9" (PDF). Archived from the original (PDF) on 2009-03-27. Retrieved 2010-07-05.
- "Finland exporting TEN-T fuel stations". Archived from the original on 2016-08-28. Retrieved 2016-08-22.
- "Steam heat: researchers gear up for full-scale hydrogen plant" (Press release). Science Daily. 2008-09-18. Archived from the original on 2008-09-21. Retrieved 2008-09-19.
- "Nuclear Hydrogen R&D Plan" (PDF). U.S. Dept. of Energy. March 2004. Archived from the original (PDF) on 2008-05-18. Retrieved 2008-05-09.
- Valenti, Giovanni; Boni, Alessandro; Melchionna, Michele; Cargnello, Matteo; Nasi, Lucia; Bertoni, Giovanni; Gorte, Raymond J.; Marcaccio, Massimo; Rapino, Stefania; Bonchio, Marcella; Fornasiero, Paolo; Prato, Maurizio; Paolucci, Francesco (December 2016). "Co-axial heterostructures integrating palladium/titanium dioxide with carbon nanotubes for efficient electrocatalytic hydrogen evolution". Nature Communications. 7 (1): 13549. Bibcode:2016NatCo...713549V. doi:10.1038/ncomms13549. PMC 5159813. PMID 27941752.
- William Ayers, US Patent 4,466,869 Photolytic Production of Hydrogen
- Navarro Yerga, Rufino M.; Álvarez Galván, M. Consuelo; del Valle, F.; Villoria de la Mano, José A.; Fierro, José L. G. (22 June 2009). "Water Splitting on Semiconductor Catalysts under Visible-Light Irradiation". ChemSusChem. 2 (6): 471–485. doi:10.1002/cssc.200900018. PMID 19536754.
- Navarro, R.M.; Del Valle, F.; Villoria de la Mano, J.A.; Álvarez-Galván, M.C.; Fierro, J.L.G. (2009). "Photocatalytic Water Splitting Under Visible Light". Advances in Chemical Engineering - Photocatalytic Technologies. Advances in Chemical Engineering. 36. pp. 111–143. doi:10.1016/S0065-2377(09)00404-9. ISBN 978-0-12-374763-1.
- Nann, Thomas; Ibrahim, Saad K.; Woi, Pei-Meng; Xu, Shu; Ziegler, Jan; Pickett, Christopher J. (22 February 2010). "Water Splitting by Visible Light: A Nanophotocathode for Hydrogen Production". Angewandte Chemie International Edition. 49 (9): 1574–1577. doi:10.1002/anie.200906262. PMID 20140925.
- Yamamura, Tetsushi (August 2, 2015). "Panasonic moves closer to home energy self-sufficiency with fuel cells". Asahi Shimbun. Archived from the original on August 7, 2015. Retrieved 2015-08-02.
- "DLR Portal - DLR scientists achieve solar hydrogen production in a 100-kilowatt pilot plant". Dlr.de. 2008-11-25. Archived from the original on 2013-06-22. Retrieved 2009-09-19.
- "353 Thermochemical cycles" (PDF). Archived (PDF) from the original on 2009-02-05. Retrieved 2010-07-05.
- UNLV Thermochemical cycle automated scoring database (public)[permanent dead link]
- "Development of Solar-powered Thermochemical Production of Hydrogen from Water" (PDF). Archived (PDF) from the original on 2007-04-17. Retrieved 2010-07-05.
- Jie, Xiangyu; Li, Weisong; Slocombe, Daniel; Gao, Yige; Banerjee, Ira; Gonzalez-Cortes, Sergio; Yao, Benzhen; AlMegren, Hamid; Alshihri, Saeed; Dilworth, Jonathan; Thomas, John; Xiao, Tiancun; Edwards, Peter (2020). "Microwave-initiated catalytic deconstruction of plastic waste into hydrogen and high-value carbons". Nature Catalysis. 3 (11): 902–912. doi:10.1038/s41929-020-00518-5. ISSN 2520-1158.
- http://www.nedstack.com/images/stories/news/documents/20120202_Press%20release%20Solvay%20PEM%20Power%20Plant%20start%20up.pdf Archived 2014-12-08 at the Wayback Machine Nedstack
- "Different Gases from Steel Production Processes". Archived from the original on 27 March 2016. Retrieved 5 July 2020.
- "Production of Liquefied Hydrogen Sourced by COG" (PDF). Archived (PDF) from the original on 8 February 2021. Retrieved 8 July 2020.
- Zubrin, Robert (2007). Energy Victory. Amherst, New York: Prometheus Books. pp. 117–118. ISBN 978-1-59102-591-7.
The situation is much worse than this, however, because before the hydrogen can be transported anywhere, it needs to be either compressed or liquefied. To liquefy it, it must be refrigerated down to a temperature of -253°C (20 degrees above absolute zero). At these temperatures, fundamental laws of thermodynamics make refrigerators extremely inefficient. As a result, about 40 percent of the energy in the hydrogen must be spent to liquefy it. This reduces the actual net energy content of our product fuel to 792 kcal. In addition, because it is a cryogenic liquid, still more energy could be expected to be lost as the hydrogen boils away as it is warmed by heat leaking in from the outside environment during transport and storage.
- Savvides, Nick (2017-01-11). "Japan plans to use imported liquefied hydrogen to fuel Tokyo 2020 Olympics". Safety At Sea. IHS Markit Maritime Portal. Archived from the original on 2018-04-23. Retrieved 22 April 2018.
- S.Sadaghiani, Mirhadi (2 March 2017). "Introducing and energy analysis of a novel cryogenic hydrogen liquefaction process configuration". International Journal of Hydrogen Energy. 42 (9).
- 1994 – ECN abstract Archived 2004-01-02 at the Wayback Machine. Hyweb.de. Retrieved on 2012-01-08.
- European Renewable Energy Network Archived 2019-07-17 at the Wayback Machine pp. 86, 188
- "Energy storage – the role of electricity" (PDF). European Commission. European Commission. Archived from the original (PDF) on 8 November 2020. Retrieved 22 April 2018.
- "Hyunder". Archived from the original on 2013-11-11. Retrieved 2013-11-11.
- Storing renewable energy: Is hydrogen a viable solution?[permanent dead link]
- "BRINGING NORTH SEA ENERGY ASHORE EFFICIENTLY" (PDF). worldenergy.org. World Energy Council Netherlands. Archived (PDF) from the original on 23 April 2018. Retrieved 22 April 2018.
- GERDES, JUSTIN (2018-04-10). "Enlisting Abandoned Oil and Gas Wells as 'Electron Reserves'". greentechmedia.com. Wood MacKenzie. Archived from the original on 2018-04-23. Retrieved 22 April 2018.
- Anscombe, Nadya (4 June 2012). "Energy storage: Could hydrogen be the answer?". Solar Novus Today. Archived from the original on 19 August 2013. Retrieved 3 November 2012.
- Naturalhy Archived 2012-01-18 at the Wayback Machine
- Kijk magazine, 10, 2019
- 50% hydrogen for Europe. A manifesto by Frank Wouters and Ad van Wijk
- Bhadhesia, Harry. "Prevention of Hydrogen Embrittlement in Steels" (PDF). Phase Transformations & Complex Properties Research Group, Cambridge University. Archived (PDF) from the original on 11 November 2020. Retrieved 17 December 2020.
- IEA H2 2019, p. 15
- "Japan's Hydrogen Strategy and Its Economic and Geopolitical Implications". Etudes de l'Ifri. Archived from the original on 10 February 2019. Retrieved 9 February 2019.
- "South Korea's Hydrogen Economy Ambitions". The Diplomat. Archived from the original on 9 February 2019. Retrieved 9 February 2019.
- "The world's largest-class hydrogen production, Fukushima Hydrogen Energy Research Field (FH2R) now is completed at Namie town in Fukushima". Toshiba Energy Press Releases. Toshiba Energy Systems and Solutions Corporations. 7 March 2020. Archived from the original on 22 April 2020. Retrieved 1 April 2020.
- Editor (2019-06-14). "Hydrogen could replace natural gas to heat homes and slash carbon emissions, new report claims | Envirotec". Archived from the original on 2019-09-25. Retrieved 2019-09-25.CS1 maint: extra text: authors list (link)
- Murray, Jessica (2020-01-24). "Zero-carbon hydrogen injected into gas grid for first time in groundbreaking UK trial". The Guardian. ISSN 0261-3077. Archived from the original on 2020-01-24. Retrieved 2020-01-24.
- frankwouters1 (2019-05-07). "A European Hydrogen Manifesto". Frank Wouters. Archived from the original on 2020-09-20. Retrieved 2019-12-02.
- "idealhy.eu - Liquid Hydrogen Outline". idealhy.eu. Archived from the original on 2020-11-11. Retrieved 2019-12-02.
- Electricity from wood through the combination of gasification and solid oxide fuel cells Archived 2011-03-13 at the Wayback Machine, Ph.D. Thesis by Florian Nagel, Swiss Federal Institute of Technology Zurich, 2008
- "Power-to-weight ratio". .eere.energy.gov. 2009-06-23. Archived from the original on 2010-06-09. Retrieved 2010-07-05.
- "EPA mileage estimates". Honda FCX Clarity - Vehicle Specifications. American Honda Motor Company. Archived from the original on 1 July 2013. Retrieved 17 December 2010.
- "Fuel Cell Technologies Office; Accomplishments and Progress". US Department of Energy. Archived from the original on 15 April 2018. Retrieved 16 April 2018.
- Ruffo, Gustavo Henrique. "This Video Compares BEVs to FCEVs and the More Efficient Is..." Archived 2020-10-26 at the Wayback Machine, InsideEVs.com, September 29, 2019
- Allen, James. "Honda: Now Is The Right Time to Embrace Electric Cars" Archived 2020-11-24 at the Wayback Machine, The Sunday Times, November 4, 2019
- Baxter, Tom (3 June 2020). "Hydrogen cars won't overtake electric vehicles because they're hampered by the laws of science". The Conversation. Archived from the original on 31 July 2020. Retrieved 24 November 2020.
- Kluth, Andreas. "How Hydrogen Is and Isn't the Future of Energy" Archived 2020-11-24 at the Wayback Machine, Bloomberg.com. November 9, 2020
- Kreith, 2004
- Seba, Tony (23 October 2015). "Toyota vs Tesla - hydrogen fuel cell vehicles vs electric cars". EnergyPost.eu. Archived from the original on 6 December 2016. Retrieved 3 December 2016.
- Bossel, Ulrich (2006). "Does a Hydrogen Economy Make Sense?". Proceedings of the IEEE. 94 (10): 1826–1837. doi:10.1109/JPROC.2006.883715. S2CID 39397471. Mirror Archived 2015-09-06 at the Wayback Machine
- Ann Mari Svensson; Steffen Møller-Holst; Ronny Glöckner; Ola Maurstad (September 2006). "Well-to-wheel study of passenger vehicles in the Norwegian energy system". Energy. 32 (4): 437–45. doi:10.1016/j.energy.2006.07.029.
- Boyd, Robert S. (May 15, 2007). "Hydrogen cars may be a long time coming". McClatchy Newspapers. Archived from the original on May 1, 2009. Retrieved 2008-05-09.
- Squatriglia, Chuck (May 12, 2008). "Hydrogen Cars Won't Make a Difference for 40 Years". Wired. Archived from the original on 2008-05-12. Retrieved 2008-05-13.
- National Academy of Engineering (2004). The Hydrogen Economy: Opportunities, Costs, Barriers, and R&D Needs. Washington, D.C.: The National Academies Press. doi:10.17226/10922. ISBN 978-0-309-53068-2. Archived from the original on 8 September 2010. Retrieved 17 December 2010.
- "Ford Motor Company Business Plan" Archived 2017-03-27 at the Wayback Machine, December 2, 2008
- Dennis, Lyle. "Nissan Swears Off Hydrogen and Will Only Build Electric Cars" Archived 2010-12-21 at the Wayback Machine, All Cars Electric, February 26, 2009
- "Letter of Understanding 2009" (PDF). Archived (PDF) from the original on 2013-09-27. Retrieved 2012-07-08.
- "Hydrogen for transport" Archived 2015-01-20 at the Wayback Machine, The Carbon Trust, 28 November 2014. Retrieved on 20 January 2015.
- BMW Group Clean Energy ZEV Symposium. September 2006, p. 12
- "Liebreich: Separating Hype from Hydrogen – Part Two: The Demand Side". BloombergNEF. 2020-10-16. Archived from the original on 2021-01-26. Retrieved 2021-01-26.
- "This company may have solved one of the hardest problems in clean energy". Vox. 2018-02-16. Archived from the original on 2019-11-12. Retrieved 9 February 2019.
- Utgikar, Vivek P; Thiesen, Todd (2005). "Safety of compressed hydrogen fuel tanks: Leakage from stationary vehicles". Technology in Society. 27 (3): 315–320. doi:10.1016/j.techsoc.2005.04.005.
- "Hydrogen Sensor: Fast, Sensitive, Reliable, and Inexpensive to Produce" (PDF). Argonne National Laboratory. September 2006. Archived from the original (PDF) on 2013-07-01. Retrieved 2008-05-09.
- "Canadian Hydrogen Safety Program testing H2/CNG". Hydrogenandfuelcellsafety.info. Archived from the original on 2011-07-21. Retrieved 2010-07-05.
- UKCCC H2 2018, p. 113
- "A wake-up call on green hydrogen: the amount of wind and solar needed is immense | Recharge". Recharge | Latest renewable energy news. Archived from the original on 2020-04-11. Retrieved 2020-04-11.
- UKCCC H2 2018, p. 7
- UKCCC H2 2018, p. 124
- UKCCC H2 2018, p. 118
- "Australia's pathway to $2 per kg hydrogen - ARENAWIRE". Australian Renewable Energy Agency. Archived from the original on 2020-12-15. Retrieved 2021-01-06.
- "Are hydrogen fuel cell vehicles the future of autos?". ABC News. Archived from the original on 2021-01-17. Retrieved 2021-01-18.
- Siddiqui, Faiz. "The plug-in electric car is having its moment. But despite false starts, Toyota is still trying to make the fuel cell happen". Washington Post. ISSN 0190-8286. Archived from the original on 2021-01-19. Retrieved 2021-01-18.
- "Experimental 'wind to hydrogen' system up and running". Physorg.com. January 8, 2007. Archived from the original on 2013-07-01. Retrieved 2008-05-09.
- "Hydrogen Engine Center Receives Order for Hydrogen Power Generator 250kW Generator for Wind/Hydrogen Demonstration" (PDF). Hydrogen Engine Center, Inc. May 16, 2006. Archived from the original (PDF) on May 27, 2008. Retrieved 2008-05-09.
- "Stuart Island Energy Initiative". Archived from the original on 2013-07-01. Retrieved 2008-05-09.
- "Hydrogen transport & distribution". Archived from the original on 2019-09-29. Retrieved 2019-09-29.
- "Archived copy". Archived from the original on 2020-08-07. Retrieved 2020-08-14.CS1 maint: archived copy as title (link)
- "ECHA". Archived from the original on 2020-08-12. Retrieved 2020-08-14.
- "Hydrogen buses". Transport for London. Archived from the original on March 23, 2008. Retrieved 2008-05-09.
- "The Hydrogen Expedition" (PDF). January 2005. Archived from the original (PDF) on 2008-05-27. Retrieved 2008-05-09.
- "Perth Fuel Cell Bus Trial". Department for Planning and Infrastructure, Government of Western Australia. 13 April 2007. Archived from the original on 7 June 2008. Retrieved 2008-05-09.
- Hannesson, Hjálmar W. (2007-08-02). "Climate change as a global challenge". Iceland Ministry for Foreign Affairs. Archived from the original on 2013-07-01. Retrieved 2008-05-09.
- Doyle, Alister (January 14, 2005). "Iceland's hydrogen buses zip toward oil-free economy". Reuters. Archived from the original on July 24, 2012. Retrieved 2008-05-09.
- "What is HyFLEET:CUTE?". Archived from the original on 2008-02-24. Retrieved 2008-05-09.
- "Hydrogen vehicles and refueling infrastructure in India" (PDF). Archived (PDF) from the original on 2017-06-12. Retrieved 2019-09-28.
- Das, L (1991). "Exhaust emission characterization of hydrogen-operated engine system: Nature of pollutants and their control techniques". International Journal of Hydrogen Energy. 16 (11): 765–775. doi:10.1016/0360-3199(91)90075-T.
- "MNRE: FAQ". Archived from the original on 2019-09-21. Retrieved 2019-09-28.
- Overview of Indian Hydrogen Programme
- "H2 stations worldwide". Archived from the original on 2019-09-21. Retrieved 2019-09-28.
- "India working on more H2 stations". Archived from the original on 2019-09-21. Retrieved 2019-09-28.
- "Shell plans to open 1200 fuel stations in India, some of which may include H2 refilling". Archived from the original on 2019-09-22. Retrieved 2019-09-28.
- "Hydrogen Vehicles and Refueling Infrastructure in India" (PDF). Archived (PDF) from the original on 2017-06-12. Retrieved 2019-09-28.
- "Independent Mid-Term Review of the UNIDO Project: Establishment and operation of the International Centre for Hydrogen Energy Technologies (ICHET), TF/INT/03/002" (PDF). UNIDO. 31 August 2009. Archived from the original (PDF) on 1 June 2010. Retrieved 20 July 2010.
- "Worldwide NGV statistics". Archived from the original on 2015-02-06. Retrieved 2019-09-29.
- "Fuel Cell micro CHP". Archived from the original on 2019-11-06. Retrieved 2019-10-23.
- "Fuel cell micro Cogeneration". Archived from the original on 2019-10-23. Retrieved 2019-10-23.
- Agosta, Vito (July 10, 2003). "The Ammonia Economy". Archived from the original on May 13, 2008. Retrieved 2008-05-09.
- "Renewable Energy". Iowa Energy Center. Archived from the original on 2008-05-13. Retrieved 2008-05-09.
- UKCCC H2 2018, p. 36: "Near-term pursuit of hybrid heat pumps would not necessarily lead to a long-term solution of hybrid heat pumps with hydrogen boilers."
- UKCCC H2 2018, p. 79: The potential for bio-gasification with CCS to be deployed at scale is limited by the amount of sustainable bioenergy available. .... "
- UKCCC H2 2018, p. 33: production of biofuels, even with CCS, is only one of the best uses of the finite sustainable bio-resource if the fossil fuels it displaces cannot otherwise feasibly be displaced (e.g. use of biomass to produce aviation biofuels with CCS)."
|Wikimedia Commons has media related to Hydrogen economy.| | https://en.wikipedia.org/wiki/Hydrogen_economy | 21 |
29 | Despite their massive size, African forest elephants remain an elusive species, poorly studied because of their habitat in the dense tropical forests of West Africa and the Congo.
But the more we learn about them, the more we know that forest elephants are in trouble. Like their slightly larger and better-known cousins, the bush or savannah elephants (Loxodonta africana), forest elephants (L. cyclotis) face rampant poaching for their majestic ivory tusks and the growing bush meat trade. More than 80% of the population has been killed off in central Africa since 2002.
Today fewer than 100,000 forest elephants occupy their dwindling habitat. Conservationists worry they could soon head toward extinction if nothing is done.
And now a new threat has emerged: A study published this September found that climate change has resulted in an 81% decline in fruit production in one forest elephant habitat in Gabon. That’s caused the elephants there to experience an 11% decline in body condition since 2008.
But other research, also published in September, suggests a possible solution to both these crises.
Elephants and Carbon
It all boils down to carbon dioxide.
Forest elephants play a huge role in supporting the carbon sequestration power of their tropical habitats. Hungry pachyderms act as mega-gardeners as they roam across the landscape searching for bits of leaves, tree bark and fruit; stomping on small trees and bushes; and spreading seeds in their dung. This promotes the growth of larger carbon-absorbing trees, allowing forests to sequester more carbon from the air.
A July 2019 study by ecologist Fabio Berzaghi, a researcher at the Laboratory of Climate and Environmental Sciences in France, estimated that if forest elephants disappeared African forests would lose 7% of their biomass — a stunning 3 billion-ton loss of carbon.
And they’re not unique in this oversized role, although the closest equivalent lives in an entirely different type of habitat.
Last year a team of researchers led by Ralph Chami, an economist and assistant director at the International Monetary Fund, published a groundbreaking report on the monetary value of great whales, the 13 large species that include blue and humpback whales. The study accounted for whales’ enormous carbon-capturing functions, from fertilizing oxygen-producing phytoplankton to storing enormous amounts of carbon in their bodies when they die and sink to the seafloor. After also including tourism values, Chami’s study estimated each whale was worth $2 million, amounting to a staggering $1 trillion for the entire global population of whales.
“It’s a win-win for everyone,” Chami says of his economic models, which place a monetary value on the “natural capital” of wildlife, including the carbon sequestration activities of whales and elephants. “By allowing nature to regenerate, [elephants and whales] are far more valuable to us than if we extract them. If nature thrives, you thrive.”
Soon after the publication of Chami’s whale study, Berzaghi called and asked if the economist could run the numbers on forest elephants too. Chami agreed, and this September they published the results. The elephants, they calculated, are worth about $1.75 million each due to their forest carbon sequestration value alone.
Even more importantly, they found that if forest elephants were allowed to rebound to their former populations, their carbon-capturing value would jump to more than $150 billion.
And as climate change worsens, Chami says forest elephants will become even more valuable in terms of their carbon sequestration role — and as individuals. “The loss of their habitats has the impact of causing them more stress and to have fewer babies,” he says.
Turning Numbers Into Action
Despite these stunning, if theoretical, numbers, the researchers knew they needed a financial plan that could be implemented and sustained in the real world.
That starts with keeping elephants alive.
Poachers receive pennies on the dollar for elephant tusks that, once they finally reach consumers, can fetch prices of up to $40,000 on the illegal ivory market.
Chami says that pales in comparison to the $1.75 million an elephant could be worth for its carbon sequestration services, an amount that works out to roughly $80 a day over an elephant’s 60-year average lifetime.
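The per-day figure follows from straightforward arithmetic on the numbers quoted above. A minimal sketch of that check (the $1.75 million value and 60-year lifespan are the article's figures; the variable names and rounding are mine):

```python
# Back-of-the-envelope check of the "$80 a day" figure quoted above.
# Assumed inputs, taken from the article: $1.75 million of carbon value
# per forest elephant, spread over a 60-year average lifetime.
carbon_value_usd = 1_750_000
average_lifetime_years = 60

value_per_day = carbon_value_usd / (average_lifetime_years * 365)
print(f"~${value_per_day:.0f} per elephant per day")  # prints ~$80
```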
But how do you deliver that value to the people who live near elephants, including people who perhaps currently poach the animals? Chami turned to worldwide carbon markets, which encourage countries or companies to offset their greenhouse gases by investing in restorative measures in other parts of the world.
To activate that proposed value, Chami brought together a group of conservation, business, technology, and economic experts to develop a pilot project that could promote the protection of forest elephants in Africa. Together, they aim to create a legal framework and a secure financial distribution system that would use carbon markets to pay local communities to protect forest elephants. Individual elephants would be tracked using satellite technology to ensure their safety. As long as the elephants remain alive, communities could receive regular payments from a carbon market funded by corporations, individuals and governments to offset their pollution. Elephants could become “living assets” for countries that protect them.
Those assets could add up. Chami says the population of 1,500 elephants in Gabon’s Loango National Forest would provide $2.4 million in annual revenue.
“We need to build a market around living elephants,” Chami says. “The poachers can become the caretakers.”
That’s an exciting concept to wildlife experts, who have already had some success empowering communities through tourism. But for elephants that live in remote areas of African forests, tourism is less of an option. A market that places a value on elephants for their global carbon sequestration and climate contributions opens a new opportunity for support.
“It potentially changes how people think of the value of elephants,” said Ian Redmond, a renowned African conservationist who’s working with Chami and others to fund forest elephant protection efforts.
Redmond says he’s thrilled about this new plan because it incentivizes locals to protect their natural resources, not exploit them.
“It’s a gamechanger, not just for its ecological benefits, but for poverty reduction,” he says. “It’s a mechanism of change for people in the forest for people who before now only get money if they kill something. Now there’s an economic incentive to protect the elephants and their carbon-rich habitat so everyone benefits, locally and globally.”
The trick, the experts say, is getting money disbursed fairly and securely to local communities. Chami’s team says the revolution in new secure financial networks such as blockchain, the building block of digital monetary systems like Bitcoin, can help establish a monetary system that is more efficient and transparent than traditional banking systems. Africa’s ahead of the curve when it comes to dealing in these new digital monetary technologies, which, though not perfect, can be a positive anti-corruption tool in the murky world of international carbon markets and debt swaps sometimes linked to fraud and influence peddling.
Walid Al Saqqaf, a startup founder and technology expert who produces the weekly podcast Insureblocks, is working closely with Chami and conservationists like Redmond to tap into global carbon exchange markets and create a framework for local funding efforts. Al Saqqaf says the secure nature of blockchain technology can attract international governmental agencies as well as private sector banks and insurance companies who will increasingly want to offset carbon footprints by investing in carbon-sequestering natural resources. “We take a toxic asset such as carbon and transform it into carbon for social good,” Al Saqqaf says.
The group is setting up technology, legal and science working groups to develop a cohesive plan that could go into effect next year, although the conservation team says it’s too early to announce specifics of the pilot program. They say they are in early discussions with African governments hoping to protect their elephants as well as private enterprises interested in offsetting carbon emissions.
A Ticking Clock, But Forward Motion
Meanwhile the threats from both climate change and poaching continue. A study published this June found that, despite efforts to reduce the ivory trade, elephant poaching rates remain “near their peak and have changed little since 2011.”
The rapidly growing risks of extinctions, fueled in part by climate change, have pushed the team to quickly get their ground-breaking plan up and running. “We are in a race against time,” Al Saqqaf says.
While the work on elephants remains on the drawing board, Chami’s earlier study on the economic value of whales has already started generating real-world action. A G20 working group recommended this year that member countries take whales into account for their climate mitigation and ecosystem values. In Chile a national initiative is using Chami’s economic model to help design a project called the Blue Boat Initiative, a sophisticated satellite and sea-based plan supported by the Chilean government to protect whales from ship collisions.
“The valuation of ecosystem services is very relevant because it allows us to show the oceans are not only a raw material,” says Patricia Morales, general manager of Fundacion Cortes Solari, a private foundation that supports the Blue Boat Initiative and other climate and environmental issues. “We need to move from the current paradigm to the blue economy.”
Chami says the positive global response to their work is rewarding, but it’s far from complete. His team — which plans to apply this methodology to other species — knows the dire state of the natural world, and the challenges of creating new international funding and conservation models are huge. But Chami and his colleagues say that by “translating science into dollars,” researchers can build a powerful market-based mechanism that can reverse society’s incentive to destroy the natural world.
“We need to learn to live in balance with nature,” Chami says. “Our sustainability depends on protecting our ecosystems.” | https://therevelator.org/climate-change-forest-elephants/ | 21 |
24 | The earliest inhabitants of the mountains and plains that became Colorado were of course Native Americans, such as the Utes. The Northern Cheyenne and Southern Cheyenne moved west into the area that became Colorado in the early 1800s. When William Bent created his successful trading post in 1833, he relied on a partnership with the Southern Cheyenne. The Santa Fe Trail became a major conduit for trade after Mexico’s independence from Spain in 1821. Euro-American fur trappers in the north began interacting with native peoples in these decades as well. Colorado became a U.S. territory thirteen years after the U.S.-Mexico War (1846–1848). By 1850 Spanish-speaking settlers ventured into the San Luis Valley. They also moved onto the plains south of the Arkansas River. There they encountered various Ute bands as well as Cheyenne, Arapaho, Kiowa, and Comanche peoples. The establishment of the Overland Trail near the Nebraska and Wyoming borders led to more contact and violence among new settlers and native inhabitants during the 1860s. (Maps of trails can be found online). The Gold Rush of 1858–1859 launched a tremendous wave of Euro-American migration to new mountain settlements as well as Golden and Denver. African Americans and Asian immigrants also joined this movement into the lands that would become Colorado. These new settlers increased cultural and racial diversity in the territory.
This chapter focuses on that diversity by comparing the experiences of some settlers in Colorado in the 1800s. When Colorado became a U.S. territory in 1861, there were over 30,000 new residents and thousands of native peoples in the area. Most of the Euro-Americans moving to Colorado in these years were single, male, white miners. We explore the miner experience in a separate chapter. But here students can consider the lives of more diverse early residents to get a sense of the multiple paths that they pursued in attempting to earn a living in this territory and state. Many who came before the Gold Rush arrived by or had a connection to the Santa Fe Trail or New Mexico. This chapter invites students to consider a broader range of pioneers in different regions of Colorado.
Initially, students can explore the lives of William Bent and his Southern Cheyenne wife, Owl Woman. Moving from St. Louis, Bent established a fur trading post on the Arkansas River in the 1830s to supply travelers along the Santa Fe Trail. Owl Woman was part of a prominent Southern Cheyenne family. Their twelve-year marriage represented a connection between the white world of traders and fur trappers and the Cheyenne world. Their four children also lived between these two worlds, even as the Colorado Gold Rush and the tens of thousands of white miners and settlers began building cities and roads and farms in the territory. As conflicts between the Cheyenne and white settlers increased in the 1860s, William Bent attempted to make peace. He negotiated between the two groups in hopes of avoiding open warfare. His efforts failed, however, and his children were caught in the violence and forced to choose sides.
Next, this chapter includes primary sources on San Luis Valley pioneers. These 1850s settlers from New Mexico brought important Hispanic traditions to this southern Colorado region. They organized plazas with farm land stretched out in narrow bands along key waterways in this arid region and dug the first irrigation ditches in the territory. As Spanish speakers tied to the Catholic culture of Mexico, these people established different settlement patterns from other pioneers.
Students will also encounter sources on diverse women pioneers. An ex-slave from Kentucky, Clara Brown, joined early gold seekers and helped establish Central City. She arrived later than Bent and the San Luis Valley pioneers, but her community established important patterns that recurred in other gold and silver rushes across the mountains. White women like Emily Hartman and Ella Bailey confronted important challenges and demonstrated significant resilience in their pioneering experiences. Last, students can read about a ten-year-old child’s perspective on moving into his first Colorado home in 1906. All of these experiences can help students answer the central question: What did it take to be a pioneer?
As Colorado’s recent Pioneer license plate indicates, many people think of pioneers as white families moving west in prairie schooners. Or the term may evoke an image of the lone, male prospector roaming the Rocky Mountains. The historical experience did include those people, but Colorado included so much more. With this chapter, students can explore this question about pioneering from multiple perspectives and begin to reconstruct the diverse lives of early settlers. They also can begin to compare source fragments to reconstruct pioneer life.
Sources for Students:
Doc. 1: William Bent Photograph, 1867
William Bent (1809-1869) was born in St. Louis, Missouri, one of eleven children. Following his older brother’s example, William moved west along the Santa Fe Trail in 1829 in search of adventure and a chance to make money. This trail connected St. Louis, Missouri, with Santa Fe and ran through lands that later became Colorado. Here is a recently created map of that trail.
William Bent developed a friendship with the Southern Cheyenne. In 1833 he built a fort along the Arkansas River. This fort allowed Bent to supply travelers along the Santa Fe Trail. Cheyenne and Arapaho peoples also traded at the fort.
To secure his connection to the Cheyenne, William Bent married the daughter of chief White Thunder in 1835. Owl Woman, or Mis-stan-sta, and William Bent lived in the fort part time. They also lived with the Cheyenne in their buffalo-hide tipis. Bent’s connections to the Southern Cheyenne and Southern Arapaho helped make his trading post along the Santa Fe Trail very successful.
Here is a photograph of William Bent (with a hat) sitting between Southern Arapaho chief Little Raven and his sons. Little Raven holds his granddaughter on his lap:
[Source: History Colorado. Available online at: http://www.coloradovirtuallibrary.org/digital-colorado/colorado-histories/beginnings/william-bent-frontiersman/]
- How are the clothes of each person alike or different? Why might the clothes be alike or different?
- Can we tell from the picture if Bent was friends with these other men?
- Who could the men at the top of the photograph be? What details make you think so? Why might they be in this picture?
- Based on this picture, what can we learn about William Bent and his relations with the Southern Arapahos?
Documents 2 and 3 about Bent’s Fort should be paired
Doc. 2: Photo of courtyard of Bent’s Fort after its reconstruction in 1976.
For sixteen years, Bent’s Fort was the only major Euro-American settlement between Missouri and New Mexico. Bent built the fort on the north bank of the Arkansas River near the current town of La Junta. Travelers on the Santa Fe Trail would stop here for supplies. They met fur trappers and different Native Americans in the fort. In 1849 Bent’s Fort burned to the ground. The National Park Service rebuilt the fort more than 125 years later. Builders used paintings and drawings of the original fort along with parts of the fort that were still there. Here is a photograph of the inside of the rebuilt fort:
[Source: Library of Congress photograph: https://www.loc.gov/item/2015632789/]
Doc. 3: Visitor to Bent’s Fort in 1839
The next document gives us a description of life in Bent’s Fort from a visitor who met Bent. Matthew Field was a newspaper reporter. He stopped at Bent’s Fort in 1839. He described what he saw inside the fort:
Two hundred men might [sleep] . . . in the fort, and three or four hundred animals can be shut up in the corral. Then there are the store rooms, the extensive wagon houses, in which to keep the enormous heavy wagons used twice a year to bring merchandise from [St. Louis], and to carry back the skins of buffalo and the beaver….[The many rooms enclosed in the walls] strike the wanderer…as though an ‘air built castle’ had dropped to earth…in the midst of a vast desert.
[Source: John E. Sunder, ed., Matt Field on the Santa Fe Trail (Norman: University of Oklahoma Press, 1960), 144. ]
Corral: a place to keep large animals
Merchandise: items to buy
Strike: appear to
Vast: great big
- How many men and how many animals were there at Bent’s Fort when Field visited? How might that smell, at a time before there was indoor plumbing for showers and toilets?
- According to Matthew Field, how might it feel to stay at this fort?
- What noises might a visitor to Bent’s fort in 1839 have heard?
- With these two sources, we can imagine what the inside of the fort might have looked like. What sort of work or jobs do you think people did in the fort? Why?
- How did Bent make money to keep this fort going?
- Find Bent’s Fort (or La Junta) on a map of Colorado. Imagine riding on horseback or walking from St. Louis, Missouri to Taos, New Mexico. This fort was the only settlement along that route. How long might it take you to reach Bent’s Fort from St. Louis if you traveled by horse at 10 miles per hour?
- How did William Bent succeed as a pioneer?
San Luis Valley Pioneers: The next two documents are paired together
Doc. 4: Photo of María Eulogia Gallegos, 1860s
María Eulogia Gallegos was a pioneer settler in the San Luis Valley. Here is a photograph of María Eulogia Gallegos from the 1860s:
[Source: Denver Public Library Digital Collection. Call # AUR-2247]
She came from Taos, New Mexico and married Dario Gallegos in 1850. They were two of the founders of the town of San Luis in the early 1850s. They were in Colorado before the Gold Rush. She and Dario had nine children.
Together they started a store that sold merchandise such as groceries and hardware. At first they supplied the store by sending horse-drawn wagons over the Santa Fe Trail from St. Louis, Missouri. Those wagons would have passed the site of Bent’s Fort. They sold to American soldiers at Fort Garland to the north. Ute Indians and San Luis settlers also traded at the store.
One of María and Dario’s daughters married the son of another San Luis settler, Arcadio Salazar. After Dario Gallegos died in 1883, this daughter and son-in-law ran the store. Their son, Delfino Salazar, continued the tradition. The store nearly closed in 2017 after more than 150 years in business.
Doc. 5: Interview with María Eulogia Gallegos’s son and grandson, 1933
In 1933 the US government paid for writers to interview Colorado pioneers. Charles Gibson interviewed Gaspar Gallegos and Delphino Salazar. They were the son and grandson of María Eulogia Gallegos and her husband Dario. The two men told Gibson about the early days of San Luis and their ancestors, Dario and María Eulogia Gallegos:
“When Dario died he left a great deal of [farm] property including twenty-six thousand head of sheep, and although Mrs. Gallegos could neither read nor write she became a very able manager. Gaspar recalled that when he was a small boy his mother raised a great many chickens and he used to go with her to [US Army] Fort Garland, where they were sold to the officers. She would take only silver, one piece of money for one chicken, either twenty-five cents or ten cents.”
“According to [María’s grandson], San Luis, Colorado and Boston, Massachusetts, are the only towns in the United States having a ‘Common,’ especially set aside for the use of the people. Carlos Beaubien donated the meadow just west of the town to the people . . . . In this way a convenient pasture was assured [and shared] for the horses and milk cows of the San Luis residents.”
[Source: Civil Works Administration Pioneer Interview Collection, volume 349, interview 28, pp. 105–6, recorded by Charles E. Gibson Jr. in 1933 or 1934. Available online at HistoryColorado.org]
assured: available for sure
Questions on these sources:
- What do you notice in the photo of María Gallegos?
- According to María Gallegos’s son Gaspar, how did his mother make money? Was it a problem that she could not read or write?
- The town where María lived, San Luis, had a “common.” What could San Luis residents do with that space? Why wasn’t that area owned by just one person?
- Farming and ranching were important to many in San Luis. What kind of work did María, Dario, and their children likely do to keep the farm and ranch healthy and successful?
- How did María Gallegos succeed as a pioneer?
Doc. 6: Record of the founding of Guadalupe, CO (1933)
In 1933, the US government paid writers to interview Colorado pioneers and their children. One of these writers was Charles Gibson. The son of a pioneer settler, Jesus Velasquez, wrote down a short history of the founding of Guadalupe for Gibson. Velasquez probably heard this story and learned the list of the town founders from his father, Vincente Velasquez. The story is not complete though. You will have to help fill in some missing information.
Founders of the town of Guadalupe, 1854
| Founder | Place of origin |
| --- | --- |
| Jose Maria Jaquez, leader | El Llanito, New Mexico |
| Vincente Velasquez, 15 yrs. old | El Llanito, New Mexico |
| Jesus Velasquez | La Cueva, New Mexico |
| Jose Manuel Vigil | La Cueva, New Mexico |
| Jose Francisco Lucero | La Servilleta, New Mexico |
| Juan Nicolas Martinez | La Servilleta, New Mexico |
| Santiago Manchego | La Cueva, New Mexico |
| Juan de Dios Martinez | La Cueva, New Mexico |
| Antonio Jose Chavez | La Servilleta, New Mexico |
| Juan Antonio Chavez | Ojo Caliente, New Mexico |
| Ilario Atencio | Ojo Caliente, New Mexico |
| Juan de la Cruz Espinoza | Ojo Caliente, New Mexico |
“All of the above named persons came to settle on what is called Conejos River. They came in August 1854 and stopped about 5 miles west of Guadalupe. They build a ditch from this point which they called El Cedro Redondo . . . . They build this Ditch for about 8 to 10 miles long to [a place] they called Sevilleta. Then they went back to their homes in New Mexico . . . to get ready . . . so that they could stay [in Colorado] when they came back.
Livestock: animals raised for food or work
“They came back in October 1854. They stopped at Guadalupe [and] here they built a town. They built it in a circle with only two openings: one on the south and one on the north. Here they put what livestock they had for fear of the Indians which were in great numbers at that time. They had come on ox carts and burros [from New Mexico]. They brought with them wheat, corn, flour, beans, cattle, horses, sheep, hogs, and chickens.
“But in March 1855 they had bad luck. One morning as they drove their livestock to pasture the Indians came from an ambush and [took] all the animals that the people had. The people had no arms to fight with and the Indians were too many for the people. So the Indians took all of the people’s animals.”
[Source: Civil Works Administration Pioneer Interview Collection, volume 349, interview 10, pp. 40–41, recorded by Charles E. Gibson Jr. in 1933 or 1934. Available online: https://www.historycolorado.org/oral-histories]
- We have a story here about the creation of a town called Guadalupe in the San Luis Valley and a list of names. Why are there names of men listed at the beginning? What might the names of towns to the right of those men mean?
- What details do we learn about the creation of Guadalupe from this story?
- Why do you think these settlers spent time digging a ditch before they built the town?
- How did these San Luis settlers hope to thrive in this new town?
- What bad luck happened to them?
- How might it help to create a new farm with a community of people, rather than doing it alone or just with one family?
The next two sources focus on a pioneer named Clara Brown
Doc. 7: Photograph of Clara Brown, about 1875
Clara Brown was born a slave in Virginia around 1800. Here is a photograph of Clara Brown around 1875:
[Source: Denver Public Library Digital Collections. Call #Z-275]
Growing up a slave, Clara Brown was married at eighteen and had four children. Her family was broken up when her white master sold her husband and children to other white owners. She spent the rest of her life trying to find her family.
Clara became free when her master, George Brown, died in 1856. Hearing that one of her daughters had moved West, Brown eventually walked with a covered wagon train to Denver in 1859. She was one of the first African Americans in Colorado. Brown opened a laundry service in the gold rush town of Central City. After a few years, she had saved thousands of dollars. She often helped homeless and sick miners who needed a place to stay. She was admired by the mostly white population of Colorado Territory for her generosity and kindness.
Docs. 8 A and B: Newspaper reports of fire in Central City, 1873
Clara Brown did not leave behind her own story of her life. She did leave a few sources of information that we can use to understand her life and experience. One place we can look for information about Clara Brown is the newspapers in the towns where she lived. Here are two newspaper stories about a fire that mention Clara Brown. Consider how reading these newspapers can help us find out more about her.
Story 8A: “The line of buckets was extended up Lawrence Street while another was brought from a shaft upon the hill and a desperate effort made to save the building which had become ignited from the burning chapel on the west. This however failed and three small buildings belong to Aunt Clara Brown (colored) were soon wrapped in flames.”
[Source: “Our City in Flames!” Daily Register Call, January 28, 1873.]
Line of buckets was extended up: people in town formed a line to pass buckets of water
Shaft: opening of a mine
Story 8B: “The losses, as near as we have been able to [tell], are as follows….Clara Brown, three houses, $1800.”
[Source: “The Late Fire in Central,” Daily Colorado Miner, January 29, 1873.]
- Look at the photo of Clara Brown. What five words describe her?
- Does a photo like this help us imagine what Brown was like? Why or why not?
- What do we learn about Central City in 1873 from these newspaper stories in the Daily Register Call?
- How many houses did Clara Brown own, at least?
- Can we tell whether her laundry business had been successful? Why or why not?
- What might she do after this event in 1873? Might she be discouraged?
- How could we find out what happened to Clara Brown after this fire?
- How did Clara Brown succeed as a pioneer?
Doc: 9: Photograph of Central City in 1875
[Source: Denver Public Library, call # X-11588. Available online at: http://digital.denverlibrary.org/cdm/singleitem/collection/p15330coll22/id/9632/rec/290]
- This photograph of Central City was likely taken after the fire. This was the main street. What might it feel like to walk that street in 1875?
- This was a town that supported gold miners in the surrounding hills. What kinds of stores and businesses might those miners need?
- Do you see any trees on the hills around the town? How might town settlers use trees in this area?
- How would you describe this town that Clara Brown lived in?
Doc. 10: Interview with Emily Sudbury Hartman, 1934
Emily Sudbury Hartman was born in Utah to Mormon parents in 1859. Her mother divorced Emily’s father and later married a U.S. Army soldier at Ft. Bridger. Emily was only seven years old in 1866 when her family moved to Denver. After she finished school, Emily trained as a nurse. She later helped run a sanitarium, a kind of hospital, with her husband, Flavius Josephus Hartman. He was raised in Kansas and came to Colorado to join his father who had come for the Gold Rush.
In 1934, Emily sat down with a US government interviewer and told him about their early days in Colorado:
“[In 1866 Emily’s family] was just becoming accustomed to the use of kerosene lamps instead of candles. [Emily’s] mother purchased one of the first hand sewing machines put on the market. Apples were brought into Denver from Missouri in Prairie Schooners, and sold at very high prices.
Accustomed to: used to, familiar with
Prairie Schooners: covered wagons
Tallow: animal fat
She was married to Flavius Josephus Hartman in 1877, after going [with him] to the San Luis Valley . . . . [Her husband’s father] had come into Colorado in 1859 with the Pike’s Peak gold rush, leaving his wife and several children on a ranch in Kansas. One time when the mother had gone to town [for food], a severe blizzard came up and she could not get back for three days. The three boys that were left behind [stayed alive] by burning the fence and grinding corn in the coffee grinder, making griddle cakes and cooking them in grease from tallow candles.
This little family suffered severe hardships because they were deprived of . . . money sent by the father from Colorado [when a mail clerk stole it] . . . . At the age of sixteen years [Flavius Josephus] was washing dishes and waiting tables in a restaurant in Denver. At that time eggs were twenty-five cents apiece and flour was twenty dollars a sack.”
Severe: very hard
Hardships: difficult challenges
Deprived of: robbed of
[Source: CWA Pioneer Interview of Emily Sudbury Hartman by Arthur W. Monroe, (February 3, 1934) v. 357 pages 6-8. Available online: https://www.historycolorado.org/oral-histories]
- How far did apples have to travel by horse-drawn wagon to reach Denver in 1866?
- Emily describes a big challenge her husband faced when he was a child. What happened to his family?
- What kinds of skills might her husband have developed as a child in Kansas before he came to Colorado?
- Who might Emily Sudbury Hartman have met in the San Luis Valley?
- How might we find out whether eggs or flour were expensive back then?
- Why do you think this Colorado pioneer story mentioned food so often?
Doc. 11: Ella Bailey Diary, 1869
Ella Bailey ran a boarding house near Greeley in 1869. That means she had to cook and clean and wash clothing for men who rented rooms from her. She kept a diary and recorded these entries about some of her days in the winter of 1869:
“Sun. Feb. 7: “[T]he days are 48 hours long in Colorado and Sundays seventy two hours long.”
“March 2: Baked fifty one pies. Tired as a beggar.
“March 3: Baked twenty three pies and three thousand cookies and ginger snaps.
“March 6: If men was company to me like friends at home, I would never get lonesome.
“April 8: I can’t help but wish I had never seen Colorado. It is lonesome and desolate.
[Source: Ella Bailey Papers (1869) History Colorado, Mini-MSS #28]
Beggar: A very poor person who begs for change or food
Desolate: a place without people.
- According to Ella Bailey, how long are the days in Colorado? How long are Sundays? Why would she say something that cannot be true?
- How much cooking and baking did Ella do on the first two days she wrote about? How did she feel afterward?
- Does she have much company at the boarding house? Why or why not?
- Why would Ella stay and keep working at this place? Do you think that she is alone?
- What did Ella Bailey do to succeed as a pioneer?
Doc. 12: Ralph Moody, 1950
Ralph Moody moved as a boy from New England to a ranch near Littleton in 1906. His family of seven hoped to find a working farm when they first arrived. Here is how Ralph described what they first saw when they reached their new ranch:
“We could see our new house from a couple of miles away . . . . [I]t looked like a little dollhouse sitting on the edge of a great big table . . . . As we came nearer, it looked less like a dollhouse and more like just what it was: a little three-room cottage . . . . The chimney was broken off at the roof and most of the windows were smashed. When we turned off the wagon road, a jack rabbit leaped out from under the house and raced away . . . . There wasn’t much to see [inside], except that the floor was covered with broken glass, and plaster that had fallen off the walls and ceiling . . . .
Cottage: a small home
Plaster: a material spread over walls and ceilings to make a hard cover
Bargain: an item bought at a lower price than expected
Privy: a toilet outside the house
“[My father and I] got up before daylight every morning for the next two weeks . . . First we’d pick up any of the bargains Mother had found for the house, then buy secondhand lumber, plaster, glass, and other things we needed on our way out to the ranch. And father would never stop working till it was so dark he couldn’t see to drive a nail.
“[After about a week] Father had built a new chimney, patched the places where the plaster had fallen off, put glass in all the windows, and made the front and back steps for the house. My part of the job was to sweep up all the broken glass and plaster . . . . There was nothing left to build but the privy.”
[Source: Ralph Moody, Little Britches (1950) pps. 3-5.]
- How did Ralph’s house look when he and his family first arrived there?
- Why do you think it looked like that?
- Why do you think his mother and father didn’t just turn around and go home right off?
- What did Ralph and his father do to help make their home liveable?
- What kinds of skills might Ralph have learned from watching and working with his father?
- How would those skills help a pioneer?
Doc. 13: Pioneer License Plate, 1999
Remembering the experiences of Colorado pioneers has been important for many generations in the state. To take only a recent example: starting in 1999 the Colorado Department of Motor Vehicles introduced a new license plate to allow residents to celebrate their ancestors who had arrived in the state more than one hundred years earlier.
At first, people who wanted this license plate had to prove their family connection to a Colorado resident from at least 100 years ago. Now anyone can request this license plate if they pay an extra fee.
- What images do you see on this license plate?
- Why do you think the state officials chose those images?
- What other images of pioneer life could appear on a license plate like this?
- If this Pioneer license plate included key words to describe pioneers, what would they be?
How to Use these Sources:
OPTION 1: Comparing Pioneer Communities.
Beginning with the William Bent documents, students could compare these first three sources to recreate some key aspects of his pioneer life. His story can help situate the white pioneer experience in the context of Native Americans. Students could answer the question: what kind of community did Bent help create along the Arkansas River in the 1830s? After reviewing these Bent sources, students could turn to those dealing with the San Luis Valley. Here students can read and interpret sources about early Hispanic settlers. After exploring each of these sources individually, students could describe the kind of community that emerged in the San Luis Valley in the 1850s. They could then compare that to the Bent’s Fort community. Maps from the previous chapter could be used to find these places.
OPTION 2: Pioneers after the Gold Rush.
The remaining sources describe pioneers who came to Colorado during or after the Gold Rush. Though many pioneers were single, male, white miners, Clara Brown offers an important alternative experience. Students can review the sources to begin to understand how this hard-working African American woman became successful in a mining boom town even though she was not a miner. We have little information directly from Clara Brown, and so must instead look at fragments or pieces of information to reconstruct her life. Included here are two newspaper stories about a fire that help us understand her real estate holdings and success as a businesswoman. The Clara Brown, Emily Hartman, and Ella Bailey sources give us a window into the lives of pioneering women in Colorado. The Ralph Moody source gives a child’s perspective on pioneering, though later than the Gold Rush. After exploring these individual sources, students could compare them to create a list of hardships and resilient characteristics. All these pioneers displayed a kind of toughness and determination, but they did so in different ways.
OPTION 3: Pioneer License Plates.
After reviewing all the previous sources, students can turn to the pioneer license plate. They can consider what images help tell the story of a Colorado pioneer. After reviewing the state of Colorado’s official plate, students could create their own plates to honor one of the pioneers from this chapter or to remember key aspects of the pioneer experience. Or their individual pioneer license plate could include a list of key characteristics or images to reflect the diversity of these experiences.
Additional Secondary Sources for Younger Readers
There are many short biographies of Colorado pioneers available for elementary-level readers. Some recent examples include:
- Cheryl Beckwith, William Bent: Frontiersman (Palmer Lake, CO: Filter Press, 2011)
- Emerita Romero-Anderson, José Dario Gallegos: Merchant of the Santa Fe Trail (Palmer Lake, CO: Filter Press, 2007)
- Suzanne Frachetti, Clara Brown: African American Pioneer (Palmer Lake, CO: Filter Press, 2011)
- The online Colorado Encyclopedia also includes short biographies of many pioneers (http://coloradoencyclopedia.org/).
Periods in English history: the Tudor period was preceded by the Late Middle Ages and followed by the Jacobean era.
The Tudor period occurred between 1485 and 1603 in England and Wales and includes the Elizabethan period during the reign of Elizabeth I until 1603. The Tudor period coincides with the dynasty of the House of Tudor in England whose first monarch was Henry VII (b.1457, r.1485–1509). Historian John Guy (1988) argued that "England was economically healthier, more expansive, and more optimistic under the Tudors" than at any time since the Roman occupation.
Following the Black Death and the agricultural depression of the late 15th century, the population began to increase. It was fewer than 2 million in 1485 and had grown to about 4 million by 1600. The growing population stimulated economic growth, accelerated the commercialisation of agriculture, increased the production and export of wool, encouraged trade, and promoted the growth of London.
The high wages and abundance of available land seen in the late 15th century and early 16th century were replaced with low wages and a land shortage. Various inflationary pressures, perhaps due to an influx of New World gold and a rising population, set the stage for social upheaval with the gap between the rich and poor widening. This was a period of significant change for the majority of the rural population, with manorial lords beginning the process of enclosure of village lands that previously had been open to everyone.
The Reformation transformed English religion during the Tudor period. The four sovereigns Henry VIII, Edward VI, Mary I, and Elizabeth I had entirely different approaches, with Henry VIII replacing the pope as the head of the Church of England but maintaining Catholic doctrines, Edward imposing a very strict Protestantism, Mary attempting to reinstate Catholicism, and Elizabeth arriving at a compromise position that defined the not-quite-Protestant Church of England. It began with the insistent demands of Henry VIII for an annulment of his marriage that Pope Clement VII refused to grant.
Historians agree that the great theme of Tudor history was the Reformation: the transformation of England from Catholicism to Protestantism. The main events, constitutional changes, and players at the national level have long been known, and the major controversies about them largely resolved. Historians until the late 20th century thought that the causes were a widespread dissatisfaction, or even disgust, with the evils, corruptions, failures, and contradictions of the established religion, setting up an undertone of anti-clericalism that indicated a ripeness for reform. A secondary influence was the intellectual impact of certain English reformers, such as the long-term impact of John Wycliffe (1328–1384) and his “Lollardy” reform movement, together with a stream of Reformation treatises and pamphlets from Martin Luther, John Calvin, and other reformers on the continent. The interpretation offered by Geoffrey Elton in 1960 is representative of this orthodox view.
Social historians after 1960 investigated English religion at the local level and discovered that dissatisfaction had not been so widespread. The Lollardy movement had largely expired, and the pamphleteering of continental reformers hardly reached beyond a few scholars at the University of Cambridge—King Henry VIII had vigorously and publicly denounced Luther's heresies. More important, the Catholic Church was in a strong condition in 1500. England was devoutly Catholic and loyal to the pope; local parishes attracted strong financial support, and religious services were quite popular both at Sunday Mass and at family devotions. Complaints about the monasteries and the bishops were uncommon. The kings backed the popes, and by the time Luther appeared on the scene England was among the strongest supporters of orthodox Catholicism and seemed a most unlikely place for a religious revolution.
Henry VII, founder of the House of Tudor, became King of England by defeating King Richard III at the Battle of Bosworth Field, the culmination of the Wars of the Roses. Henry engaged in a number of administrative, economic and diplomatic initiatives. He paid very close attention to detail and, instead of spending lavishly, concentrated on raising new revenues. His new taxes were unpopular, and when Henry VIII succeeded him, he executed Henry VII's two most hated tax collectors.
Henry VIII, flamboyant, energetic, militaristic and headstrong, remains one of the most visible kings of England, primarily because of his six marriages, all of which were designed to produce a male heir, and his heavy retribution in executing many top officials and aristocrats. In foreign policy, he focused on fighting France—with minimal success—and had to deal with Scotland, Spain, and the Holy Roman Empire, often with military mobilisation or actual, highly expensive warfare that led to high taxes. The chief military success came over Scotland.

The main policy development was Henry's taking full control of the Church of England. This followed from his break from Rome, which was caused by the refusal of the Pope to annul his original marriage. Henry thereby introduced a very mild variation of the Protestant Reformation. There were two main aspects. First, Henry rejected the Pope as the head of the Church in England, insisting that national sovereignty required the absolute supremacy of the king. Henry worked closely with Parliament in passing a series of laws that implemented the break. Englishmen could no longer appeal to Rome; all decisions were to be made in England, ultimately by the King himself, and in practice by top aides such as Cardinal Wolsey and Thomas Cromwell. Parliament proved highly supportive, with little dissent. The decisive moves came with the Act of Supremacy in 1534, which made the king the protector and only supreme head of the church and clergy of England. After Henry imposed a heavy fine on the bishops, they nearly all complied. The laws of treason were greatly strengthened so that verbal dissent alone was treasonous. There were some short-lived popular rebellions that were quickly suppressed. At the elite level, the aristocracy and the Church were largely supportive. The most visible refusals came from Bishop Fisher and Chancellor Thomas More; both were executed. Among the senior aristocrats, trouble came from the Pole family, which supported Reginald Pole, then in exile in Europe. Henry destroyed the rest of the family, executing its leaders and seizing all its property.

The second aspect was the seizure of the monasteries. The monasteries operating religious and charitable institutions were closed, the monks and nuns were pensioned off, and the valuable lands were sold to friends of the King, thereby producing a large, wealthy gentry class that supported Henry. In terms of theology and ritual there was little change, as Henry wanted to keep most elements of Catholicism and detested the "heresies" of Martin Luther and the other reformers.
Biographer J.J. Scarisbrick says that Henry deserved his traditional title of "Father of the English navy." It became his personal weapon. He inherited seven small warships from his father, and added two dozen more by 1514. In addition to those built in England, he bought up Italian and Hanseatic warships. By March 1513, he proudly watched his fleet sail down the Thames under command of Sir Edmund Howard. It was the most powerful naval force to date in English history: 24 ships led by the 1,600-ton "Henry Imperial"; the fleet carried 5,000 combat marines and 3,000 sailors. It forced the outnumbered French fleet back to its ports, took control of the English Channel, and blockaded Brest. Henry was the first king to organise the navy as a permanent force, with a permanent administrative and logistical structure, funded by tax revenue. His personal attention was concentrated on land, where he founded the royal dockyards, planted trees for shipbuilding, enacted laws for inland navigation, guarded the coastline with fortifications, set up a school for navigation and designated the roles of officers and sailors. He closely supervised the construction of all his warships and their guns, knowing their designs, speed, tonnage, armaments and battle tactics. He encouraged his naval architects, who perfected the Italian technique of mounting guns in the waist of the ship, thus lowering the centre of gravity and making it a better platform. He supervised the smallest details and enjoyed nothing more than presiding over the launching of a new ship. He drained his treasury on military and naval affairs, diverting the revenues from new taxes and the sales of monastery lands.
Elton argues that Henry indeed built up the organisation and infrastructure of the Navy, but that it was not a useful weapon for his style of warfare. It lacked a useful strategy, though it did serve for defence against invasion and for enhancing England's international prestige.
Professor Sara Nair James says that in 1515–1529 Cardinal Thomas Wolsey "would be the most powerful man in England except, possibly, for the king." Historian John Guy explains Wolsey's methods:
Operating with the firm support of the king, and with special powers over the church given by the Pope, Wolsey dominated civic affairs, administration, the law, the church, and foreign-policy. He was amazingly energetic and far-reaching. In terms of achievements, he built a great fortune for himself, and was a major benefactor of arts, humanities and education. He projected numerous reforms, but in the end English government had not changed much. For all the promise, there was very little achievement of note. From the king's perspective, his greatest failure was an inability to get a divorce when Henry VIII needed a new wife to give him a son who would be the undisputed heir to the throne. Historians agree that Wolsey was a disappointment. In the end, he conspired with Henry's enemies, and died of natural causes before he could be beheaded.
Historian Geoffrey Elton argued that Thomas Cromwell, who was Henry VIII's chief minister from 1532 to 1540, not only removed control of the Church of England from the hands of the Pope, but transformed England with an unprecedented modern, bureaucratic government. Cromwell (1485–1540) replaced medieval government-as-household-management. Cromwell introduced reforms into the administration that delineated the King's household from the state and created a modern administration. He injected Tudor power into the darker corners of the realm and radically altered the role of the Parliament of England. This transition happened in the 1530s, Elton argued, and must be regarded as part of a planned revolution. Elton's point was that before Cromwell the realm could be viewed as the King's private estate writ large, where most administration was done by the King's household servants rather than separate state offices. By masterminding these reforms, Cromwell laid the foundations of England's future stability and success. Cromwell's luck ran out when he picked the wrong bride for the King; he was beheaded for treason. More recently, historians have emphasised that the king and others played powerful roles as well.
The king had an annual income of about £100,000, but he needed much more in order to suppress rebellions and finance his foreign adventures. In 1533, for example, military expenditures on the northern border cost £25,000, while the 1534 rebellion in Ireland cost £38,000. Suppressing the Pilgrimage of Grace cost £50,000, and the king's new palaces were expensive. Meanwhile, customs revenue was slipping. The Church had an annual revenue of about £300,000; a new tax of 10% was imposed which brought in about £30,000. To get even larger sums it was proposed to seize the lands owned by monasteries, some of which the monks farmed and most of which was leased to local gentry. Taking ownership meant the rents went to the king. Selling the land to the gentry at a bargain price brought in £1 million in one-time revenue and gave the gentry a stake in the administration. The clerical payments from First Fruits and Tenths, which previously went to the pope, now went to the king. Altogether, between 1536 and Henry's death, his government collected £1.3 million; this huge influx of money caused Cromwell to change the Crown's financial system to manage the money. He created a new department of state and a new official to collect the proceeds of the dissolution and the First Fruits and Tenths. The Court of Augmentations and the other new departments meant a growing number of officials, which made the management of revenue a major activity. Cromwell's new system was highly efficient, with far less corruption or secret payoffs or bribery than before. Its drawback was the multiplication of departments whose sole unifying agent was Cromwell; his fall caused confusion and uncertainty; the solution was even greater reliance on bureaucratic institutions and the new Privy Council.
In dramatic contrast to his father, Henry VIII spent heavily, in terms of military operations in Britain and in France, and in building a great network of palaces. How to pay for it remained a serious issue. The growing number of departments meant many new salaried bureaucrats. There were further financial and administrative difficulties in 1540–58, aggravated by war, debasement, corruption and inefficiency, which were mainly caused by Somerset. After Cromwell's fall, William Paulet, 1st Marquess of Winchester, the Lord Treasurer, produced further reforms to simplify the arrangements, reforms which united most of the crown's finance under the exchequer. The courts of general surveyors and augmentations were fused into a new Court of Augmentations, and this was later absorbed into the exchequer along with the First Fruits and Tenths.
At the end of his reign, Henry VII's peacetime income was about £113,000, of which customs on imports amounted to about £40,000. There was little debt, and he left his son a large treasury. Henry VIII spent heavily on luxuries, such as tapestries and palaces, but his peacetime budget was generally satisfactory. The heavy strain came from warfare, including building defences, building a navy, suppressing insurrections, warring with Scotland, and engaging in very expensive continental warfare. Henry's Continental wars won him little glory or diplomatic influence, and no territory. Nevertheless, warfare from 1511 to 1514, with three large expeditions and two smaller ones, cost £912,000. The Boulogne campaign of 1544 cost £1,342,000 and the wars against Scotland £954,000; the naval wars cost £149,000 and large sums were spent to build and maintain inland and coastal fortifications. The total cost of war and defence between 1539 and 1547 was well over £2,000,000, although the accounting procedures were too primitive to give an accurate total. Adding it all up, approximately 35% came from taxes, 32% from selling land and monastery holdings, and 30% from debasing the coinage. The cost of war in the short reign of Edward VI was another £1,387,000.
After 1540, the Privy Coffers were responsible for 'secret affairs', in particular for the financing of war. The Royal Mint was used to generate revenue by debasing the coinage; the government's profit in 1547–51 was £1.2 million. However, under the direction of regent Northumberland, Edward's wars were brought to an end. The mint no longer generated extra revenue after debasement was stopped in 1551.
Although Henry was only in his mid-50s, his health deteriorated rapidly in 1546. At the time the conservative faction, led by Bishop Stephen Gardiner and Thomas Howard, 3rd Duke of Norfolk, and opposed to religious reformation, seemed to be in power and was poised to take control of the regency of the nine-year-old boy who was heir to the throne. However, when the king died, the pro-reformation factions suddenly seized control of the new king, and of the Regency Council, under the leadership of Edward Seymour. Bishop Gardiner was discredited, and the Duke of Norfolk was imprisoned for all of the new king's reign.
The short reign of Edward VI marked the triumph of Protestantism in England. Somerset, the elder brother of the late Queen Jane Seymour (married to Henry VIII) and uncle to King Edward VI, had a successful military career. When the boy king was crowned, Somerset became Lord Protector of the realm and in effect ruled England from 1547 to 1549. Seymour led expensive, inconclusive wars with Scotland. His religious policies angered Catholics. Purgatory was rejected, so there was no more need for prayers to saints, relics, and statues, nor for masses for the dead. Some 2,400 permanent endowments called chantries had been established that supported thousands of priests who celebrated masses for the dead, or operated schools or hospitals in order to earn grace for the soul in purgatory. The endowments were seized by the Crown in 1547. Historians have contrasted the efficiency of Somerset's takeover of power in 1547 with the subsequent ineptitude of his rule. By autumn 1549, his costly wars had lost momentum, the crown faced financial ruin, and riots and rebellions had broken out around the country. He was overthrown by his former ally John Dudley, 1st Duke of Northumberland.
Until recent decades, Somerset's reputation with historians was high, in view of his many proclamations that appeared to back the common people against a rapacious landowning class. In the early 20th century this line was taken by the influential A. F. Pollard, to be echoed by Edward VI's leading biographer W. K. Jordan. A more critical approach was initiated by M. L. Bush and Dale Hoak in the mid-1970s. Since then, Somerset has often been portrayed as an arrogant ruler, devoid of the political and administrative skills necessary for governing the Tudor state.
Dudley, by contrast, moved quickly after taking over an almost bankrupt administration in 1549. Working with his top aide William Cecil, Dudley ended the costly wars with France and Scotland and tackled finances in ways that led to some economic recovery. To prevent further uprisings he introduced countrywide policing, appointed Lords Lieutenant who were in close contact with London, and set up what amounted to a standing national army. Working closely with Thomas Cranmer, the Archbishop of Canterbury, Dudley pursued an aggressively Protestant religious policy. They promoted radical reformers to high Church positions, with the Catholic bishops under attack. The use of the Book of Common Prayer became law in 1549; prayers were to be in English, not Latin. The Mass was no longer to be celebrated, and preaching became the centerpiece of church services.
Purgatory, Protestantism declared, was a Catholic superstition that falsified the Scriptures. Prayers for the dead were useless because no one was actually in Purgatory. It followed that prayers to saints, veneration of relics, and adoration of statues were all useless superstitions that had to end. For centuries devout Englishmen had created endowments called chantries designed as good works that generated grace to help them get out of purgatory after they died. Many chantries were altars or chapels inside churches, or endowments that supported thousands of priests who said Masses for the dead. In addition there were many schools and hospitals established as good works. In 1547 a new law closed down 2,374 chantries and seized their assets. Although the Act required the money to go to "charitable" ends and the "public good," most of it appears to have gone to friends of the Court. Historian A.G. Dickens reached much the same conclusion.
The new Protestant orthodoxy for the Church of England was expressed in the Forty-Two Articles of Faith in 1553. But when the king suddenly died, Dudley's last-minute efforts to make his daughter-in-law Lady Jane Grey the new sovereign failed after only nine days of her reign. Queen Mary took over and had Dudley beheaded; less than a year later, after Thomas Wyatt's Protestant rebellion against the queen's marriage to Philip II of Spain, Jane Grey was beheaded as well.
Mary was the daughter of Henry VIII by Catherine of Aragon; she closely identified with her Catholic, Spanish heritage. She was next in line for the throne. However, in 1553, as Edward VI lay dying, he and the Duke of Northumberland plotted to make his first cousin once removed, Lady Jane Grey, the new Queen. Northumberland, a duke, wanted to keep control of the government and promote Protestantism. Edward signed a devise to alter the succession, but that was not legal, for only Parliament could amend its own acts. Edward's Privy Council kept his death secret for three days to install Lady Jane, but Northumberland had neglected to take control of Princess Mary. She fled and organised a band of supporters, who proclaimed her Queen across the country. The Privy Council abandoned Northumberland and proclaimed Mary the sovereign after Jane Grey's nine-day pretended reign. Queen Mary imprisoned Lady Jane and executed Northumberland.
Mary is remembered for her vigorous efforts to restore Roman Catholicism after Edward's short-lived crusade to minimise Catholicism in England. Protestant historians have long denigrated her reign, emphasising that in just five years she burned several hundred Protestants at the stake in the Marian persecutions. However, historiographical revisionism since the 1980s has to some degree improved her reputation among scholars. Christopher Haigh's bold reappraisal of the religious history of Mary's reign painted a picture of the revival of religious festivities and a general satisfaction, if not enthusiasm, at the return of the old Catholic practices. Her re-establishment of Roman Catholicism was reversed by her younger half-sister and successor Elizabeth I.
Protestant writers at the time took a highly negative view, blasting her as "Bloody Mary". John Knox attacked her in his First Blast of the Trumpet against the Monstrous Regiment of Women (1558), and she was prominently vilified in Actes and Monuments (1563), by John Foxe. Foxe's book taught Protestants for centuries that Mary was a bloodthirsty tyrant. In the mid-20th century, H. F. M. Prescott attempted to redress the tradition that Mary was intolerant and authoritarian by writing more objectively, and scholarship since then has tended to view the older, simpler, partisan assessments of Mary with greater scepticism.
Haigh concluded that the "last years of Mary's reign were not a gruesome preparation for Protestant victory, but a continuing consolidation of Catholic strength." Catholic historians, such as John Lingard, argued that Mary's policies failed not because they were wrong but because she had too short a reign to establish them. In other countries, the Catholic Counter-Reformation was spearheaded by Jesuit missionaries; Mary's chief religious advisor, Cardinal Pole, refused to allow the Jesuits in England. Spain was widely seen as the enemy, and her marriage to King Philip II of Spain was deeply unpopular, even though he had practically no role in English government and they had no children. The military loss of Calais to France was a bitter humiliation to English pride. Failed harvests increased public discontent. Although Mary's rule was ultimately ineffectual and unpopular, her innovations regarding fiscal reform, naval expansion, and colonial exploration were later lauded as Elizabethan accomplishments.
Historians often depict Elizabeth's reign as the golden age in English history in terms of political, social and cultural development, and in comparison with Continental Europe. Calling her "Gloriana" and using the symbol of Britannia from 1572 onward marked the Elizabethan age as a renaissance that inspired national pride through classical ideals, international expansion, and naval triumph over the hated and feared Spanish. Elizabeth's reign marks the decisive turning point in English religious history, as a predominantly Catholic nation at the beginning of her reign was predominantly Protestant by the end. Although Elizabeth executed 250 Catholic priests, she also executed some extreme Puritans, and on the whole she sought a moderately conservative position that mixed royal control of the church (with no papal role) with predominantly Catholic ritual and a predominantly Calvinist theology.
Mary, Queen of Scots (lived 1542–87) was a devout Catholic and next in line for the throne of England after Elizabeth; her status became a major domestic and international issue for England. Scotland itself had been unsettled since the death of King James IV at the Battle of Flodden in 1513. The upshot was years of struggle for control of the throne, nominally held by the infant king James V (lived 1512–42, reigned 1513–42), until he came of age in 1528.
Mary of Guise (lived 1515–60) was a French noblewoman close to the French throne. She ruled as the regent for her teenaged daughter Queen Mary, 1554–60. The regent and her daughter were both strong proponents of Catholicism and attempted to suppress the rapid growth of Protestantism in Scotland, and Mary of Guise worked to maintain a close alliance between Scotland and France, called the Auld Alliance. In 1559 the Regent became alarmed that widespread Scottish hostility against French rule was strengthening the Protestant cause, so she banned unauthorised preaching. But the fiery preacher John Knox set Scotland aflame with his preaching, and a coalition of powerful Scottish nobles, calling themselves the Lords of the Congregation, raised a rebellion to overthrow the Catholic Church and seize its lands. The Lords appealed to Elizabeth for English help, but she played a very cautious hand. The 1559 treaty with France called for peace, and she was unwilling to violate it, especially since England had no allies at the time. Supporting rebels against a lawful ruler also violated Elizabeth's deeply held belief in the legitimacy of all royalty. On the other hand, a French victory in Scotland would establish a Catholic state on the northern border supported by a powerful French enemy. Elizabeth first sent money, then artillery, then a fleet that destroyed the French fleet in Scotland. Finally she sent 8,000 troops north. The death of Mary of Guise allowed England, France and Scotland to come to terms in the Treaty of Edinburgh in 1560, which had a far-reaching impact. France permanently withdrew all its forces from Scotland. The treaty ensured the success of the Reformation in Scotland; it began a century of peace with France; it ended any threat of a Scottish invasion; and it paved the way for a union of the two kingdoms in 1603, when the Scottish king James VI inherited the English throne as James I and launched the Stuart era.
When the treaty was signed, Mary was in Paris as the wife of the French King Francis II. When he died in 1560, she returned to Scotland as Queen of Scotland. However, when Elizabeth refused to recognise her as the heir to the English throne, Mary rejected the Treaty of Edinburgh. She made an unfortunate marriage to Lord Darnley, who mistreated her and murdered her Italian favourite David Rizzio. Darnley in turn was murdered by the Earl of Bothwell. Bothwell was acquitted of murder, and she quickly married him. Most people at the time thought she was deeply involved in adultery or murder; historians have argued at length and are undecided. However, rebellion broke out and the Protestant nobles defeated the Queen's forces in 1567. She was forced to abdicate in favour of her infant son James VI; she fled to England, where Elizabeth confined her under house arrest for 19 years. Mary engaged in numerous complex plots to assassinate Elizabeth and become queen herself. Finally, Elizabeth caught her out in the Babington Plot and had her executed in 1587.
Elizabeth's final two decades saw mounting problems that were left for the Stuarts to solve after 1603. John Cramsie, in reviewing the recent scholarship in 2003, argues:
Elizabeth remained a strong leader, but almost all of her earlier advisers had died or retired. Robert Cecil (1563–1612) took over the role of leading advisor long held by his father Lord Burghley. Robert Devereux, 2nd Earl of Essex (1567–1601), was her most prominent general, a role previously held by his stepfather Robert Dudley, who was the love of Elizabeth's life; and the adventurer/historian Sir Walter Raleigh (1552–1618) was a new face on the scene. The three new men formed a triangle of interlocking and opposing forces that was hard to break into. The first vacancy came in 1601, when Devereux was executed for attempting to take the Queen prisoner and seize power. After Elizabeth died, the new king kept Cecil on as his chief advisor and later had Raleigh beheaded.
Numerous popular uprisings occurred during the period; all were suppressed by royal authorities.
The main officials of local government, operating at the county level (also called the "shire"), were the sheriff and the Lord Lieutenant. The power of the sheriff had declined since medieval days, but he was still very prestigious. He was appointed for a one-year term, with no renewals, by the King's Privy Council. He was paid many small fees, but they probably did not meet the sheriff's expenses in terms of hospitality and hiring his under-sheriffs and bailiffs. The sheriff held court every month to deal with civil and criminal cases. He supervised elections, ran the jail and meted out punishments. His subordinates provided staffing for the county's justices of the peace.
The Lord Lieutenant was a new office created by Henry VIII to represent the royal power in each county. He was a person with good enough connections at court to be selected by the king, and he served at the king's pleasure, often for decades. He had limited powers of direct control, so a successful Lord Lieutenant worked through his deputy lieutenants and dealt with the gentry through compromise, consensus, and the inclusion of opposing factions. He was in charge of mobilising the militia if necessary for defence, or to assist the king in military operations. In Yorkshire in 1588, the Lord Lieutenant was the Earl of Huntingdon, who urgently needed to prepare defences in the face of the threatened invasion from the Spanish Armada. The Queen's Privy Council urgently called upon him to mobilise the militia and report on the availability of men and horses. Huntingdon's challenge was to overcome the reluctance of many militia men, shortages of arms, training mishaps, and jealousy among the gentry as to who would command which unit. Despite Huntingdon's last-minute efforts, the mobilisation of 1588 revealed a reluctant society that only grudgingly answered the call to arms. The Armada never landed, and the militia were not actually used. During the civil wars of the mid-17th century, the Lord Lieutenant played an even more important role in mobilising his county either for king or for Parliament.
The day-to-day business of government was in the hands of several dozen justices of the peace (JPs). They handled all the routine police and administrative functions and were paid through a modest level of fees. Other local officials included constables, church-wardens, mayors, and city aldermen. A JP's duties involved a great deal of paperwork – primarily in Latin – and attracted a surprisingly strong cast of candidates, such as the 55 JPs holding office in Devonshire in 1592.
The cultural achievements of the Elizabethan era have long attracted scholars, and since the 1960s they have conducted intensive research on the social history of England.
The House of Tudor produced five monarchs who ruled during this period. Occasionally listed is Lady Jane Grey, sometimes known as the 'Nine Days' Queen' for the shortness of her de facto reign.
The Tudor myth is a particular tradition in English history, historiography and literature that presents the period of the 15th century, including the Wars of the Roses, as a dark age of anarchy and bloodshed, and sees the Tudor period of the 16th century as a golden age of peace, law, order, and prosperity.
Edward VI was the King of England and Ireland from 28 January 1547 until his death in 1553. He was crowned on 20 February at the age of nine. Edward was the son of Henry VIII and Jane Seymour, and England's first monarch to be raised as a Protestant. During his reign, the realm was governed by a regency council because he never reached maturity. The council was first led by his uncle Edward Seymour, 1st Duke of Somerset (1547–1549), and then by John Dudley, 1st Earl of Warwick (1550–1553), who from 1551 was Duke of Northumberland.
Henry VIII was King of England from 1509 until his death in 1547. Henry is best known for his six marriages, and, in particular, his efforts to have his first marriage annulled. His disagreement with Pope Clement VII about such an annulment led Henry to initiate the English Reformation, separating the Church of England from papal authority. He appointed himself Supreme Head of the Church of England and dissolved convents and monasteries, for which he was excommunicated. Henry is also known as "the father of the Royal Navy," as he invested heavily in the navy, increasing its size from a few to more than 50 ships, and established the Navy Board.
Mary I, also known as Mary Tudor, and as "Bloody Mary" by her Protestant opponents, was Queen of England and Ireland from July 1553 until her death in 1558. She is best known for her vigorous attempt to reverse the English Reformation, which had begun during the reign of her father, Henry VIII. Her attempt to restore to the church the property confiscated in the previous two reigns was largely thwarted by parliament, but during her five-year reign, Mary had over 280 religious dissenters burned at the stake in the Marian persecutions.
The House of Tudor was an English royal house of Welsh origin, descended from the Tudors of Penmynydd. Tudor monarchs ruled the Kingdom of England and its realms, including their ancestral Wales and the Lordship of Ireland from 1485 until 1603, with six monarchs in that period: Henry VII, Henry VIII, Edward VI, Jane, Mary I and Elizabeth I. The Tudors succeeded the House of Plantagenet as rulers of the Kingdom of England, and were succeeded by the House of Stuart. The first Tudor monarch, Henry VII of England, descended through his mother from a legitimised branch of the English royal House of Lancaster, a cadet house of the Plantagenets. The Tudor family rose to power in the wake of the Wars of the Roses (1455–1487), which left the Tudor-aligned House of Lancaster extinct in the male line.
The Elizabethan era is the epoch in the Tudor period of the history of England during the reign of Queen Elizabeth I (1558–1603). Historians often depict it as the golden age in English history. The symbol of Britannia was first used in 1572, and often thereafter, to mark the Elizabethan age as a renaissance that inspired national pride through classical ideals, international expansion, and naval triumph over Spain.
Anne of Cleves was Queen of England from 6 January to 9 July 1540 as the fourth wife of King Henry VIII. Not much is known about Anne before 1527, when she became betrothed to Francis, Duke of Bar, son and heir of Antoine, Duke of Lorraine, although their marriage did not proceed. In March 1539, negotiations for Anne's marriage to Henry began, as Henry believed that he needed to form a political alliance with her brother, William, who was a leader of the Protestants of western Germany, to strengthen his position against potential attacks from Catholic France and the Holy Roman Empire.
Thomas Cromwell, 1st Earl of Essex, was an English lawyer and statesman who served as chief minister to King Henry VIII from 1534 to 1540, when he was beheaded on orders of the king.
Edward Seymour, 1st Duke of Somerset PC, also known as Edward Semel, was the eldest surviving brother of Queen Jane Seymour (d. 1537), the third wife of King Henry VIII. He was Lord Protector of England from 1547 to 1549 during the minority of his nephew King Edward VI (1547–1553). Despite his popularity with the common people, his policies often angered the gentry and he was overthrown.
Reginald Pole was an English cardinal of the Roman Catholic Church and the last Roman Catholic archbishop of Canterbury, holding the office from 1556 to 1558, during the Counter-Reformation.
Richard Rich, 1st Baron Rich, was Lord Chancellor during King Edward VI of England's reign, from 1547 until January 1552. He was the founder of Felsted School with its associated alms houses in Essex in 1564. He was a beneficiary of the Dissolution of the Monasteries, and persecuted opponents of church and state. He personally tortured the English writer, poet and Protestant martyr Anne Askew.
The Acts of Supremacy are two acts passed by the Parliament of England in the 16th century that established the English monarchs as the head of the Church of England. The 1534 Act declared King Henry VIII and his successors as the Supreme Head of the Church, replacing the pope. The Act was repealed during the reign of the Catholic Queen Mary I. The 1558 Act declared Queen Elizabeth I and her successors the Supreme Governor of the Church, a title that the British monarch still holds.
Thomas Wriothesley, 1st Earl of Southampton, KG was an English peer, secretary of state, Lord Chancellor and Lord High Admiral. A naturally skilled but unscrupulous and devious politician who changed with the times and personally tortured Anne Askew, Wriothesley served as a loyal instrument of King Henry VIII in the latter's break with the Catholic church. Richly rewarded with royal gains from the Dissolution of the Monasteries, he nevertheless prosecuted Calvinists and other dissident Protestants when political winds changed.
The Kingdom of England was a sovereign state on the island of Great Britain from 12 July 927, when it emerged from various Anglo-Saxon kingdoms, until 1 May 1707, when it united with Scotland to form the Kingdom of Great Britain. The Kingdom of England was among the most powerful states in Europe during the medieval period.
Early modern Britain is the history of the island of Great Britain roughly corresponding to the 16th, 17th, and 18th centuries. Major historical events in Early Modern British history include numerous wars, especially with France, along with the English Renaissance, the English Reformation and Scottish Reformation, the English Civil War, the Restoration of Charles II, the Glorious Revolution, the Treaty of Union, the Scottish Enlightenment and the formation and collapse of the First British Empire.
The Tudor navy was the navy of the Kingdom of England under the ruling Tudor dynasty (1485–1603). The period involved important and critical changes that led to the establishment of a permanent navy and laid the foundations for the future Royal Navy.
The English Reformation took place in 16th-century England when the Church of England broke away from the authority of the Pope and the Roman Catholic Church. These events were, in part, associated with the wider European Protestant Reformation, a religious and political movement that affected the practice of Christianity in western and central Europe. Causes included the invention of the printing press, increased circulation of the Bible and the transmission of new knowledge and ideas among scholars, the upper and middle classes and readers in general. The phases of the English Reformation, which also covered Wales and Ireland, were largely driven by changes in government policy, to which public opinion gradually accommodated itself.
The Reformation in Ireland was a movement for the reform of religious life and institutions that was introduced into Ireland by the English administration at the behest of King Henry VIII of England. His desire for an annulment of his marriage was known as the King's Great Matter. Ultimately Pope Clement VII refused the petition; consequently, in order to give legal effect to his wishes, it became necessary for the King to assert his lordship over the Catholic Church in his realm. In passing the Acts of Supremacy in 1534, the English Parliament confirmed the King's supremacy over the Church in the Kingdom of England. This challenge to Papal supremacy resulted in a breach with the Catholic Church. By 1541, the Irish Parliament had agreed to the change in status of the country from that of a Lordship to that of Kingdom of Ireland.
The will of Henry VIII of England was a significant constitutional document, or set of contested documents created in the 1530s and 1540s, affecting English and Scottish politics for the rest of the 16th century. In conjunction with legislation passed by the English Parliament, it was supposed to have a regulative effect in deciding the succession to the three following monarchs of the House of Tudor, the three legitimate and illegitimate children of King Henry VIII of England. Its actual legal and constitutional status was much debated; and arguably the succession to Elizabeth I of England did not respect Henry's wishes.
Richard Bruce Wernham was an English historian of Elizabethan England. After his death The Times called him "the leading historian of English foreign policy in the 16th century".
Edmund Steward otherwise Stewart or Stewarde was an English lawyer and clergyman who served as Chancellor and later Dean of Winchester Cathedral until his removal in 1559.
Mexicans have contributed to making the United States in pivotal and enduring ways. In 1776, more of the territory of the current United States was under Spanish sovereignty than in the 13 colonies that rejected British rule. Florida, the Gulf coast to New Orleans, the Mississippi to St. Louis, and the lands from Texas through New Mexico and California all lived under Spanish rule, setting Hispanic-Mexican legacies. Millions of pesos minted in Mexico City, the American center of global finance, funded the war for U.S. independence, leading the new nation to adopt the peso (renamed the dollar) as its currency.
The U.S. repaid the debt by claiming Spanish/Mexican lands—buying vast Louisiana territories (via France) in 1803; gaining Florida by treaty in 1819; sending settlers into Texas (many undocumented) to expand cotton and slavery in the 1820s; enabling Texas secession in 1836; provoking war in 1846 to incorporate Texas’s cotton and slave economy—and California’s gold fields, too. The U.S. took in land and peoples long Spanish and recently Mexican—often mixing European, indigenous, and African ancestries. The 1848 Treaty of Guadalupe Hidalgo recognized those who remained in the U.S. as citizens. And the U.S. incorporated the dynamic mining-grazing-irrigation economy that had marked Spanish North America for centuries and would long define the U.S. West.
Debates over slavery and freedom in lands taken from Mexico led to the U.S. Civil War while Mexicans locked in shrunken territories fought over liberal reforms and then faced a French occupation—all in the 1860s. With Union victory, the U.S. drove to continental hegemony. Simultaneously, Mexican liberals led by Benito Juárez consolidated power and welcomed U.S. capital. U.S. investors built Mexican railroads, developed mines, and promoted export industries—including petroleum. The U.S. and Mexican economies merged; U.S. capital and technology shaped Mexico while Mexican workers built the U.S. west. The economies were so integrated that a U.S. downturn, the panic of 1907, was pivotal to setting off Mexico’s 1910 revolution, a sociopolitical conflagration that focused Mexicans while the U.S. joined World War I.
Afterwards, the U.S. roared in the 20s while Mexicans faced reconstruction. The U.S. blocked immigration from Europe, and still welcomed Mexicans to cross a little-patrolled border to build dams and irrigation systems, cities and farms across the west. When depression hit in 1929 (it began in New York, spread across the U.S., and was exported to Mexico), Mexicans became expendable. Denied relief, they got one-way tickets to the border, forcing thousands south—including children born as U.S. citizens.
Mexico absorbed the refugees thanks to new industries and land distributions—reforms culminating in the nationalization of the oil industry in 1938. U.S. corporations screamed foul and FDR arranged a settlement (access to Mexican oil mattered as World War II loomed). When war came, the U.S. needed more than oil. It needed cloth and copper, livestock and leather--and workers, too. Remembering the expulsions of the early 30s, many resisted going north. So the governments negotiated a labor program, recruiting braceros in Mexico, paying for travel, promising decent wages and treatment. 500,000 Mexican citizens fought in the U.S. military. Sent to deadly fronts, they suffered high casualty rates.
To support the war, Mexican exporters accepted promises of post-war payment. With peace, accumulated credits allowed Mexico to import machinery for national development. But when credits ran out, the U.S. subsidized the reconstruction of Europe and Japan, leaving Mexico to compete for scarce and expensive bank credit. Life came in cycles of boom and bust, debt crises and devaluations. Meanwhile, U.S. pharmaceutical sellers delivered the antibiotics that had saved soldiers in World War II to families across Mexico. Children lived—and Mexico’s population soared: from 20 million in 1940, to 50 million by 1970, 100 million in 2000. To feed growing numbers, Mexico turned to U.S. funding and scientists to pioneer a “green revolution.” Harvests of wheat and maize rose to feed growing cities. Reliance on machinery and chemical fertilizers, pesticides, and herbicides, however, cut rural employment. National industries also adopted labor saving ways, keeping employment scarce everywhere. So people trekked north, some to labor seasonally in a bracero program that lasted to 1964; others to settle families in once Mexican regions like Texas and California and places north and east.
Documentation and legality were uncertain. U.S. employers’ readiness to hire Mexicans for low wages was not. People kept coming. U.S. financing, corporations, and models of production shaped lives across the border; Mexican workers labored everywhere, too. With integrated economies, the nations faced linked challenges. In the 1980s the U.S. suffered from “stagflation” while Mexico faced a collapse called the “lost decade.” In 1986, Republican President Ronald Reagan authorized a path to legality for thousands of Mexicans in the U.S. tied to sanctions on employers aimed to end new arrivals. Legal status kept workers here; failed sanctions enabled employers to keep hiring Mexicans—who kept coming. They provided cheap and insecure workers to U.S. producers—subsidizing profits in times of challenge.
The 1980s also saw the demise of the Soviet Union, the end of the Cold War, and the presumed triumph of capitalism. What would that mean for people in Mexico and the U.S.? Reagan corroded union rights, leading to declining incomes, disappearing pensions, and enduring insecurities among U.S. workers. President Carlos Salinas of Mexico’s dominant PRI attacked union power—and in 1992 ended rural Mexicans’ right to land. A transnational political consensus saw the erosion of popular rights as key to post-Cold War times.
Salinas proposed NAFTA to Reagan’s Republican successor, George H.W. Bush. The goal was to liberate capital and goods to move freely across borders—while holding people within nations. U.S. business would profit; Mexicans would continue to labor as a reservoir of low wage workers—at home. The treaty was ratified in Mexico by Salinas and the PRI, in the U.S. by Democratic President Clinton and an allied Congress.
As NAFTA took effect in 1994, Mexico faced the Zapatista rising in the south, then a financial collapse—before NAFTA could bring investment and jobs. With recovery, the Clinton era hi-tech boom saw production flow to China. Mexico gained where transport costs mattered—as in auto assembly. But old textiles and new electronics went to Asia. Mexico returned to growth in the late 1990s, with jobs still scarce for a population nearing 100 million. Meanwhile, much of Mexican agriculture collapsed. NAFTA ended tariffs on goods crossing borders. The U.S. subsidizes corporate farmers--internal payments enabling agribusiness to sell below cost. NAFTA left Mexican producers to face U.S. subsidized staples. Mexican growers could not compete and migration to the U.S. accelerated.
NAFTA brought new concentrations of wealth and power across North America. In Mexico, cities grew as a powerful few and favored middle sectors prospered; millions more struggled with informality and marginality. The vacuum created by agricultural collapse and urban marginality made space for a dynamic violent drug economy. Historically, cocaine was an Andean specialty, heroin an Asian product. But as the U.S. pressed against drug economies elsewhere, Mexicans—some enticed by profit; many searching for sustenance—turned to supply U.S. consumers.
U.S. politicians and ideologues blame Mexico for the “drug problem”—a noisy “supply side” understanding that is historically untenable. U.S. demand drives the drug economy. The U.S. has done nothing effective to curtail consumption—or to limit the flow of weapons to drug cartels in Mexico. Laying blame helps block any national discussion of the underlying social insecurities brought by globalization—deindustrialization, scarce employment, low wages, lowered benefits, vanishing pensions—insecurities that close observers know fuel drug dependency. Drug consumption in the U.S. has expanded as migration from Mexico now slows (mostly due to slowing population growth)—a conversation steadfastly avoided.
People across North America struggle with shared challenges—common insecurities spread by globalizing capitalism. Too many U.S. politicians see advantages in polarization, blaming Mexicans for all that ails life north of the border. Better that we work to understand our inseparable histories. Then we might work toward a prosperity shared by diverse peoples facing common challenges in an integrated North America.
John Tutino is editor of Mexico and Mexicans in the Making of the United States, and the author of Making a New World: Founding Capitalism in the Bajío and Spanish North America and The Mexican Heartland: How Communities Shaped Capitalism, a Nation, and World History, 1500-2000.
Modern times originated in Italy in the 14th century during the period known as the Renaissance. A rich development of Western civilization marking the transition from the Middle Ages, the Renaissance refers to rebirth, or rediscovery, by scholars (humanists) of Greco-Roman culture.
The period prior to the Renaissance, the High Middle Ages, was marked by relative political stability, economic expansion, wide contact with other cultures, and flourishing urban civilization. However, the High Middle Ages served only to establish the foundations for change and to develop the background for the new view of the world. The Italian Renaissance was a distinct period in time, noted for ushering in modern civilization and characterized by the transformation of political, economic, and social life.
Renaissance civilization revamped the political scene from the Middle Ages into the modern age. The despotism created during the Renaissance bestowed incomparable unity and power upon Europe through the individual (Burckhardt, 509). Leaders such as Visconti displayed tremendous strength and vitality. During the 14th century, people no longer received and respected the Emperors as feudal lords, but as possible leaders and supporters of power already in existence (Burckhardt, 507).
The reverence of the heads of government aroused feelings of patriotism in the hearts of the people. For the first time a modern political spirit of Europe can be detected (Burckhardt, 507). Political support, or nationalism, is still evident in today’s society and can be attributed to the Renaissance.
The Renaissance also harbored secular ideas of the state. The Renaissance marked the transition from the ecclesiastical to the secular outlook. The people began to search for answers, and a growing emphasis on reason, rather than faith, became apparent. The political philosopher Marsilius of Padua proclaimed that according to the writings of Aristotle the Roman bishop called pope, or any other priest or bishop, or spiritual minister, collectively or individually, as such, has and ought to have no coercive jurisdiction over the property or person of any priest or bishop, or deacon, or group of them, and still less over any secular ruler or government, community, group, or individual (535).
Therefore, the ecclesiastical should not lawfully exercise any political power. Furthermore, Niccolo Machiavelli went to extremes by stating that Christian virtues and politics would result in an unstable form of government. In concurrence with Machiavellian politics, Isaiah Berlin suggests that to choose to lead a Christian life is to condemn oneself to political impotence…if one wishes to build a glorious community like those of Athens or Rome at their best, then one must abandon Christian education and substitute one better suited to the purpose (542). Nevertheless, secular ideas of the state were fostered during the Renaissance and have become one of the most critical components to a successful, modern nation.
Renaissance civilization also marked the birth of capitalism, the economic machine upon which the United States runs today. During the Renaissance, the economy went from feudal based to capitalist based. The revival of trade, urban life, and money economy had a dynamic influence in the midst of the agrarian feudal society of the high Middle Ages (Ferguson, 554).
As Wallace K. Ferguson says, …the historians whose special interest was religion, philosophy, literature, science, or art have all to frequently striven to explain the developments in these fields without correlating them with the changes in the economic, social, and political structure of society (554).
Ferguson went on to explain that medieval civilization, founded as it was upon the basis of land tenure and agriculture, could not continue indefinitely to absorb an expanding urban society and money economy without losing its essential character, without gradually changing into something recognizably different (554). The growth of a money economy brought changes in the whole character of urban economic and social organizations, still evident in modern times.
Into this agrarian feudal society the revival of commerce and industry, accompanied by the growth of towns and money economy, introduced a new and alien element (Ferguson, 554). This element was capitalism. The effect of the new economy was to stimulate the existing medieval civilization, freeing it from the economic, social, and cultural restrictions, making possible the rapid development of the economy.
The rise of capitalism in the Renaissance had measurable effects on the rest of society. For instance, the fall of feudalism gave way to the rise of city-states or centralized territorial states. In addition, the universal authority of the church was shaken by the growing power of the national states, while its internal organization was transformed by the evolution of a monetary fiscal system (Ferguson, 554).
Meanwhile, within the cities, the growth of capital was bringing significant changes in the whole character of urban economic and social organizations. Considering all the changes inspired by capitalism, the result was an essential change in the character of European civilization. This new and extraordinary economic system known as capitalism would develop during the time of the Renaissance and would become the economic clockwork of modern America.
Another distinct characteristic of the Renaissance present in today's society is a strong emphasis on individuality. During the Middle Ages, the common man was marked by "faith, illusion, and childish prepossession" (Burckhardt, 508). Men viewed themselves only as members of some general category – race, people, party, family, or corporation (Burckhardt, 508).
In earlier times, the development of free personality could not be detected in Northern Europe; however, with the onset of the Renaissance, man became a "spiritual individual" (Burckhardt, 508). Toward the close of the 13th century, Italy was overwhelmed with individuality, a recurring theme of today's society. The period was characterized by a movement, known as humanism, in which human values and capabilities were the central focus. The individual spirit was beginning to appear in Europe, and in today's society this is a very important, if not necessary, idea.
It was the rise of humanism – this "unfolding of the treasures of human nature in art and literature" (Burckhardt, 508) – which encompassed individuality. The contribution of Italian humanism to literature and scholarship made an impact that has remained in all regions of European civilization into the twentieth century (Palmer, 59). W.K. Ferguson comments that at the time of the Renaissance there was "the appearance of a growing class of urban laymen who had the leisure and means to secure a liberal education and to take an active part in every form of intellectual and aesthetic culture" (555).
To learn and appreciate culture reflected the new conviction that life was worthwhile for its own sake and not merely preparation for the hereafter. People "wished and were forced to know all the inward resources of their own nature, passing or permanent; and their enjoyment of life was enhanced and concentrated by the desire to obtain the greatest satisfaction from a possibly very brief period of influence" (Burckhardt, 508). According to Burckhardt, individualism inspired many people to achieve all that they could in their lifetimes – a very modern belief today, where the sky is the limit.
Italy, in the 14th century, was characterized by a distinct period in time known as the Renaissance. The Renaissance was not merely an extension of the Middle Ages, but rather the transitional period in which the increasing lay culture of the cities, the political centralization of the territorial states, and the dominance of the money economy replaced the feudal and ecclesiastical civilization of the medieval world (Conlon). The rediscovery of classical Greco-Roman culture provided the world with great treasures.
The people, inspired by the need for reform, abandoned the Middle Ages and entered into a time of great intellectual, social, political, religious, and economic reform. The humanistic development of individualism, the dramatic change in political ideas and structures, and the introduction of capitalism remain powerful concepts in modern society that originated in the Renaissance. Without question, Renaissance civilization was the foundation of modern times.
Example #2 – What were the achievements of Renaissance architecture?
The era known to us as the Renaissance began around the beginning of the fifteenth century, in Florence. The philosophy behind the whole movement is one of rebirth, or the re-establishing of ancient classical culture.
Following the collapse of the Roman civilization, much of Europe fell into decline, and a great deal of information concerning that period was lost. Knowledge of the architecture of that age could therefore only be acquired via the classical ruins that litter the Italian landscape and through the writings of the Roman architect Vitruvius.
Thus one of the greatest (and most fundamental) achievements of the renaissance is the rediscovery of the basic elements of classical architectural design, especially those concerning construction. The results of this achievement can be seen in the construction of buildings such as Florence Cathedral.
Begun in 1294, Florence Cathedral was so ambitious that the Florentine people, in their enthusiasm to build an impressively large cathedral, almost exceeded the limit of their abilities and consequently could find no method to cover it. This problem was left unresolved for over a century before an architect by the name of Brunelleschi was able to find a solution.
Filippo Brunelleschi was born in 1377. He is considered to be the greatest architect of the early Renaissance and is credited with the development of the Renaissance style in buildings such as the Foundling Hospital. In 1420 he was appointed, along with his fellow architect Ghiberti, to construct a dome over Florence Cathedral.
The main difficulty was that the opening was almost 140 feet in diameter and 180 feet off the ground, which made it impossible to build a framework strong enough to support a dome. In truth, no tree would have been long enough to provide timber to bridge the gap, or, if one had been, it would have broken under its own weight even before taking the weight of any stone.
The solution that Brunelleschi put forward was to build the dome in a series of horizontal courses laid in a herringbone pattern, so that the courses would bond together, each carrying its own weight and supporting the next. There was also the question of the weight of the dome. The drum onto which the dome was to be built was already in place; therefore building the dome out of concrete (in the manner of the Pantheon in Rome) was out of the question, as the weight would crush the existing drum.
As a result, the dome was built with ribs, the lightest possible infill, and an outer and inner shell. Such a solution could only have been reached through the intense study of classical ruins. For centuries no one had understood the structure of the ancient Roman domes and vaults; this fact serves to heighten Brunelleschi's achievement, as he had to consider issues that no one else would have thought of contemplating.
Another element of classical antiquity that was reintroduced was the orders. The orders consisted of five styles: the Tuscan, Doric, Ionic, Corinthian, and Composite, which varied in popularity over the years.
Copying the examples of ancient Rome, Renaissance architects overlaid the orders, using a different one for each story of a building. Similarly, the different stories of a building took on different facades. An example of this can be seen in Michelozzo's Palazzo Medici in Florence.
There are different degrees of rustication of the stonework within the levels of the building. The ground floor is heavily rusticated in the manner of a fortress; the first floor is characterized by drafted stonework with incised lines, and the second floor features ashlar stonework. This distinguishing of levels, whether it is through columns or brickwork reflects the classical approach of creating logical relationships within a building.
Another achievement of the Renaissance was the development of perspective and the reestablishment of the classical importance of proportion. Throughout the renaissance, the proportions of a building determined its beauty. The great scholar and architect Leon Battista Alberti is quoted as saying,
"I shall define beauty to be a harmony of all parts, in whatsoever subject it appears, fitted together with such proportion and connection, that nothing could be added, diminished or altered but for the worse."
Renaissance (or ancient classical) buildings are based on a modular system of proportions, whereby the module is half the diameter of the column base. The whole building will then be designed around that measurement, as the module determines not only the size of the column but the spacing between them. Likewise in the rest of the building, every detail will be related to every other detail.
This fascination with proportion can be seen in Alberti's Florentine church of Sta Maria Novella. The facade is divided so that the height of the building is equal to its width, forming a square, which is then cut exactly in half horizontally. The main doors divide the lower story into two squares, each of which is a quarter of the area of the large square. The upper story, which is crowned by a classical pediment, is exactly the same size as each of the lower squares.
Another example is Bramante's Tempietto in Rome. Born in Urbino in approximately 1444, Bramante was a painter in his early years and is thought to have been a pupil of both Piero della Francesca and Mantegna. Certainly, the harmony of Piero's paintings and Mantegna's interest in classical civilization are evident in his work.
The Tempietto was to be a memorial to St Peter and was intended to be part of a courtyard of concentric circles, but the scheme was never completed. The building itself is made up of two cylinders, the peristyle and the cella (the peristyle being low and wide, and the cella tall and narrow). The width of the peristyle is equal to the height of the cella (excluding the dome).
The dome is hemispherical both internally and externally and thus proportionate to the height of the cella. The reintroduction of the importance of proportion was a great achievement of the Renaissance, but admittedly one that takes a little time to understand and appreciate. To the finely tuned eye of a Renaissance architect, an opening five inches too wide could be seen as an eyesore; such sensitivity is evidently a highly refined skill.
The idea of ideal proportions was also applied to human anatomy, most famously in Leonardo da Vinci's Vitruvian Man. Similarly, whole buildings were proportioned to the human body, particularly because in ancient times the column was thought of as being in the image of a human body.
Overall it is possible to conclude that the achievements of Renaissance architecture were the revival of both structural and stylistic properties of Ancient architecture.
However, this does not mean that the architects of the Renaissance were satisfied with merely copying the classical style. The Renaissance was a period of great achievement scientifically, artistically, and philosophically, and this resulted in supreme confidence and ambition, which meant that architects were constantly striving for perfection and the ideal form and would not be content with just copying.
There is also a separate factor which provides the main difference between the Renaissance and the ancient civilizations: the embrace of the Christian faith throughout Europe during the Renaissance. This can be seen through the importance placed on the concept of proportion. An example of this can be seen in Alberti's church of S. Sebastiano in Mantua. The plan is in the shape of a Greek cross, which is a perfect form and therefore symbolizes the perfection of God. In return, Renaissance architecture also influenced the Christian faith through the introduction of centrally planned churches, banishing the false assumption that religious buildings must be cruciform in plan. Christianity affected the way of thinking of the Renaissance. The French scholar Émile Mâle summarized it perfectly when he wrote,
"Thus, the traveler who made his way from the Colosseum to St Peter's by way of Constantine's Basilica and the Pantheon, who visited the Sistine Chapel and the best of Raphael's Stanze, has seen in a day the finest things in Rome. He will have learned at the same time what the Renaissance was; it was Antiquity ennobled by the Christian faith."
This encapsulates the mentality of the Renaissance, while the very man who so inspired the Renaissance can sum up its architectural achievements:
"Architecture consists of Order and of Arrangement and of Proportion and Symmetry and Décor and Distribution" (Vitruvius, De Architectura I).
Example #3 – The Renaissance Humanistic Concept of Man
Each century brings something new into this world. Some ages thus become prominent; others don't seem to contribute much to humanity. The Renaissance became the symbol of awakening, the symbol of excellence and rebirth. It gave birth to doctrines and principles that dominate philosophy to this day. Humanism developed as one of the principal philosophical concepts of the Renaissance.
What does this concept mean, and why is it so crucial to the understanding of the epoch of the Renaissance? With the philosophy of humanism, 14th-century Italy obtained the major doctrines of the revival: the study of the classics, an emphasis on learning and on human values, and a concern with man and his problems.
The latter is the main difference between the Middle Ages and the Renaissance: the Renaissance is man-centered, while the Middle Ages were God-centered. The problems of free will, virtue, and fate are closely connected and broadly discussed by the thinkers of the Renaissance.
From the very beginning of humanistic thought, starting from Petrarch, the idea of an individual’s importance started to develop among the literary philosophers. In his writings, Petrarch expresses a great concern with the ignorance of men towards themselves. “Men go to admire the heights of mountains, the great floods of the sea, the courses of rivers, the shores of the ocean, and the orbits of the stars, and neglect themselves,” he quotes St. Augustine in “The Ascent of Mount Ventoux”.
In fact, this entire writing is an allegorical description of the struggle within himself that had eventually led to the conversion and elevation to the higher state of mind. The mountain itself can be an allegory for all the knowledge to be mastered in order to obtain the wisdom and virtue of happiness, or it could be a deceitful path to faith in God.
Petrarch believes that our understanding of the world starts with self-exploration and awareness attained through classical learning, later known as the Studia Humanitatis. He makes perhaps the first humanistic attempt to stress the significance of the human being in modern philosophical thought.
The characteristic feature of the Renaissance is the praise of the human mind, first found in ancient Greece. Nothing is admirable besides the mind; compared to its greatness nothing is great. Man is primarily praised for his reason, for his arts and skills, derived from his own potential through the path of secular knowledge. But human dignity has to be attained and realized through man’s effort.
Only then, as expressed in Marsilio Ficino's writing of 1468, does man become a dominant power over all elements and animals; he is the ruler of nature and is assigned a central place in the hierarchy of the universe. While extremely religious, "Five Questions Concerning the Mind" deals with a system of the universe only because it justifies the glorification of the human soul.
The entire concept of human "dignity" was, in fact, based upon a heroic vision of humanity. The glorification of man goes further in Vives' story "A Fable about Man," where the human is given the power of self-transformation. The perfect human "determines his own being, has material power over the world, and moral power over himself."
Man is able to choose his own destiny, to become a sovereign beautiful being. Everything depends on his free will, according to Pico; man’s dignity is based on his “freedom”. The human has to strive for “dignity” by asserting his potentials, by cultivating reason rather than blind feeling within his mind. Only tasks that are morally and intellectually worthwhile can lead us beyond “the narrow confines of his personal interests and ambitions.”
A number of humanistic treatises deal with individual virtues. Some of them are discussed in the works of Neapolitan humanist, Giovanni Pontano: courage, altruism, or discretion. The notion of finding the precise philosophical definitions for them continued with the later literary works.
During the three centuries of the Renaissance in Western Europe, even though it went through some changes, the concept of self did not lose its original importance. The praise of the human mind and knowledge, as well as accent on classical studies, remained consistent even by the end of the era.
The difference between the works of early humanists like Petrarch and later ones such as Ficino and Pico becomes very clear: they all draw on the same roots of the classical philosophers and take over from them a profound concern with humanity, but the later humanists develop a completely new idea of the distinctive human position within the system of the universe.
Furthermore, his dignity is now defined and justified in terms of this position. Through the purification of the soul and the attainment of supreme knowledge, man becomes a central figure in the universal hierarchy. The figure of man becomes equal to God, and his authority is almost unquestionable.
The idea of self-fashioning gradually took the place of the concept of a sovereign human being that had only been spoken of in Petrarch's time. The self-fashioning doctrine came from the North, developed by Erasmus and Thomas More. It was based on the ability of knowledge to shape human personality, and thus the ability of man to make his own choices was re-established in society.
The enthusiasm for the growing importance of the concept of individualism was strengthened with the emergence of civic humanism, which brought a concept of man skillful in all the secular professions. Castiglione's "The Book of the Courtier" creates a perfect picture of the "Renaissance man" of the end of the Renaissance era.
In his book, Castiglione discusses the ideal courtier, who has to possess all the virtues: kindness, courage, wisdom, and knowledge. He suggests that knowledge would help someone attain all those virtues, thus becoming a skillful spokesman, polemicist, writer, and thinker.
Even though the capacity to grow was available only to the elite, the Renaissance, due to its humanistic tendency that allowed a crucial shift toward the development of the sciences and arts, philosophy, and literature, became an important era in the history of mankind. The idea of human importance played a major role in the process of Renaissance achievements.
The mere fact of the existence of severe self-criticism of one's abilities encouraged people to learn and thus evolve, producing masterpieces in art and relics of modern philosophy. The Renaissance has been praised for its magnificent heritage, seen as the first step in an intellectual development that led to the Enlightenment and modern secular thought. Humanism became its major vehicle, allowing people to believe in their aim to achieve a fair measure of human happiness.
Renaissance is a term with a variety of meanings but is used widely in the discussion of European history. Renaissance originates from the Italian word rinascere and refers to the act of being reborn. It is believed that during the time from about 1400 AD to around 1600 AD, Europe was reborn. Originally the term Renaissance referred only to the time when man rediscovered the knowledge of the ancient Greeks and Romans.
However, modern historians have realized these rediscoveries were also crucial to the formation of modern culture. The term Renaissance is now used to indicate all the historical developments that marked the end of the Middle Ages and the beginning of modern history. Thus, the term Renaissance has now taken on a more significant meaning: not only does the Renaissance mean the rebirth of knowledge, but it also represents a step away from the past and a leap toward the future.
The Renaissance overlapped the end of a period in European history called the Middle Ages. During this time, the great accomplishments of the ancient Greeks and Romans had been largely, though not entirely, forgotten. With the ending of the Middle Ages, the great cultural movement of the Renaissance arose. Beginning in Italy, the new Renaissance spirit spread to England, France, Germany, the Netherlands, Spain, and other countries. In Italy during the 14th and 15th centuries, certain scholars and historians began to display a remarkable new historical self-consciousness.
They believed their own time was a new age, at once sharply different from the barbaric darkness which they imagined had occurred in the centuries before. They grew to believe that there was more to be discovered about mankind and the world than medieval people had known. The Italians were eager to rediscover what the clever Greeks and Romans had known in ancient times, as well as to make their own intelligent attempts to understand the world. This renewed interest in the world and in mankind is called Humanism.
Humanism was the most significant intellectual movement of the Renaissance. Humanism during the Renaissance received its name from one of the earliest concerns of the humanists: the need for a new education curriculum that would emphasize a group of subjects known collectively as the Studia Humanitatis, involving grammar, history, poetry, ethics, and rhetoric. However, this new curriculum conflicted directly with traditional education, which involved logic, science, and physics, and sharp clashes often occurred between the two camps of educators. More was at stake, however, than the content of education.
Traditional education was intended chiefly to prepare students for careers in medicine, law, and above all theology. To Renaissance humanists, this seemed too narrow, too abstract, and too exclusively intellectual. They proposed a system of education that centered on the general responsibilities of citizenship and social leadership. Humanism's essential contribution to the modern world is not found in its concern with ancient knowledge, but in its new attitude of flexibility and openness to all the possibilities in life.
As people received an education involving leadership, they began to gain more confidence. More people began to reject ideas about science put forward by the ancient Greeks and began to search for the truth. They realized that the Greeks' ideas were often intelligent, but also often wrong. Many people still did not want the old ideas disproved and threatened scientists to stop them from having new ideas. However, this did not prevent many brilliant scientific inventions from being produced at this time.
A great scientist of the Renaissance was the Polish scholar Nicolaus Copernicus, who developed the theory that the earth was a moving planet. He is considered the founder of modern astronomy. In Copernicus's time, most astronomers accepted the theory the Greek astronomer Ptolemy had formulated nearly 1400 years earlier. Ptolemy stated that the Earth was the center of the universe and motionless. He also stated that all the observed motions of the heavenly bodies were real and that those bodies moved in complicated patterns around the Earth.
As the church supported Ptolemy's theory, no one dared to challenge it until Copernicus. Copernicus believed Ptolemy's theory was too complicated. He decided that the simplest and most systematic explanation was that every planet, including the Earth, revolved around the sun. The Earth also had to spin around its axis once every day. Copernicus couldn't prove his theory, but his explanation of heavenly motion was mathematically strong and less complicated than Ptolemy's. The later work of scientists such as Galileo Galilei helped to prove that Copernicus's theory was correct.
Galileo was a Florentine physicist, philosopher, and inventor, whose name became the chief emblem of Renaissance science and of an ensuing technological revolution. In 1609, he heard that the rulers of Florence and Venice were searching for someone who could invent an instrument that made distant objects appear closer.
Galileo set to work to construct one, and within a few days he had finished, naming it a telescope. During the winter, he turned his telescope to the sky with startling results. He announced that the moon's surface was, like the earth's, irregular and mountainous; that the Milky Way was made up of a host of stars; and that the planet Jupiter was accompanied by at least four satellites.
The electrifying effects of these discoveries were amazing. They showed that the human senses could be aided artificially to discover new truths about nature, something that neither philosophy nor theology had previously contended with. Most importantly, however, they showed that Ptolemy's astronomical theory was impossible. Galileo had proven Copernicus's theory correct, and his work had great importance in the history of ideas.
The Renaissance produced many important people who invented or theorized very important advances in history. They all became strong symbols of revolt against the forces of authority, while the Renaissance flourished on the power of the question. The Renaissance period provided modern culture with a variety of advances in technology, art, and science, and most importantly it gave mankind confidence.
The ancient civilizations, in particular the Greeks and Romans, laid the foundations for civilizations and the Renaissance added the most important ingredient; the ability to ask why. It is appropriate to use the label Rebirth to describe European history in the 15th and 16th centuries.
The Renaissance is the period that modern scholars consider to span roughly 1350-1600. Abundant in this new age were inventions and individualistic beliefs. Changes in music and cultural behavior were some of the most evident developments from its predecessor, the Middle Ages.
It was a period of new inventions and beliefs, of musical styles of freedom and individuality. It was also a period of exploration and adventure from 1492-1519, which saw the likes of Christopher Columbus, Vasco da Gama, and Ferdinand Magellan. This was a drastic difference from the Middle Ages, when the church held most of the power. That power was slowly transferred to artists, musicians, and people of high society. The word "Renaissance" means rebirth; it was used by artists and musicians to describe the recovery and application of the ancient learning and standards of Greece and Rome.
Rich Italian cities, such as Florence, Ferrara, and Venice, started the Renaissance Age. Because these cities were very wealthy, people started spending money on different things, such as painting, learning materials, and new systems of government. These were good times for most, and because of the ever-changing styles and attitudes toward culture and the church, music was the best buy for the money. This all gave rise to a new type of scholar, called the humanist.
Humanism was a subject concerned with humankind and culture. Painters and sculptors now used subjects from classical literature and mythology, such as characters from Homer's epic poems. Painters like Raphael and Leonardo da Vinci were more interested in realism and used linear perspective in creating their subjects. The nude body was a favorite theme of the age, whereas in the Middle Ages it had been an object of shame and concealment. Artists were no longer regarded as mere artisans, as they had been in the past, but for the first time emerged as independent thinkers.
The Catholic Church was far less powerful now than it had been in the Middle Ages. The church no longer monopolized learning or the minds of the common worshiper. Aristocrats and the upper-middle class now considered education a status symbol, and music was an integral part of that status. The invention of print accelerated the spread of learning.
Johann Gutenberg is credited with printing the first Bible during this period, which gave this excellent piece of literature a wider audience. The printing press made books much easier to come by, which made them cheaper. Now common people could afford a literary luxury which had once been accessible only to the rich. Therefore, literacy became more widespread, since common people had access to all forms of print, including music.
With the Renaissance came the idea of the universal man; every educated person was expected to be trained in music. As in the Middle Ages, musicians worked in churches, courts, and towns. The church remained an important patron of music, but musical activity gradually shifted to the courts. Kings, princes, and dukes competed for the finest composers. With this newfound fame, musicians enjoyed higher status and pay than ever before.
Composers were held in higher regard and held important positions throughout Europe. Many musicians became interested in politics in hopes that their status as a musician or composer would help to foster their careers. This was a sharp contrast, since most Renaissance composers and musicians were from the Low Countries and from families that were not of prominent nobility.
In the Renaissance, as in the Middle Ages, vocal music was more important than instrumental music. The humanistic interest in language influenced vocal music in a new way. As a result, an especially close relationship was created between words and music. Composers often used word painting, a musical representation of specific poetic images. Renaissance music sounded fuller than medieval music and had a more pleasing effect on the ear.
New emphasis was put on the bass line for a richer harmony. Choral music did not need instrumental accompaniment; the period has been called the golden age of unaccompanied "a cappella" choral music. This is where the present-day barbershop quartet originated. This new technique made Renaissance music both a pleasure and a challenge, for each singer had to maintain an individual rhythm. This must have been an innovative and refreshing change from the old monotone chanting choruses.
A new style of counterpoint was now emerging, in which bass voices were given greater independence. This took the average Mass to a different level of complexity and meaning. It shaped the two main forms of sacred music of the Renaissance, the motet and the Mass. The Mass was a polyphonic choral composition made up of five sections: Kyrie, Gloria, Credo, Sanctus, and Agnus Dei, while the motet was a shorter polyphonic setting of a sacred text. Josquin Desprez was a master of Renaissance music. His compositions strongly influenced others and were enthusiastically welcomed by music lovers.
Among the most important Renaissance composers was Giovanni Pierluigi da Palestrina, who devoted himself to music for the Catholic church. During the early 1500s, Protestants who sought to correct abuses within the church challenged it. This led to the Counter-Reformation, including the founding of the Jesuit order in 1540 and the calling of a church council that considered questions of organization within the church. The council discussed church music, which it felt had lost the purity and wholesomeness essential to a place of worship. Church music was attacked because it used secular tunes, noisy instruments, and theatrical singing, portraying the church as just a place for entertainment.
The council finally decreed that church music should be composed not to give empty pleasure to the ear, but to inspire religious contemplation. Palestrina’s Pope Marcellus Mass was long thought to have convinced the council that masses should be kept in catholic worship. Although it is now known that it did not play a role in the council decision, it does reflect the council’s desire for a clear projection of the sacred text.
During the Renaissance, secular vocal music became increasingly popular. This was music written for groups of solo voices with the accompaniment of instruments. Composers delighted in imitating natural sounds, such as the calls of birds and other animals. The madrigal, a piece for several solo voices set to a short poem, was an important form of secular vocal music. The madrigal originated in Italy around 1520, and madrigals were published by the thousands in sixteenth-century Italy.
Among the many Italian madrigalists were Luca Marenzio and Carlo Gesualdo, the prince of Venosa, who had his wife and her lover murdered after finding them in bed together. In 1588, the year of the defeat of the Spanish Armada, a volume of Italian madrigals was published in London. This triggered a spurt of madrigal writing by English composers, and for about thirty years there was a steady flow of English madrigals and other secular vocal music.
Traditionally, instrumentalists accompanied voices or played music intended for singing. During the sixteenth century, however, instrumental music became increasingly emancipated from vocal models. Renaissance musicians distinguished between loud outdoor instruments like trumpets and the shawm, a double-reeded ancestor of the oboe, and soft indoor instruments like the lute and the recorder (an early flute). Large courts might employ thirty instrumentalists of all types. On state occasions such as royal weddings, woodwinds, plucked and bowed strings, and keyboard instruments, all playing together, would entertain the guests.
In conclusion, the Renaissance gave way to a new generation of music, musicians, and composers. During the Renaissance, musicians were no longer regarded as merely skilled craftsmen, as they had been in the medieval past, but for the first time emerged as independent personalities. The Renaissance was a time of new awakening in Europe.
The Renaissance, which is also referred to as a rebirth, is the period that started in the 14th century and ended in the 17th century. The period was marked by increased interest and development in art, literature, politics, science, religion, and music.
The period was characterized by a surge of interest in classical learning and values. The Renaissance is usually taken as the bridge that linked the medieval era and modern civilization. Although the Renaissance resulted in great changes in many intellectual undertakings, as well as political and social upheaval, it is mostly remembered for its great contributions to art and music.
This period is marked by the discovery of new continents, great growth in commerce and invention, and the application of innovations such as paper, printing, gunpowder, and the marine compass. The era is regarded as a period of the revival of classical learning after a long span of cultural stagnation and decline (Brotton, 2006).
The Renaissance is believed to have started in Italy as early as the 14th century. Its emergence in Italy is believed to have been influenced by a number of factors, among them a favorable language situation. During this period, the Latin language was considered the language of scholars.
Due to its complexity, it was not a common language for many people at the time and thus not a very appropriate language for learning. Many people required a simpler language to understand the higher knowledge that was associated with the Renaissance. This resulted in the growth of national vernacular languages all over Europe, which greatly facilitated the spread of the ideas of the new scholars.
Italy was the first nation to produce great writers of the Renaissance period. England, on the other hand, developed Standard English, which was highly influential in the learning process during the Renaissance. Germany also took the opportunity to translate the Bible into the German language, which greatly helped many Germans to read and understand the Bible better (Guisepi, n.d.).
The great scientific growth and development of this period boosted the Renaissance greatly. The interactions of Christians and Arabs as they traded helped the Christians learn mathematics, chemistry, and experimental science from the Arabs, who were more knowledgeable in these fields.
The new knowledge they received from the Arabs enabled them to become more critical of issues. Equipped with scientific knowledge, people started to accept and apply only what seemed logical to them. Thus, this learning transformed the views of many people, who started to question some traditional beliefs about certain natural phenomena which they had learned from the church (Guisepi, n.d.).
Some scientific inventions, such as the invention of the art of printing, helped greatly in transmitting knowledge during the Renaissance. This is because printed materials were distributed widely and accessed easily by a large number of people.
This strategy was used intensively to educate people about the new and modern concepts that were related to modern civilization. Similarly, the invention of the magnetic compass helped in the discovery of new lands, such as the African continent. This, in turn, expanded European trading routes, which enabled merchants to make more profits.
The invention of gunpowder transformed politics in Europe greatly. Formerly, the Middle Ages had been characterized by a feudal order in Europe, where nobles were the ones summoned to provide military support to the king in times of crisis. With the invention of gunpowder, European politics changed greatly, as kings started to assume the political power that had been exercised by the nobles. This, in turn, promoted the establishment of centralized governments in many parts of Europe.
The growth in trade and commerce also greatly helped the Renaissance. New trading routes and cities emerged. Merchants were known to travel a great deal and thus were greatly instrumental in spreading the ideas of modern civilization as they traded in the newly established cities.
The Renaissance influenced Europe culturally, politically, and economically. It was very instrumental in the areas of scholarship, art, music, and architecture. The Renaissance was associated with revisiting the knowledge of Greece and Rome in order to rediscover it and apply it in a contemporary context.
This facilitated the establishment of many universities in many parts of Europe, where many politicians were educated in classical knowledge (Guicciardini). The impact of the Renaissance on art was great. Through Humanism, which focused on humanity, the modern concepts that were learned enabled artists to break from the church-dictated art of the Middle Ages and embrace a secular worldview.
In addition, architecture developed greatly, enabling the traditional architecture of the Middle Ages to be replaced by more modern, human-centric architecture that was widely embraced all over Europe. Similarly, the Renaissance resulted in enhanced growth and development in trade and commerce, which resulted in the emergence of banking facilities in many parts of Europe.
Enhanced trade, in turn, resulted in the emergence of urban centers such as Florence and Venice, cities that eventually grew to become empires.
Other European nations such as England and Spain followed suit and established their own cities. The establishment of cities resulted in a great change in European politics, which necessitated the idea of diplomacy. Many people in Europe, and especially in Italy, studied diplomacy during the Renaissance period.
It was from Italy that the concept of permanent, resident ambassadors originated during the Renaissance period. The concept of diplomacy has enabled Italy to maintain very important international relations to this day (Craig, Graham, Kagan, Ozment, & Turner, 2009).
The Protestant Reformation was a European Christian reform movement that resulted in the establishment of Protestantism as a constituent part of contemporary Christianity. The movement was initiated as a protest against certain Catholic rituals, doctrines, and ecclesiastical structures of the Catholic Church.
The protest prompted a Counter-Reformation movement, which was headed by the Jesuit order. The Counter-Reformation resulted in the reclamation of many parts of Europe, including Poland and parts of England, back to the Catholic faith.
One reason the Reformation began in Germany is that Germany was the first nation to translate the Bible into the German language, which enabled Germans to understand the Bible more effectively when they read it. A better understanding of the Bible prompted German Christians to start questioning some Catholic rituals and doctrines which they considered to contradict Christian teaching as expressed in the Bible.
The scientific discoveries that accompanied the Renaissance enlightened people greatly. One great discovery is the Copernican theory, which suggested that the earth and the other planets revolved around a central sun. This discovery faced a lot of resistance from many scholars and also from theologians, who contested it because they claimed it contradicted what the Bible stated.
Renaissance, French for rebirth, perfectly describes the intellectual and economic changes that occurred in Europe from the fourteenth to the sixteenth century. This intellectual movement developed in Italy, more specifically Florence. It is meant to imply a rebirth of the way of life of antiquity or a new set of attitudes that use Ancient Rome and Greece as models.
During this period in Italy, the breakdown of the Roman Empire was taking place, and Northern Italy was being overrun from the East. There was a continuation of urban life on the Italian Peninsula, and contact and trade existed with countries outside of Europe. Politically, Italy was not as decentralized as the rest of Europe. The papacy behaved more like a state than a church: it controlled land, fielded armies, and played the role of a political leader. The major city-states during the Renaissance were Naples, Florence, Milan, Venice, and the Papal States.
The Renaissance brought us many discoveries. Tools were developed for exploration. One of these tools was the astrolabe, a portable device used by sailors to help them find their way. The astrolabe measured the height of the stars and sun above the horizon, which helped in finding latitude, an important element of navigation. The magnetic compass was also improved during the Renaissance. Maps became more reliable as cartographers began to incorporate the information of travelers and explorers into their work, and shipbuilding improved. Large ships called galleons became common. These ships were powered by sail rather than by men using oars. Although navigation was still not a precise science, sailors were able to go farther than they had before, returning with imported goods from the East. The slave trade began to flourish, and Europeans discovered more of the world.
In 1445, Gutenberg invented the printing press, forever changing the lives of the people of Europe and eventually people around the world. Before the printing press, books were hand-written. By 1500, twenty million books had been printed in Europe. Standard editions of texts like the Bible and other religious works became available, and books could be purchased by urban dwellers. Education expanded as more people could afford to buy books.
The ability to draw a contrast between the past and the present marked a new age. The Renaissance led to a view that society had come out of the darkness. Nature had previously been viewed negatively; during the Renaissance, nature was perceived as something beautiful and worth studying. Plants were studied, which resulted in botanical gardens, cultivation, and breeding. The Renaissance also brought about the systematic collection of animals, including exotic ones, the collection of people (slavery), and the use of studs in breeding horses.
Naturalism was also evident in the arts. Artists tried to portray life as it was. In the past, drama had often included personified moralities; a more naturalistic portrayal of humans began in the Renaissance, and drama included jokes and obscenities: what you saw on stage was what occurred in life. Literature began to use the vernacular, or the popular spoken language, and less was written in Latin. Stage and canvas gave an illusion of reality, with a more realistic use of space.
Rationalism was also very evident. The application of reason expanded to a wide range of human affairs. Space was broken down mathematically in architecture and painting, intended to give an illusion of reality. Double-entry bookkeeping came into use. During the Middle Ages, wealth had been held in land; in the Renaissance, currency became standard. Profits could be made in buying and selling, and an economic process was established.
Individualism also played a role in the Renaissance. Biographies became popular; the lives of humans were considered worthy of artistic recreation. There was an emergence of celebrities: rulers, artists, and other well-known people. Artists began to sign their works. If a child showed promise in a particular area, he was sent to learn and study the style of a master.
Humanism emerged, placing an emphasis on classical authors and classical studies, including grammar, rhetoric, moral philosophy, poetry, and history. Humanism taught that no man was above criticism and that in order to truly learn one had to return to the original text. People studied and returned to their countries with new thinking and teaching methods.
Renaissance society believed that the world could be studied, molded, and changed. People could create a world that they truly wanted. The people of the Renaissance no longer wanted only the recognition of God for their deeds. They now believed that deeds should bring them wealth and reputation.
The purpose of this chapter is to help you understand the diversity of cultures.
The various components of culture are presented in the chapter.
Each component is described in terms of its relevance to the exam.
The components are detailed with examples both domestic and international.
There are discussions on the spatial aspects of cultural identity, cultural change, adaptation, globalization, and conflicts based on cultural differences.
What is culture? The author doesn't have a good answer to this question despite studying cultural geography for several years.
You paid good money for this book, so you deserve better.
Culture is often defined as the shared heritage of a group of people.
However, this definition of culture is too abstract to be of much use.
Instead of trying to tell you what culture is, we will give you the many components of the cultural landscape.
To prepare for the AP Human Geography Exam, it is more effective to examine the categories of cultural expression than to try to define culture in a few sentences.
The human landscape expresses some form of culture.
Trying to understand culture can be difficult.
We need to understand how culture is found on the cultural landscape in order to get a better grip on it.
The cultural landscape appears in the form of signs and symbols in the world around us; in general terms, customs are imprinted in different ways on the several components of culture.
The components of culture are expressed in a number of ways.
These historical influences can be as simple as the language used on a street sign or as complex as the cooking methods and spices used in Louisiana Cajun food.
This chapter will detail each component of culture and give examples that will help you answer cultural geography questions on the exam.
The cultural landscape can be seen as a form of text that can be read.
We can read the signs and symbols that we see in culture and understand the heritage of that place.
It is important to know the history of the place to understand what you are seeing.
Most things in the cultural landscape are the product of cultural synthesis, or syncretism, the blend of two or more cultural influences.
Country music in the United States and Canada is an example of cultural synthesis.
Folk music traditions such as bluegrass are strongly tied to American culture.
The Scots-Irish, the German, and African immigrants and slaves in the American South and Appalachia influenced the beginnings of country music.
The mixture of musical sounds, vocabulary, rhythms, and instruments from these four culture groups came together to form a new style of music, as well as later developing into other American musical styles like jazz, the blues, and rock and roll.
It's important to understand the roots of the things we see in the cultural landscape, whether it's original to a single culture or the product of cultural synthesis.
The many components come together to identify and define a single culture group.
Not all of the components will be questioned on the exam.
Detailed geographical examples from Anglo-America and internationally will be given to help broaden your perspective on the subject.
There is a cultural imprint on the landscape with different artistic forms.
Art is not a subject of the AP Human Geography Exam.
If asked a general question about cultural landscape, be able to express art's importance as a source of local pride.
You need to be aware of a number of architectural styles if you want to pass the exam.
Religious buildings and housing types are relevant.
These questions are likely to fall in the multiple choice section of the exam and will have a picture or diagram to look at.
A lot of architectural forms are the product of cultural influence.
When new buildings are built, there is a lot of news about innovative designs.
Some of the forms of traditional architecture have been used for centuries.
As in the art world, architects have a distinct modern period of architecture which is different from new forms.
When describing a building type, be specific.
The 1950s homes of Frank Lloyd Wright and the rectangular steel and glass skyscrapers of the 1970s and 1980s are examples of modern architecture that was developed during the 20th century.
Contemporary architecture, by contrast, is more organic.
Postmodern means that the design abandons the use of blocky rectilinear shapes in favor of wavy, crystalline, or bending shapes in the form of the home or building.
Green energy technologies, recycled materials, and metal sheeting can be used in contemporary architecture.
The Guggenheim Museum in Bilbao, Spain, and the Walt Disney Theater in Los Angeles are examples of this.
One of two patterns can be expressed in traditional architecture.
Modern architecture is combined with traditional materials like stone, brick, steel, and glass in a form of traditional architecture that is used in new commercial buildings.
Folk house designs from different regions of the country are used in housing.
A hybrid Swiss chalet-Williamsburg-style home covered in stucco with a clay tile roof is one example of a new home incorporating more than one element of folk house design.
There are some basic traditional housing style forms that could appear on the exam.
Small one-story pitched roof Cape Cod style or the irregular roof Saltbox with one long pitched roof in front and a sort of low-angle roof in back can be found in New England.
These homes can be two or three stories tall and connected to one another as row houses.
Classical Greek and Roman designs are found around windows and rooflines.
As stand-alone buildings, these are symmetrical homes with central doorways and equal numbers of windows on each side of the house.
Simple rectangular I-houses have a central door with one window on each side of the home's front and three symmetrical windows on the second floor.
The rectangle shape and symmetry were lost as the I-house style changed.
I-houses used to have additions on the back or side of the house.
Fireplaces on each end of the house and an even-pitched roof are part of the I-house.
The loss of form as the I-house moved across the Appalachian Mountains to the Midwest and across the Great Lakes to the Prairie Provinces is an example of relocation diffusion.
Religious architecture is another area that the AP Human Geography Exam tests.
The major world religious groups have their own architectural forms for places of worship.
The front of a traditional Christian house of worship usually has a central steeple or two high bell towers.
The bell towers are found in larger churches and cathedrals.
St. Peter's in the Vatican and St. Paul's Cathedral in London have domes similar to that of the U.S. Capitol building.
There is a sculpture of Darth Vader on the west tower of the National Cathedral.
Hindu temples and shrines are usually rectangular in shape and feature one or more short towers of carved stone.
The towers have carvings of the heads and faces of deities.
The most famous example of this design is the temple complex in Cambodia.
Another famous Hindu temple is in Varanasi, India.
Depending on which Buddhist tradition is followed in the region, temples and shrines vary.
In Nepal and Tibet, a temple can have a dome or tower with a pair of eyes.
In East Asia, the tower-style pagoda has several levels, each of which has a winged roof.
One- or two-story buildings with large, curved, winged roofs are found in temples and shrines in China and Shinto Japan.
The Temple of the Sun and Moon in Beijing is guarded by large lion statues.
Temples in Southeast Asia tend to have towers with thin pointed spires that point out at an angle.
Many mosques have central domes.
One or more minarets are narrow towers that are pointed at the top of the mosque.
The holiest place in Islam is the Kaaba in Mecca, a large black cube standing at the center of an open-air mosque.
The third holiest place in Islam is the Al-Aqsa mosque compound in Jerusalem; its most recognizable building, the Dome of the Rock, is an eight-sided structure with a high central dome topped by a thin spire.
The Hagia Sophia in Istanbul, a former Eastern Orthodox cathedral, is a large mosque with a broad central dome and four minarets.
The main prayer area of most mosques is positioned toward Mecca.
There isn't a common architectural design style for synagogues.
The Western Wall of the former Temple of Solomon is the holiest place in Judaism.
The old foundation walls feature large rectangular stone blocks where Jews pray and place written prayers in the cracks between the blocks.
The common tongue of the country that we live in is what we most often think about when we think about language.
The United States federal government does not have an official language.
Some states have English-only laws.
Since most of the United States is monolingual, these affect education standards and state government publications.
California is one of the states that accepts that they have a large multilingual immigrant population and has made provisions to provide some services in multiple languages.
English and French are the official languages of Canada.
Canada is bilingual.
The Netherlands is an example of a multilingual society.
Students are required to learn English, French, and German in school, as well as learning their native Dutch.
It's common for citizens in South Africa to be able to speak English, Afrikaans, and one or more African languages.
Depending on where you are in a larger linguistic region, the way a common language is spoken can sound different.
There is a distinct "strain" of English spoken in Australia with a variety of different words and sounds.
When traveling to the United States from New England to the American South, dialect can change from region to region.
dialects within Great Britain are shaped by national heritage.
English is not the same in Scotland, Wales, Ireland, Cornwall, or the Isle of Man as it is in England.
The degree of Celtic influence and the degree to which Anglo-Saxon invaders brought their Germanic language with them contributed to the variations.
Received Pronunciation is what some refer to as the King's English or "posh" English.
By contrast, Cockney, the dialect of the working-class areas of the East London docklands and surrounding neighborhoods, is not posh.
The formation of Australian English is thought to have been strongly influenced by Cockney.
Code phrases are used to describe everyday situations in cockney rhyming slang.
The word "going up the apples" means going up the stairs, and the word "apples and pears" means going up the stairs.
Such slang is similar to pidgin English dialects, which are simplified forms of the language built around key vocabulary words.
This is heard in the spoken English of immigrants from India.
Over time, pidgin language forms can evolve into their own language groups.
French Creole is a language spoken in Haiti.
Many of the French overseas territories have their own forms of patois, like the one spoken in the islands of Martinique or Reunion.
Pidgin, Creole, and patois are syncretic language forms that integrate both colonial and indigenous language forms.
Historically, French was used to bridge the linguistic gap between people of different national heritages. The term lingua franca, meaning a bridge language, was coined to describe this utility. France had a long history as a center of learning and diplomacy, there were many French colonies around the world, and Great Britain itself once held territorial claims in France.
English is accepted as the global lingua franca as different forms of popular culture media, the Internet, and the business world are dominated by the English language.
English is the required language of airline pilots and air traffic controllers around the world. This is done mainly for safety reasons, but it also demonstrates the international business dominance of the United States and Britain in the post-World War II era.
The world's languages are grouped into a few major language families, each representing early or prehistoric language roots.
Each language family can be broken into groups.
Larger language families can be broken down into subfamilies and then into smaller language groups. Hindi, for example, belongs to the Indic group, along with Bengali and Nepali.
Genetic evidence of prehistoric migrations from the Indian subcontinent into Europe has led to the creation of the Indo-European concept.
These early immigrants brought their native language with them, which led to the creation of the modern European languages.
There are two different theories about the origin of European language.
The group of migrants from the Indian subcontinent and their language were concentrated in the peninsula that makes up most of present-day Turkey, according to the Anatolian theory.
A large migration crossed the Hellespont into continental Europe and spread out into a relatively unpopulated region.
The same group of migrants from the Indian subcontinent made their way into Central Asia, and then migrated across the Eurasian steppe into central and Western Europe, taking their language with them, according to the Kurgan theory.
It will be difficult to prove either theory without significant archaeological discovery or extensive genetic research.
Most Europeans are descended from populations that lived in the Indian subcontinent.
There are many theories as to why this light-skinned population, similar to many light-skinned Aryan Indians, pulled up their roots and moved west.
They took their language and genes with them.
People with red hair can still be found in northern India and Pakistan because of the Celtic trait for red hair.
Music is a form of nonmaterial culture that has geographic roots.
Because the geography of musical culture can appear on the AP exam, you should know a few things about it. Folk music is music that is original to a culture, and folk music traditions often use instruments specific to that culture.
Folk song lyrics often include cultural stories and religious traditions.
It is usually "unplugged," performed without electronic instruments.
Popular culture creates a global flow of pop music that often drowns out local folk music traditions from radio and other media.
When folk traditions accept the influence of popular music, this indicates a form of acculturation. American folk music and country music have roots in Scots-Irish, German, and African culture.
The fiddle, a variant of the European violin, and the banjo--an instrument of African origin--are folk traditions in Appalachia.
The most popular folk music type in the region is bluegrass, which originated in Kentucky, known as the "Bluegrass State."
The lead instruments in bluegrass are the fiddle and banjo.
The region spans from Mississippi to the Maritime provinces and has a number of other folk styles.
Rock and roll and bluegrass have both influenced contemporary country music.
In country music, unlike bluegrass, the guitar is the lead instrument. In country and Western music, the guitar links back to the Spanish Americans of colonial Mexico and the American Southwest.
Kentucky is one of the historic development cores of country music.
Folk musicians from other cultures have contributed to many of the recordings sold in the United States and Canada as World Music.
The top-selling group is the Gypsy Kings.
The band is from France, but their families left Spain decades earlier due to the persecution of gypsies by the Franco-led fascist government. The Gypsy Kings play from a variety of folk traditions and languages, ranging from their native Roma to Spanish flamenco, as well as Basque and Catalan folk songs, which they have popularized.
Celtic folk music traditions are found wherever Celts or their migrant descendants live: among the Irish, Welsh, Scots, Manx (from the Isle of Man), and Spanish Galicians.
Irish Celtic music has a large following.
The fiddle, flute, tin whistle, harp, concertina accordion, and "Uilleann" or Irish pipes are just some of the instruments used in the traditional music.
Nowadays, Pan-Celtic music draws from more than one Celtic region and uses other non-Celtic instruments like guitar, banjo, and bouzouki.
Listen to country or bluegrass and see if you can pick out the Celtic Scots-Irish folk musical influence. Film and television are other cultural forms that leave an imprint on the land.
Film and TV are not subjects of the AP Human Geography Exam.
If you are asked a general question about cultural landscape, you should be able to express film and television's importance.
The media forms are major conduits for cultural globalization.
Food is a form of culture that varies in many ways from place to place.
Continental cuisine originated from mainland Europe in the 1800s.
In haute cuisine, a main meat course is served with a flour, cream, or wine-based sauce and side dishes of vegetables and potato.
Duck a l'orange, filet mignon, and chocolate mousse are haute cuisine dishes that are popular in North America.
This style of cooking also includes regional dishes such as escargots (snails in garlic butter) from Provence in Southern France and coq au vin (rooster in red wine sauce) from a number of regions.
France, Spain, and Italy are among the countries with their own nouvelle cuisine, and although the style remains strong in France, the lighter, fresher fare of California-style cuisine has become very popular around the world. Gone are the heavy sauces, replaced by lighter preparations built on fresh ingredients. These have been popularized by celebrity chefs such as the Austrian-born Californian Wolfgang Puck, who uses a number of Mediterranean agricultural products in his dishes. (And you thought Arnold was the only Austrian in Malibu.)
Wolfgang Puck is a proponent of fusion cuisine, in which more than one global tradition is incorporated in dishes.
One of the leaders of the fusion movement that integrates dishes and flavors from Japan, China, Southeast Asia, Polynesia, and Europe is a Japanese-American celebrity chef.
Hawaii's location makes it a place of heavy immigration from these parts of the world, where cultural synthesis in food and other cultural components such as music takes place.
All of these forms are based on folk food dishes. Sushi, for example, is a folk food from Japan. The original folk form, nigiri, is a small pat of rice topped with a slice of fish.
The contemporary form of sushi has become stylized with special rolls, makizushi or just maki and a variety of ingredients like raw, smoked, or fried seafood and fresh vegetables.
The North African folk food tradition uses a number of regional ingredients from the Mediterranean and North Africa. Chicken and lamb are popular in main dishes, since beef cattle are rare in North Africa and pork is banned by Islam. Permitted meats must be slaughtered according to religious rules if they are to be eaten by Muslims. Root vegetables and chickpeas, grown in the high Atlas Mountains, are often served with meat. Food is often cooked with a variety of spices in a clay pot called a tajine.
Chickpeas can be ground and mixed with a sesame seed paste called tahini, along with olive oil, salt, and lemon to make hummus, which is increasingly popular in Europe and Anglo-America.
Clothing styles also leave a cultural imprint on the landscape. Like art, clothing is not specifically tested on the AP Human Geography Exam, but because the way people dress is an important sign of their ethnicity, be able to express clothing's importance if asked a general question. Clothing styles, like film and TV, are conduits for cultural globalization.
Different types of social interaction are culturally constructed, meaning they were created by a specific culture group.
A basic example of culturally different social interaction is physical greetings.
The bow still holds as the primary formal greeting in Japan, whereas in the West a handshake is a common greeting.
The forehead and nose are pressed together in the traditional New Zealand greeting.
Non-touching cheek kissing is an example.
Upon greeting, people kiss four times in Paris, France, twice on each side; three times in Serbia and the Netherlands, right side first; and twice in Spain, Austria, and Scandinavia, although the number of kisses there can vary.
Personal space varies from country to country.
In Peru, for example, it is not considered rude for someone, even a stranger, to take the empty seat right next to you, which can be uncomfortable if you like a large personal space bubble.
Next time you go to a movie theatre or a bus, think about this.
Religions are as common as languages, according to some social scientists.
Specific religions are drawn from larger global groups.
Religions can be categorized by their expanse.
Universalizing religions allow followers from all ethnicities worldwide, as opposed to ethnic religions which only allow followers from a specific culture group.
Scriptures are writings said to be of divine origin.
Religion, worship, and ethical behavior in society are governed by formal doctrine.
Religions can also be categorized by their willingness to compromise. Compromising religions are able to reform themselves or integrate other beliefs into their practices.
The fundamentalists have little interest in compromising their beliefs or doctrine and strictly adhere to scriptural dictates.
The worship practices and morality tales of these groups define a right and ethical way to live.
In Latin, animus means spirit.
Animists believe that items in nature can have spiritual being, including landforms, animals, and trees.
Hinduism is one of the oldest major religions; its polytheistic denominations had spread throughout Asia by the 1200s C.E.
There are many levels of existence, the highest being nirvana, where someone can achieve total consciousness or enlightenment.
One's soul is reincarnated many times.
The outcome of reincarnation depends on the balance of good and bad in life.
The story of Abraham is a morality tale of respect for the will of God or Allah, similar to the descriptions of the Earth's origins in each of these religions.
Each is a belief system with a single supreme being.
Saints, angels, and archangels can be sub-deities.
Prophecy predicts the coming or return of a messianic figure who defeats the forces of satanic evil for the souls of followers.
Major religious groups can be broken down into these traditions.
There is a quick and basic guide to the world's major religious groups.
A number of questions on the exam rely on your knowledge of religious geography, so have a firm grasp of the major world religions that are most likely to show up on the exam.
There are many animist belief systems.
Many animist systems are based on belief in a supreme or Great Spirit.
shamans, sometimes referred to as "medicine men", who are practitioners that lead worship and religious rites, provide spiritual interpretation.
Denominations depend on tribal following.
Prayers or appeals to the sun, moon, animal spirits, and weather are important in most practices.
Different parts of the world are controlled by multiple deities.
shamanism is part of the system of worship.
Doctrine is dependent upon the community.
Common practices attempt to bring worshippers in contact with deities and family ancestors in the spiritual world through different ceremonies.
Denominations of Voodoun vary with the degree of influence from parallel Christian worship.
In addition to India, there are significant Hindu populations in Indonesia, London, Manchester, and other parts of the former British Empire.
The practice of temple-based worship and festivals to praise supreme gods include Vishnu, Shiva, Krishna, and animal forms of Ganesha and Naga.
The historical moral traditions and practices are depicted in several writings.
Different denominations are often based on cults to deities.
The reincarnation principle states that people are born into a social level where they stay for the rest of their lives.
The historical Hindu temple complex at Angkor Wat in Cambodia is one of the locations where a remnant population is found today.
Jainism, practiced mainly in Western India, has several sacred texts known as Agamas.
The Tattvartha Sutra is often cited.
Every living soul is considered potentially divine, which leads to complete respect for all other animal life. Jains follow a strict vegetarian diet, and some wear face masks to avoid accidentally inhaling and harming insects.
There are three main groups that differ in practice and worship.
During the colonial period, Jain communities relocated to Great Britain.
Mohandas Gandhi's civil rights and peace activism were influenced by his mother's Jainism.
Buddhism draws on early Hindu texts and the life and teachings of Siddhartha; it arose on the Gangetic Plain of North Central India and spread throughout Asia. Followers seek enlightenment by understanding the effects of suffering on human life and following a "Middle Way."
The Hindu caste system was rejected by Buddhism because it was oppressive.
Tibetan Buddhists accept westerners into their community but are uncompromising in their beliefs.
Theravada tends to be less universalizing than Mahayana Buddhism.
Zen, Confucianism, Shinto, and Taoism are some of the different forms of philosophy in the Eastern tradition.
There are several examples of Buddhism moving across physical barriers, including Tibetan Buddhism across the Himalayas and the Tarim Basin desert to Siberia and Mongolia, Theravada from Sri Lanka across the Bay of Bengal to Southeast Asia, and Mahayana from the Himalayas to Eastern China.
There are several levels of existence, from the lowest animal forms to human forms and then higher animal forms, which are considered sacred, such as cattle, elephants, and snakes, according to the Hindu scriptures.
The levels are referred to as the chakras.
If the soul has a positive karmic balance, it can be elevated to a higher level.
The balance of good and bad in a person's life is called karma.
When someone is born into a caste, he stays there for the rest of his life, no matter how rich or poor he becomes.
The lowest human form, dalits, are considered less holy due to their distance from nirvana, while the highest human form, the Brahmans, are considered the priesthood of Hindus due to their relatively close position to the enlightened.
Brahmans are responsible for temples and for leading religious worship, and some are selected as high government officials.
Some people may live as monks, who might live as hermits meditating, or as ascetics who sit on sidewalks and perform prayers for those who provide their food donations.
Despite their political power, hereditary princes and kings still bow to the Brahmans.
The vaisya caste is made up of doctors, lawyers, accountants, and middle-ranking officials in the government.
The caste is broken up into several hundred sub-castes, or jati, including potters, glassworkers, and jewelers.
The "untouchables," a name derived from their low position in the system, are considered "unholy" by higher castes.
The lowest-caste members of the community were the only ones who would do work such as cleaning the train stations and sewers. Since India gained independence, the government has worked to eliminate the caste structure in Indian society.
Compulsory elementary education, opening public trade schools, high schools, and universities to large numbers of lower-caste members who had been discriminated against in the past are some of the programs that have been put in place to elevate the social and political standing of the lower castes.
Caste differences in Indian cities have become less and less pronounced.
Marriage is still one area in which there is an emphasis on caste, as most traditional parents want their children to marry within their caste.
European Ashkenazi Jews, Sephardic Jews from North Africa, and the Middle East and Native Israelis are some of the larger groups.
Judaism follows its own Hebrew calendar. The Torah is its primary scripture, and Jewish populations are found in Israel and in communities across Europe, the United States, and Canada, most notably the metropolitan area around New York City, as well as other urban areas worldwide such as London, Antwerp, Paris, Los Angeles, Toronto, and Cleveland.
The Jewish Diaspora began in 70 C.E. Denominations include Hassidic, Orthodox, Conservative, Reform, and Reconstructionist Judaism.
Large-scale movement of Jews from Europe to Israel followed the Nazi Holocaust, and further migration from the Middle East and North Africa followed conflicts there in the 1950s and 1960s.
Christianity's following began around 30 C.E., but the faith was not officially recognized by the Roman Empire until the 4th century C.E.
The Bible is divided into an Old Testament, a modification of the Torah, and a New Testament, which depicts the messianic life of Jesus of Nazareth.
Worship typically involves communion practices.
Denominations can be divided into further denominations.
Christianity spread from the Mediterranean to large cities such as Rome, Constantinople, and Alexandria.
The religion was spread to other towns and cities by missionaries.
The Islamic realm spans from West Africa east to Indonesia and the Philippine island of Mindanao, and north to Chechnya and Western China. All sects emphasize at least the Five Pillars of Islam, if not more.
The Sunni and Shia sects have a number of denominations within them.
Shiites emphasize the need for religious leaders to have a direct blood line back to Muhammad, which leads to differences between the two major sects.
Islam spread very quickly from Mecca.
Islam had spread across all of the Middle East and North Africa by 700 C.E.
Expansion into Europe and Asia continued through the 1600s.
By the 1200s, Islam had reached Indonesia through relocation diffusion.
The senior positions of governance are held by religious leaders in a few countries in the Middle East.
Iran has a supreme religious council that can overrule the elected parliament and president.
Some Middle-Eastern states are republics or monarchies that follow Islamic law based on the Koran and Haddith.
A few absolute monarchies have all-powerful kings and large aristocracies who enforce religious standards on the populace.
Other states in the region are secular, meaning they are not directly governed in a religious manner and instead use French or British legal tradition and government structure.
Tension between the secular government and religious activists can cause trouble in these states, even though the influence of religion on government policy remains.
A syncretic religion combines core beliefs from two or more other religions. The Druze incorporate Islamic and Christian principles, while the Sikhs combine Hindu and Islamic principles.
Sikhs do not agree with the idea of a caste-based social hierarchy.
The basic moral code for all followers of the Judeo-Christian system is based on the Ten Commandments from the Book of Exodus.
Five pillars are emphasized in the Koran in order to guide followers with a moral system.
When the call to prayer is heard, all work stops and prayer mats are unrolled. Worshippers pray facing Mecca; the direction of prayer is the azimuth, the angle of direction from the worshipper's location on Earth toward Mecca.
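As an aside, the azimuth toward Mecca can be found with the standard great-circle initial-bearing formula; the short Python sketch below is not part of the AP material, and the New York coordinates and the approximate location of Mecca are assumed example values only.

import math

def bearing_to_mecca(lat_deg, lon_deg):
    # Initial great-circle bearing (azimuth) from (lat_deg, lon_deg) toward Mecca.
    mecca_lat, mecca_lon = 21.42, 39.83   # approximate coordinates of Mecca
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    lat2, lon2 = math.radians(mecca_lat), math.radians(mecca_lon)
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360) % 360   # degrees clockwise from north

print(round(bearing_to_mecca(40.71, -74.01)))   # from New York City, roughly 58 degrees (northeast)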
There is only one god, Allah, and Muhammad is his prophet; the creed is a statement of monotheism. Prior to Muhammad's revelation, many of the people in the region followed polytheistic tribal religions.
It is the duty of all Muslims to care for and donate to the poor and sick within their communities.
Large charitable foundations in the Islamic world help alleviate poverty, extend health care, and educate children.
Many of these international charities have come under increased scrutiny by the U.S. government.
During the month of Ramadan, Muslims observe a period of spiritual cleansing and repentance for past sins, fasting during daylight hours. Because the Islamic calendar is lunar, the month of Ramadan can fall in a wide range of months on our Gregorian calendar.
A Muslim who is able must make at least one pilgrimage to Mecca.
"Haji" is an honorific name for those who make the journey.
The geography of religion can be broken down by region.
In Chapter 6 there is a section about religious-based conflict.
In chapter 9 of Know the Models there is a discussion on religion and ethnicity in urban American neighborhoods.
The collected stories, spoken-word histories, and writings that are specific to a culture and tell the societal histories and morality tales that define a culture's ethical foundations are called folklore.
Similar to religious scriptures, the morality tales dictate culturally constructed rules of behavior.
Aesop's fables are an example of folklore from the classical Greeks.
Each fable taught a lesson about proper behavior.
Many American folklore figures, such as Paul Bunyan, John Henry, and Mike Fink, mix a bit of the real with the mythical to tell tales of a strong work ethic, a product of Puritan Protestantism. Folklore can also distort the reality of historical figures' lives, as with the myth of George Washington chopping down his father's cherry tree.
Hollywood depictions of American frontiersmen have furthered fictions and half-truths about the long-dead historical persons.
Folklore has been built around the life and travels of Christopher Columbus in many parts of the Americas.
The folklore varies from country to country.
Evidence shows that Norse settlements were established on the northeastern tip of Newfoundland around 1000 C.E. These were abandoned about 100 years later, when a global climate cooling event resulted in crop failures. The settlers likely left for other Norse settlements elsewhere.
The mainland United States was never visited by Columbus.
He traveled to Puerto Rico and the mainland of South and Central America.
On his first voyage he took three ships: the Niña, the Pinta, and the Santa Maria. You might not know that the Santa Maria struck a reef off the northern coast of Haiti, the island Columbus named Hispaniola. Columbus was forced to leave about 40 men behind in a small colony, named for the Spanish queen, to establish trade with the Indians.
Not a single sign of the 40 men was found at the village after Columbus returned.
Columbus and the other crew that came back to rescue their friends were devastated.
Columbus was from Genoa, a coastal city in northern Italy.
Columbus tried to get funding for his expedition to the Indies, but was turned down by a number of potential donors because he wanted to sail across the Atlantic instead of around Africa.
Spain was nearly bankrupt from years of war trying to remove the Muslim Moors from the southern Iberian Peninsula and needed the new trade route to raise money.
The western route to land had been rumored for a long time.
The Portuguese were interested in protecting their African trade route and would likely fight to protect it.
The Spanish did not have access to the Portuguese sailing charts and maps.
Columbus was given the title of admiral of the seas but did not receive the promised 10 percent of treasure from the New World.
His sons were able to get money from the Spanish government after his death.
Columbus was blind when he died in Spain at the age of 54.
This may seem like a purely historical example, but the Columbus myth and the settlement of Latin America are areas of extensive research by geographers, and you need to be prepared for this kind of thing on the exam.
For an accurate description of the Columbus voyages and the first Spanish settlement in the New World, read Carl Sauer's Northern Mists.
Land survey techniques can show something about the cultural landscape.
Property can say something about culture through its impact on the landscape.
There are cultural differences in agriculture.
Farming can be culturally specific and influenced by technology.
It is, after all, a culture of agriculture.
There are a variety of farming practices seen in the First World.
Modern, mechanized farming is replacing traditional farming practices in the Third World.
There are still some culturally specific and low-tech farming practices in the First World.
Maple syrup production can be found in Vermont.
You can find blue sheep's milk cheese in France or Parmigiano-Reggiano, a hard cow's milk cheese, in Italy. These high-value appellations denote culturally specific farm products that command a premium.
Champagne and Vermont Maple Syrup are protected by international trade laws.
Chapter 7 contains information about appellations and specialized agriculture.
The distribution of living space is an indicator of culture in rural and tribal areas.
Many cultural traditions impose rules on living space because of singular clan relations, extended family units with more than one clan, or whole tribal communities with multiple clans living in one shared residential area.
Chapter 9 describes the distribution of urban land use.
In Europe, much of Latin America, and Anglo-America east of central Ohio and Ontario, land was divided using natural landscape features under a system of metes and bounds developed centuries earlier. Metes and bounds reflect the European feudalist political economy: the irregular property boundaries began as territorial claims of large landholdings.
Landholdings became subdivided through partial land sales or nationwide land reform efforts.
Quebec and Louisiana have long-lot patterns, in which a narrow frontage along a road or waterway is backed by a long lot behind it.
Land survey in the United States and Canada used a rectilinear township and range survey system based on lines of latitude and longitude, which was transferred from sea navigation in the 1830s.
The geometric shape of many western U.S. states and Canadian provinces was produced by this.
It shows the impact of technology on the cultural landscape.
Cultural geography includes how people are identified and who they are.
The several dimensions of identity that may appear on the exam are examined in this section.
In normal conversation, the term nation is used lightly.
There is a specific definition for the term for cultural and political geographers.
A nation is a population represented by a single culture.
Nation is another term for a culture group.
The term ethnicity embodies a common identity, a mix of genetic heritage and political loyalties.
Ethnic groups often claim a single identifiable heritage, which all members tend to identify with as a common social bond.
Several ethnicities can be found within the same linguistic region, as with the English language; likewise, French Canadians, South Asian Indians, and Belgians each share a language with other ethnic groups.
A state in its most simple form is a population represented by a single government.
Some nations, however, have no state of their own; this is the case with our previous music example, the Romani peoples of Europe. The Kurds of northern Iraq, southeastern Turkey, northeastern Syria, and western Iran are another group with no official government of their own.
The Kurds are trying to establish a Kurdistan in northern Iraq and northeastern Syria.
The relationship between the United States and Turkey prevents the Kurds from being recognized as an independent state.
In the process of migration, ethnicity can be changed.
Italian-Americans and Irish-Canadians are examples of such modified ethnicities among migrant groups in the United States and Canada.
A modified ethnicity can be evidence of acculturation by immigrants to their new home country.
There is a section on acculturation in this chapter.
Ethnicity and race can be confused.
Race refers to the physical characteristics of a common genetic heritage.
The concept of race was developed by anthropologists.
Skin color, bone structure, and the shape of the hair shafts were some of the variables used to categorize racial groups.
Over time these formerly scientific ideas were used as the basis for racism within society and have led to oppression, suffering, and war throughout the world. Three large, distinct racial groups emerged from this research: the Caucasian, with a light to dark skin tone, medium body type, and wavy hair shaft; the Asian, with a tan or yellowish skin tone, small body structure, and straight hair shaft; and the African, with a dark skin tone and a curly hair shaft. Under this classification, Native Americans were grouped with Asians because they shared many of the same physical features as Mongolian populations.
A body of theory connecting Native Americans to origins in Asia has been added to by archeological and genetic research.
During the prehistoric era, the Caucasus Mountains region is believed to have been a major migration route from the Indian subcontinent to Europe.
The term negro derives from the Latin-based (Spanish and Portuguese) word for black. Physical anthropologists also identified four smaller population groups within the Pacific Islands and Australia. The dark-skinned Melanesians have relatively thin bodies and facial features, with a curly hair shaft.
Polynesians have a lighter brown skin color, heavyset body shape, and curly hair shafts.
Micronesians have a light brown skin color, medium body shape, and curly hair shafts.
Light brown skin, medium body type, and wavy hair shafts are some of the characteristics of Australia's aboriginals.
Race has become less of an identity in the contemporary era.
Discrimination based on race was abolished in many countries during the 20th century.
The election of Barack Obama as president of the United States shows that progress has been made.
In many parts of the world, identity is based on a single race being the indigenous population.
Multiple mixed races define identities in other parts of the world.
Identity is based on mixed races in Latin America and the Caribbean.
Several thousand terms are used to describe different degrees of mixed heritage across the region.
The larger representative groups are the focus of the exam.
Mestizos have heritage from both European and Native American.
People with European and African heritage are called mulattos.
The Garifuna are a group of mixed Native American and African people.
The Garifuna live in the Caribbean islands of St. Vincent, Dominica, and Trinidad, as well as the coast of Honduras.
Creole is a term used to describe a culture that is derived from all three racial groups.
The term in Spanish originally meant someone born in the New World, regardless of heritage, and could even refer to a person with two European parents. Creole heritage and culture can be found in Greater Antilles islands such as the Dominican Republic, Puerto Rico, and Jamaica, as well as coastal Louisiana, Texas, and Mississippi.
Gumbo is an example of Creole food: it resembles bouillabaisse, a French Mediterranean soup, is thickened with filé, a Native American ingredient made of ground sassafras, and is served with rice, a crop West Africans first brought to American cooking in the 1600s.
Human geographers developed the concept of environmental determinism to explain cultural differences around the world at the same time anthropologists were establishing the physical characteristic of race.
The former scientific ideology that states that a culture's traits are defined by the physical geography of its native region is called environmental determinism.
Early modern human geography was rooted in these deterministic philosophies.
The Anthropogeographie of the German geographer Friedrich Ratzel, considered the father of modern human geography, built a large body of research claiming that all aspects of culture were defined by physical geographic factors.
The problem with environmental determinism was that science was being used to reinforce racist ideologies of the 1800s and early 1900s.
An example of this logic is that people from extremely hot tropical regions are considered lazy because they don't want to work in the midday heat.
In this view, people from cold regions had to be more physically and mentally tough to survive the cold winters.
These ideas are based on flimsy evidence and are scientifically incorrect.
Different races and cultures can survive in a variety of climates and environments.
Despite the global elimination of slavery by the late 1800s, racism and environmental determinism were accepted both socially and scientifically.
Carl Sauer was one of the human geographers who opposed the environmental determinists.
Their revised concept was called possibilism. According to this view, cultures are shaped only to a partial degree by their environment and the material resources available to them.
Culture groups can modify the environment.
In many cases, cultures made massive modifications to the landscape to meet their food and resource needs, often destroying the natural environment in the process.
The deterministic ideas first proposed by Ratzel had become ingrained in the European society and psychology despite the contribution of Sauer to the science of human geography in the 1920s.
The Nazi concept of Lebensraum, or "living space," held that the living space for each nation should be based on the optimal physical geography of the culture group. Hitler sought to expand the living space of the Germanic or "Aryan" race at the expense of other European ethnic groups who were also Caucasians.
Despite Germany's defeat during World War II, Nazi ideologies still persist among some groups in the United States and Europe.
Today's neo-Nazism is based less on race theory than on violence against immigrants and non-whites.
This is an expression of fear of outsiders.
The audience with which people communicate is what determines how they express their identity.
People who share their heritage or place of origin can use internal identity to express their cultural heritage, ethnicity, or place of origin.
People who do not share a common cultural or geographic background use external identity to express their cultural heritage, ethnicity, or place of origin.
An Egyptian in London is introduced to another person of Egyptian descent.
Local place-names, family names, and culturally specific language are included in the conversation immediately.
An hour later, the same Egyptian meets someone from Canada.
On the other side of the conversation, the Canadian may have her own misconception which can further distance the two people.
The Canadian may cause offense by referring to Egyptians as Arabs, since many Egyptians consider themselves a distinct culture group, as opposed to those who live in the Arabian Peninsula. Likewise, the Egyptian might mistakenly refer to the Canadian as an American.
It is possible that we use external identity to compensate for the lack of cultural knowledge from one group to another.
The world is covered with many cultures that create multiple layers on a global scale.
A region is an area of space that shares a common characteristic. In the case of culture regions, that homogeneity can be based on more than one component of culture.
The culture region can be represented by the cultural concept of a nation or ethnicity.
If the culture region is defined by ethnicity, a number of cultural components combine as a complex of defining factors.
The border characteristics of cultural regions are different from other types of regions.
Cultural regions have fuzzy borders.
It's hard to tell where one cultural region ends and another begins.
It is not easy to measure the transition from one cultural region to another.
The cultural regions overlap in an irregular way.
There is no sharp line where the American South ends and the Northeast or Midwest begins; the border is fuzzy.
Some people try to apply a political boundary to it, like the Mason-Dixon Line, but this is not a good definition.
The Mason-Dixon Line forms the boundary between Maryland and both Pennsylvania and Delaware, yet one part of Maryland is decidedly Southern while another part feels Northeastern.
Some have tried to quantify certain cultural symbols in order to determine the region's boundaries.
It is possible to estimate the concentration of NASCAR fans or the market areas of country music radio listeners.
You would find that the phenomena of NASCAR and country music extend far beyond the South proper.
The culture hearth is based on the idea that every culture has a specific area where it started and its main population center.
Contemporary culture hearths also exist in today's world.
The idea of ancient culture hearths, which developed ideas and technologies that still exist today, is discussed by human geographers.
The domestication of staple food crops is the most common of these technologies.
In the ancient world, staple food crops were very important, as they fed the conquering armies of empires, provided sustenance for the labor force, and were the primary commodity for commercial trade networks.
Most large ancient civilizations had a single staple food, which they either domesticated or utilized heavily.
Rome and Greece were major consumers of wheat.
Wheat had been domesticated long before in Mesopotamia, in what is now northern Iraq and southeastern Turkey.
The Minoan culture of Crete was a progenitor of the culture traditions of ancient Greece and Rome.
The concepts of democracy and the republic are some of the things that Western societies draw upon from Greek and Roman politics.
Culture hearths can also represent the core of a contemporary culture region.
The Mormon culture region of the American West is an example of a region with a distinct core.
The homogeneity of the region is reflected in the religion of the Latter-Day Saints.
The LDS film industry is an example of a distinct Mormon culture.
Notable expressions of the region's culture have been produced by LDS filmmakers.
The Salt Lake City-Ogden-Provo metropolitan area is home to the population and cultural core of the region.
Most of the 1.5 million people in the area are church members.
The main office of the church is located in Temple Square in Salt Lake City, which is also home to a large convention center.
Outside of the Wasatch Front, the region is mostly rural and agricultural.
The Mormon culture region is spread across the farms and dry ranch lands of Utah and the border region of Idaho, Wyoming, Colorado, Arizona, and Nevada, with significant populations in rural eastern Oregon and suburban Southern California.
When you leave Utah's borders, the cultural signs and symbols become less important.
Mormonism is still present in the population even in the peripheral region.
There are large active Mormon communities in Las Vegas, Idaho Falls, Denver, Phoenix, and Los Angeles.
Mormonism was the fastest-growing Christian group between 2010 and 2013, and it has churches in almost all of the U.S. counties.
Recall that there are two types of regions, formal and functional; a functional region is organized around a central place or node.
A functional region is an organized network with connections throughout the region.
The Mormon culture region could be argued to be both functional and formal.
Mormon culture can be seen through the population of followers who are concentrated in the Intermountain West.
Mormons in the region are a large and distinguishable populace, even though not everyone who lives there is a Mormon.
The Salt Lake City-based Church of Jesus Christ of Latter-Day Saints has a very well-organized and hierarchical network of neighborhood ward, local, state, and regional church administration.
In this sense, the Mormon culture region is also a functional region.
The culture hearth of Islam is found along the Red Sea coast of Saudi Arabia.
Mecca is the most holy city of Islam and was where Muhammad was born.
The second holiest city in Islam is Medina, where Muhammad received a portion of the Koran.
These spiritual centers of the faith are also centers of Islamic learning and traditional philosophy.
The Middle East is not as well-populated as other parts of the Islamic world.
The majority of the world's Muslims are not in the Middle East.
66 percent of the world's 1 billion Muslims are found in Pakistan, India, Bangladesh, Malaysia, and Indonesia.
In all of the world's populated regions, long-term cultural changes can be seen.
One way this is observed is through the concept of sequent occupance, in which different dominant cultures replace one another in a single place over time.
Think of layers of culture building up on top of each other as if they were layers of rock.
We often see remnants of previous cultural influences when we look at the cultural landscape of a place.
European architecture can be found in former colonial cities of Africa.
There is a postcolonial Nigerian landscape with modern buildings, a product of globalized architecture, and place names and street names with Nigerian references that replaced the British colonial names after independence in 1960.
The British ruled New York City at one point.
The Dutch ruled the city prior to this.
The shores of New York Harbor were rich in oysters and other seafood before that.
Large garbage dumps of mainly oyster shells and other artifacts of Native American life are uncovered in construction site excavations along the waterfront.
Layered on top are the signs and symbols of postcolonial and modern American culture.
The cultural landscape has the imprint of minority and immigrant groups.
The ethnic neighborhood is an example of how these groups leave their imprint on a much smaller area.
In New York, Little Italy or Chinatown immediately come to mind.
Puerto Rican and Dominican immigrants settled in Spanish Harlem from the 1950s onward.
European immigrants came to America in the early 20th century and adopted many new beliefs and behaviors.
As they adjusted to life in America, they kept much of their original culture.
This process of adopting a new culture while keeping some of the original culture is an example of acculturation. Acculturation is a two-way street, with both the original and incoming culture groups changing cultural characteristics. Assimilation, by contrast, is more of an "all-or-nothing" process: a complete change in the identity of a minority culture group as it becomes part of the majority culture group.
The U.S. government adopted a policy of "forced assimilation" when it came to the Native American population.
Native Americans were forced to move to reservations, and their children were taught in government-run schools. The people were told to give up their native tongues and learn English.
The dress, manners, language, and ways of the dominant American culture were insisted upon by the government.
The old ways were not allowed.
Governments usually encourage this kind of total absorption into the dominant culture when new residents are forced to learn new languages and embrace new ways of life.
Military invasions, mass migrations, and the decline of the indigenous culture have historically threatened national cultures in other parts of the world.
The original inhabitants of a place or region are referred to as indigenous.
The original culture of that region is the indigenous culture.
A major policy issue among governments is the loss of indigenous culture.
External cultural influences can threaten the indigenous culture.
Cultures are in danger of extinction if something is not done to help protect and promote the preservation of cultural heritage.
William Denevan's work on the depopulation of Native Americans in the early colonial era is one of the most important pieces of research on the destruction of indigenous culture groups.
The pre-Columbian population of North and South America was approximately 54 million people, according to research collected by Denevan and allied researchers.
By 1635, the total native population had declined to around 5 million people.
Understanding what caused the decline of the indigenous population was the next part of Denevan's research.
Evidence found in the diaries and journals of Jesuit priests, ship captains, and other individuals showed that the decline in native culture was caused by diseases of European origin. These diseases were previously unknown in the Americas.
Native Americans did not have an immune system defense against these pathogens.
Diseases such as the flu resulted in deadly epidemics with high rates of mortality among indigenous groups.
According to research, deaths from European diseases vastly outnumbered all other causes of death.
Disease epidemics had a devastating effect on the survival of many unique and advanced civilizations in the Americas.
There is a growing theory that a large civilization existed in the Amazon basin that may have been wiped out by European disease.
Unlike other major civilizations, the people of the area did not build in stone, and the rapid decay of wooden houses and buildings in the tropical environment left little evidence of a large and extensive agricultural society.
Terra preta soil formations are the focus of archaeological and geographic research to learn more about ancient Amazonian civilization. Terra preta means "black earth"; it was formed by combining charcoal, bone, and manure to increase soil fertility.
A number of indigenous cultures from around the world are under threat from a variety of forces that have the potential to wipe them out.
The concept of cultural survival describes the efforts to research, understand, and promote the protection of indigenous cultures.
In addition to protecting the identity and promoting the livelihood of indigenous peoples, indigenous cultures are seen as valuable to the social, anthropological, and geographical composition and diversity of humankind.
Indigenous cultures are important to their people and governments.
The current research of geographer McSweeney is an example; she investigates the cultural and economic livelihood of the Miskito Indians along the Caribbean coastal region of Honduras.
The Miskito live in a tropical forest region that is under threat from a number of development interests, including plantation agriculture for crops such as bananas and sugar, and land development for new towns, mining, and ranching.
McSweeney's research shows that there is continuous encroachment on the traditional territory of these indigenous people.
The Miskito will continue to suffer from the loss of their indigenous territory and culture if official protections are not instituted by the Honduran government.
Cultural globalization can harm indigenous cultures and threaten the constitution of national cultures.
A number of influences such as literature, music, motion pictures, the Internet, and satellite and cable television, mainly from English-language sources, combine to diminish and potentially eliminate the media and culture of other linguistic groups.
Many unique cultures around the world are threatened by globalizing factors such as architecture, transportation infrastructure, food retailing, clothing styles, and the missionary efforts of proselytic religions.
When people are fully immersed in popular culture, they can lose sight of the importance of their own ethnic culture.
Traditions can be lost and forgotten over time.
People who lose their heritage connection are also losing part of their personal connection to nature.
The social and psychological problems that can be caused by this can be better understood by psychologists.
Culture has value.
By protecting national cultures from the negative effects of globalization, a nation can promote its own cultural economy and products from creative arts and media.
These artistic products can be a draw for cultural tourism.
Large amounts of employment and value can be generated by whole media industries.
Bollywood is the movie industry based in Mumbai, India. The film Slumdog Millionaire generated a global economic presence of its own, with theater receipts of over $50 million by early 2009, and won the Academy Award for Best Picture.
A number of national governments around the world have instituted laws and regulations to combat the negative effects of cultural globalization.
Laws and regulations can limit the amount of foreign media and other cultural influences.
There are attempts to ban external cultural influence.
The French government has taken a number of steps to limit the amount of English-language films and television programs released or broadcast in France.
The French government provides funding to develop and promote French-language media for internal release and export.
The goal of these media exports is to push back against the English-dominated global media.
Special funding for French-Canadian media is provided by the Canadian and Quebecois governments.
Bhutan has a number of limits on the import of foreign media.
Bhutan, located in the foothills of the Himalayas, also restricts the number of entrance visas issued to foreigners.
This is an effort by the royal government to preserve the ancient Buddhist culture and protect its people from the influence of popularized global media brought in and demanded by foreign visitors.
Unfortunately, cultural conflicts are still with us today.
There are cultural conflicts that do not lead to violence or armed conflict.
In a number of cases, bloodshed has been caused by cultural differences of people in the same region.
There have been bloody armed conflicts in places such as the former Yugoslavia and the Caucasus Mountains.
Differences in language or religion are just some of the differences that can lead to war.
The post-World War I Treaty of Versailles created the former Yugoslavia.
There was no such thing as a Yugoslav prior to that time.
This part of the Balkans contained many different ethnic groups, including Serbs, Croats, Bosnian Muslims, Montenegrins, Kosovars, and Macedonians. After World War I, the victors, Britain, France, and the United States, decided to put them all together as one state.
The idea was short-lived.
There was a power vacuum after the death of the country's long time Communist leader, Josip Tito.
During World War II, he fought alongside Serbians against the Germans.
He was a representation of an artificial Yugoslav identity that had not existed before the 20th century.
After his death, people and politicians began to argue about ethnic and religious issues.
Croats are mostly Roman Catholic.
Eastern Orthodox Christians are Serbians.
Thus their forms of Christianity are very different from each other, even though both groups speak the Serbo-Croatian language.
There was fighting in northern Yugoslavia in 1989.
Croats forced Serbs out of Serbian enclaves in Croatia, and Serbs did the same to Croats in northern Serbia. Here we find the first use of the term ethnic cleansing, in which people of one ethnic group are eliminated or driven out by another, often under threat of violence or death. Although that initial conflict was quickly addressed by international diplomacy, by the early 1990s fighting and ethnic cleansing had erupted in Bosnia between ethnic Croats, Serbs, and Bosnian Muslims. Thousands of men and boys were executed in Bosnia because they were seen as potential combatants.
The war was stopped by the Dayton Peace Accords.
There are roughly 20,000 foreign troops in Bosnia and Kosovo.
Several political and military leaders have been charged with war crimes in Bosnia.
The leader of the Bosnian Serbs was arrested in late 2008 after living in disguise for several years.
He is accused of ordering the genocide of Bosnian Muslim males in Srebrenica.
A large-scale killing of people of one ethnic group has been seen in a number of ethnic conflicts.
Six million Jews were killed in the Holocaust by the Nazis in World War II.
More recent cases include the 1994 genocide in Rwanda, in which hundreds of thousands of people died. Today, the situation in the western Sudanese province of Darfur, where villagers have been killed by militia groups known as the Janjaweed, has also been referred to as genocide.
Culture is a source of conflict in Chapter 6.
The Human Mosaic is a cultural geography book.
Also see Cultural Geography in Practice by Pyrs Gruffudd.
A group of people with a common heritage share experiences, behaviors, and practices.
Language, religion, folklore, and various forms of the arts are included in the cultural landscape.
Languages fall into several major families and groups based on their prehistoric roots. Within a single language, dialects can carry vastly different accents and vocabularies.
These distributions and patterns can be seen on maps, charts, or language trees on the AP Exam.
Universalizing religions accept followers from all ethnicities and cultures, and so may spread, depending on their expanse and openness.
There are three major world religious traditions, which can be broken down further into major religious groups.
Hinduism and Buddhism are polytheistic and believe that one's soul is reincarnated into different levels of existence.
It is important that you understand the difference between race and ethnicity; ethnicity is a blend of common genetic heritage and cultural identity. The categorization of racial groups in the 1800s was later used to justify racist forms of oppression and suffering.
Unlike other types of regions, cultural regions have fuzzy borders.
The culture of a region contributes to a strong sense of place.
We often think of ancient cultures such as Mesopotamia, but contemporary cultures exist as well.
Folk and popular cultures are at odds due to cultural globalization.
Folk culture is based in the traditions of a specific region, popular culture is changeable and contemporary, and may diffuse globally.
The local folk culture can be in danger when this happens.
There are answers and explanations at the end of this chapter.
A saltbox home has a long pitched roof in front and a low-angle roof in the back.
Make sure you qualify all of the clues in the question stem, as a Cape Cod house, (A), has a pitched roof, but not the other identifying features.
Classical Greek and Roman designs can be found in both Georgian and Federalist homes, as well as an I-house with chimneys on opposite ends of an evenly pitched roof.
The minarets are narrow towers that are pointed at the top of the mosque.
The architecture of the other religions does not include minarets, so (C) is the only valid answer.
Land surveys that use natural landscape features to divide up land are a system of metes and bounds.
India has a social hierarchy system called the caste system.
Belief systems have to do with religions, which should be irrelevant to a land division question.
Blending two or more cultural influences is called cultural synthesis.
The African slave rhythms and Scots-Irish song structures that blended to form American blues are examples of bringing two styles of music together.
Mosques have minarets, while Hindu shrines have rectangular-shaped main bodies and short towers of carved stone.
The steeple, bell tower, and cross-shaped floor plan are found in all cathedrals.
Because of the constraints of anti-Semitism, Jewish synagogues follow no single design.
The answer is correct.
The Altaic language family is named after the Altai Mountains.
The experts believe that both languages are descended from the same language that existed thousands of years ago.
Some speakers assume that Korean and Japanese are descended from Chinese, but they are not.
Urdu is a descendant of Farsi, the Iranian language, and is not a member of the Niger-Congo family.
They are members of the Sino-Tibetan family.
Several campaigns of slaughter have been conducted by Buddhist monks, who are associated with non-violence.
Both traditions believe in reincarnation, continued life after death, and in nirvana, which can be defined as a higher plane of existence.
Indonesia is the largest Islamic country in the world.
Abrahamic religions are monotheistic, while Hindu-Buddhist countries are polytheistic.
A creole is a people or culture that is derived from European, African, and local indigenous groups.
Argentina was colonized by the Spanish and later settled by large numbers of Italians; its enslaved African population was either shipped to other countries or died out.
The native population was wiped out by the European colonists.
The country claims 98% European descent.
The answer is correct.
Traditional architecture draws on designs going all the way back to the Greeks and Romans and up through more recent folk styles such as Cape Cod, Federalist, and I-house designs.
Designers such as Frank Gehry reject it completely.
The term lingua franca came into use when France was considered the most powerful cultural hub of the world.
Because of its perceived high value, French was used as a common language between speakers of different languages.
English is the global lingua franca.
A place that can boast a single product grown or made there, such as a wine or a ham, can be granted an appellation of origin (AOC).
A pizza, by contrast, is made from many different ingredients in a kitchen and then baked, so it is not tied to a single place of origin. | https://knowt.io/note/a37c672f-0336-4fc2-a07b-1f6e55483f34/5-Cultural-Geography | 21
18 | How can we compare the GDP of different countries
Latin America Institute (LAI)
The concept of purchasing power parity is mainly used in two different contexts:
- on the one hand in exchange rate theory (purchasing power parity theory): the contested claim behind this concept is that the exchange rate of two currencies is determined in the long run by the purchasing power of the corresponding economies;
- on the other hand as a measurement concept, used to express economic indicators such as GDP in an internationally comparable way.
Because purchasing power can vary significantly between economies, and thus between two currency areas, simply measuring GDP at current exchange rates is not sufficient for comparison; the result would be a GDP measure distorted by exchange rate fluctuations. It is this second use that interests us here, in connection with explaining GDP.
In this context the idea of a uniform price matters less than the application of purchasing power parities: we want to be able to compare the economic indicators of different countries and currencies with one another. To relate the national incomes of different countries to each other in a meaningful way, it is not enough simply to convert total economic output into another currency; doing so would ignore the different purchasing power levels in those countries.
The purchasing power parity approach starts from the observation that a nominal comparison of two incomes in two different countries says nothing about how much one can actually buy with each income, i.e. how high the associated purchasing power is. That, in turn, depends on the price level in the respective country.
Suppose an American earns $2,500 a month and a loaf of bread costs $0.50; she can therefore afford 5,000 loaves of bread with her income. Suppose a German earns 4,000 euros. Simply exchanged at a nominal rate of about 1.45 dollars per euro, that would be 5,800 dollars, so the German appears richer than the American. If, however, a loaf of bread costs 1 euro in Germany, the German can buy only 4,000 loaves, while the American, as shown above, can afford 5,000. Measured in loaves of bread, this implies a purchasing power parity exchange rate of 0.5 dollars per euro (0.5 US dollars per loaf divided by 1 euro per loaf). At purchasing power parities, the German therefore earns the equivalent of only 2,000 dollars, a consequence of the higher price level in Germany compared with the USA: the German's income has lower purchasing power.
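The same arithmetic can be written as a short script. The following is a minimal sketch, not taken from the source; the function name and all figures are the article's illustrative assumptions, and Python is used purely for demonstration.

```python
# Minimal sketch of the loaf-of-bread PPP comparison described above.
# All numbers are the article's illustrative assumptions, not real data.

def ppp_exchange_rate(price_domestic: float, price_foreign: float) -> float:
    """PPP rate: how much foreign currency one unit of domestic currency
    buys, judged by the price of the same good in both countries."""
    return price_foreign / price_domestic

us_income, us_bread_price = 2_500.0, 0.5   # dollars per month, dollars per loaf
de_income, de_bread_price = 4_000.0, 1.0   # euros per month, euros per loaf
market_rate = 1.45                         # nominal dollars per euro

us_loaves = us_income / us_bread_price     # 5,000 loaves
de_loaves = de_income / de_bread_price     # 4,000 loaves

ppp_rate = ppp_exchange_rate(de_bread_price, us_bread_price)  # 0.5 dollars per euro

de_income_market = de_income * market_rate  # 5,800 $: looks richer than the American
de_income_ppp = de_income * ppp_rate        # 2,000 $: lower real purchasing power

print(f"US purchasing power:        {us_loaves:,.0f} loaves")
print(f"German purchasing power:    {de_loaves:,.0f} loaves")
print(f"German income, market rate: ${de_income_market:,.0f}")
print(f"German income, PPP rate:    ${de_income_ppp:,.0f}")
```

Run as is, this reproduces the 5,000-versus-4,000-loaf comparison and the 2,000-dollar PPP-adjusted income from the example.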
For comparing economic key figures, a conversion at PPP exchange rates (usually relative to the US dollar) is therefore common.
GDP per capita and GDP (PPP) per capita for different countries in 2008
Source: World Bank WDI | https://filmoku.com/?post=4251 | 21
15 | History of Northern Ireland
Northern Ireland is one of the four countries of the United Kingdom, (although it is also described by official sources as a province or a region), situated in the north-east of the island of Ireland. It was created as a separate legal entity on 3 May 1921, under the Government of Ireland Act 1920. The new autonomous Northern Ireland was formed from six of the nine counties of Ulster: four counties with unionist majorities – Antrim, Armagh, Down, and Londonderry – and two counties with slight Irish nationalist majorities – Fermanagh and Tyrone – in the 1918 General Election. The remaining three Ulster counties with larger nationalist majorities were not included. In large part unionists, at least in the north-east, supported its creation while nationalists were opposed.
Resistance to Home Rule
From the late 19th century, the majority of people living in Ireland wanted the British government to grant some form of self-rule to Ireland. The Irish Nationalist Party sometimes held the balance of power in the House of Commons in the late 19th and early 20th centuries, a position from which it sought to gain Home Rule, which would have given Ireland autonomy in internal affairs, without breaking up the United Kingdom. Two bills granting Home Rule to Ireland were passed by the House of Commons in 1886 and 1893, but rejected by the House of Lords. With the passing of the Parliament Act 1911 by the Liberal Party government (which reduced the powers of the Lords from striking down parliamentary Bills to delaying their implementation for two years) it was apparent that Home Rule would probably come into force in the next five years. The Home Rule Party had been campaigning for this for almost fifty years.
However, a significant minority was vehemently opposed to the idea and wished to retain the Union in its existing form. Irish unionists had been agitating successfully against Home Rule since the 1880s, and on 28 September 1912, the leader of the northern unionists, Edward Carson, introduced the Ulster Covenant in Belfast, pledging to exclude Ulster from home rule. The Covenant was signed by 450,000 men. Whilst precipitating a split with unionists in the south and west (including a particularly sizeable community in Dublin), it gave the northern unionists a feasible goal to aim for.
By the early 20th century, Belfast, the largest city in Ulster, had become the largest city in Ireland. Its industrial economy, with strong engineering and shipbuilding sectors, was closely integrated with that of Great Britain. Belfast was a substantially Ulster Protestant town with a Catholic minority of less than 30 per cent, concentrated in the west of the city.
A third Home Rule Bill was introduced by the Liberal minority government in 1912. However, the Conservative Party was sympathetic to the unionist case, and the political voice of unionism was strong in Parliament. After heavy amendment by the House of Lords, the Commons agreed in 1914 to allow four counties of Ulster to vote themselves out of its provisions and then only for six years. Throughout 1913 and 1914, paramilitary "volunteer armies" were recruited and armed, firstly the unionist Ulster Volunteer Force (UVF), and in response, the nationalist Irish Volunteers. But events in World War I Europe were to take precedence. Home rule was delayed for the duration of what was expected to be a short war and unionist and nationalist leaders agreed to encourage their volunteers to join the British army.
1916: Easter Rising, Battle of the Somme and aftermath
During World War I, tensions continued to mount in Ireland. Hardline Irish separatists (known at the time as Irish Nationalists and later as Republicans) rejected Home Rule entirely because it involved maintaining the connection with Britain. They retained control of one faction of the Irish Volunteers, and in Easter 1916, led by Thomas Clarke, James Connolly and others attempted a rebellion in Dublin. After summary trials, the British government had the leaders executed for treason. The government blamed the small Sinn Féin party, which had little to do with it. The execution of the leaders of the rebellion turned out to be a propaganda coup for militant republicanism, and Sinn Féin's previously negligible popular support grew. The surviving leaders of the Irish Volunteers infiltrated the party and assumed its leadership in 1917. (The Irish Volunteers would later become the Irish Republican Army (IRA) in 1919.) Republicans gained further support when the British government attempted to introduce conscription to Ireland in 1918. Sinn Féin was at the forefront of organising the campaign against conscription.
The 36th (Ulster) Division was one of the first units in the British Army to be sent into the Somme beginning in July 1916. Despite being one of the few divisions to achieve their objectives, the Ulstermen suffered nearly 85% casualties. Though the 36th Division was made up of both Catholics and Protestants from the north, one result of the heavy losses at the Somme was that the Unionist community became ever more determined to remain in the United Kingdom, believing themselves to have sacrificed their sons at the behest of the Crown. Nationalists joined in great numbers as well, with "old" Irish regiments from Munster and Leinster being greatly strengthened by these recruits. When the veterans of World War I, on both sides of the political divide, returned from the front in 1918 and 1919, they came back as battle-hardened soldiers. In the general election of 1918, the Irish Parliamentary Party lost almost all of its seats to Sinn Féin. Of the 30 seats in the six counties that would become Northern Ireland, 23 were won by Unionists (including 3 Labour Unionists), and five of the six IPP members returned in Ireland were elected in Ulster as a result of local voting pacts with Sinn Féin.
Guerrilla warfare gathered pace in Ireland in the aftermath of the election, leading to the Anglo-Irish War. Although lower in intensity in Ulster than the rest of Ireland, the conflict was complicated there by involving not only the IRA, British Army and Royal Irish Constabulary, but the Ulster Volunteer Force (UVF) as well.
The fourth and final Home Rule Bill (the Government of Ireland Act 1920) partitioned the island into Northern Ireland (six northeastern counties) and Southern Ireland (the rest of the island). Some unionists such as Sir Edward Carson opposed partition, seeing it as a betrayal of unionism as a pan-Irish political movement. Three Counties unionists (those living in the Ulster counties of Cavan, Donegal, and Monaghan) who found themselves on the wrong side of the new border that partitioned Ulster, felt betrayed by those who had joined them in pledging to "stand by one another" in the Ulster Covenant. The Belfast Telegraph reassured unionists who felt guilty about this "that it was better for two-thirds of passengers to save themselves than for all to drown". Many Irish nationalists also opposed partition, although some were gratified that Northern Ireland contained a large nationalist minority that would deny it stability.
The Anglo-Irish Treaty was given effect in the United Kingdom through the Irish Free State Constitution Act 1922. Under Article 12 of the Treaty, Northern Ireland could exercise its opt-out by presenting an address to the King requesting not to be part of the Irish Free State. Once the Treaty was ratified, the Parliament of Northern Ireland had one month to exercise this opt-out, during which month the Irish Free State Government could not legislate for Northern Ireland, holding the Free State's effective jurisdiction in abeyance for a month.
On 7 December 1922 (the day after the establishment of the Irish Free State) the Parliament of Northern Ireland resolved to make the following address to the King so as to opt out of the Irish Free State:
MOST GRACIOUS SOVEREIGN, We, your Majesty's most dutiful and loyal subjects, the Senators and Commons of Northern Ireland in Parliament assembled, having learnt of the passing of the Irish Free State Constitution Act, 1922, being the Act of Parliament for the ratification of the Articles of Agreement for a Treaty between Great Britain and Ireland, do, by this humble Address, pray your Majesty that the powers of the Parliament and Government of the Irish Free State shall no longer extend to Northern Ireland.
On 13 December 1922 Prime Minister James Craig addressed the Parliament of Northern Ireland informing them that the King had responded to the Parliament's address as follows:
I have received the Address presented to me by both Houses of the Parliament of Northern Ireland in pursuance of Article 12 of the Articles of Agreement set forth in the Schedule to the Irish Free State (Agreement) Act, 1922, and of Section 5 of the Irish Free State Constitution Act, 1922, and I have caused my Ministers and the Irish Free State Government to be so informed.
Early years of Home Rule
Northern Ireland, having received self-government within the United Kingdom under the Government of Ireland Act, was in some respects left to its own devices.
The first years of the new autonomous region were marked by bitter violence, particularly in Belfast. The IRA was determined to oppose the partition of Ireland so the authorities created the (mainly ex-UVF) Ulster Special Constabulary to aid the Royal Irish Constabulary (RIC) and introduced emergency powers to combat the IRA. Many died in political violence between 1920 and 1923, during which Belfast experienced the worst violence in its history. Killings petered out in 1923 after the signing of the Anglo-Irish Treaty in 1922.
In total, 636 people were killed between July 1920 and July 1922 in Northern Ireland. Approximately 460 of these deaths occurred in Belfast (258 Catholics, 159 Protestants, and 3 of unknown religion). However, as Catholics made up less than one-quarter of the population of the city, the per capita death rates were much higher.
The continuing violence created a climate of fear in the new region, and there was migration across the new border. As well as movement of Protestants from the Free State into Northern Ireland, some Catholics fled south, leaving some of those who remained feeling isolated. Despite the mixed religious affiliation of the old Royal Irish Constabulary and the transfer of many Catholic RIC police officers to the newly formed Royal Ulster Constabulary (1922), northern Catholics did not join the new force in great numbers. Many nationalists came to view the new police force as sectarian, adding to their sense of alienation from the state.
Under successive unionist Prime Ministers from Sir James Craig (later Lord Craigavon) onwards, the unionist establishment practised what is generally considered a policy of discrimination against the nationalist/Catholic minority.
This pattern was firmly established in the case of local government, where gerrymandered ward boundaries rigged local government elections to ensure unionist control of some local councils with nationalist majorities. In a number of cases, most prominently those of the Corporation of Derry, Omagh Urban District, and Fermanagh County Council, ward boundaries were drawn to place as many Catholics as possible into wards with overwhelming nationalist majorities while other wards were created where unionists had small but secure majorities, maximising unionist representation.
Voting arrangements which gave commercial companies multiple votes according to size, and which restricted the personal franchise to property owners, primary tenants and their spouses (which were ended in England in the 1940s), continued in Northern Ireland until 1969 and became increasingly resented. Disputes over local government gerrymandering were at the heart of the Northern Ireland civil rights movement in the 1960s.
In addition, there was widespread discrimination in employment, particularly at senior levels of the public sector and in certain sectors of the economy, such as shipbuilding and heavy engineering. Emigration to seek employment was significantly more prevalent among the Catholic population. As a result, Northern Ireland's demography shifted further in favour of Protestants, leaving their ascendancy seemingly impregnable by the late 1950s.
The abolition of proportional representation in 1929 meant that the structure of party politics gave the Ulster Unionist Party a continual sizeable majority in the Parliament of Northern Ireland, leading to fifty years of one-party rule. While nationalist parties continued to retain the same number of seats that they had under proportional representation, the Northern Ireland Labour Party and various smaller leftist unionist groups were smothered, meaning that it proved impossible for any group to sustain a challenge to the Ulster Unionist Party from within the unionist section of the population.
In 1935, the worst violence since partition convulsed Belfast. An Orange Order parade decided to return to the city centre through a Catholic area instead of its usual route, and the resulting violence left nine people dead. Over 2,000 Catholics were forced to leave their homes across Northern Ireland.
While disputed for decades, many unionist leaders now admit that the Northern Ireland government in the period 1922–72 was discriminatory, although prominent Democratic Unionist Party figures continue to deny it or its extent. One unionist leader, Nobel Peace Prize joint-winner, former UUP leader and First Minister of Northern Ireland David Trimble, described Northern Ireland as having been a "cold house for Catholics".
Despite this, Northern Ireland was relatively peaceful for most of the period from 1924 until the late 1960s, except for some brief flurries of IRA activity, the (Luftwaffe) Belfast blitz during the Second World War in 1941 and the so-called "Border Campaign" from 1956 to 1962. It found little support among nationalists. However, many Catholics were resentful towards the state, and nationalist politics was fatalist. Meanwhile, the period saw an almost complete synthesis between the Ulster Unionist Party and the loyalist Orange Order, with Catholics (even unionist Catholics) being excluded from any position of political or civil authority outside of a handful of nationalist-controlled councils.
Throughout this time, although the Catholic birth rate remained higher than for Protestants, the Catholic proportion of the population declined, as poor economic prospects, especially west of the River Bann, saw Catholics emigrate in disproportionate numbers.
Nationalist political institutions declined, with the Nationalist Party boycotting the Stormont Parliament for much of this period and its constituency organisations reducing to little more than shells. Sinn Féin was banned although it often operated through the Republican Clubs or similar vehicles. At various times the party stood and won elections on an abstentionist platform.
Labour-based politics were weak in Northern Ireland in comparison with Britain. A small Northern Ireland Labour Party existed but suffered many splits to both nationalist and unionist factions.
Second World War
Belfast was a representative British city that has been well studied by historians. It was a key industrial city producing ships, tanks, aircraft, engineering works, arms, uniforms, parachutes and a host of other industrial goods. The unemployment that had been so persistent in the 1930s disappeared, and labour shortages appeared. There was a major munitions strike in 1944. As a key industrial city, Belfast became a target for German bombing missions, but it was thinly defended; there were only 24 anti-aircraft guns in the city. The Northern Ireland government under Richard Dawson Bates (Minister for Home Affairs) had prepared too late, assuming that Belfast was far enough away to be safe. When Germany conquered France in Spring 1940 it gained closer airfields. The city's fire brigade was inadequate, there were no public air raid shelters as the Northern Ireland government was reluctant to spend money on them, and there were no searchlights in the city, which made shooting down enemy bombers all the more difficult. After the Blitz in London during the autumn of 1940, the government began to build air raid shelters. In early 1941, the Luftwaffe flew reconnaissance missions that identified the docks and industrial areas to be targeted. Working class areas in the north and east of the city were particularly hard hit, as over 1,000 people were killed and hundreds were seriously injured. Many people left the city in fear of future attacks. The bombing revealed terrible slum conditions in the city. In May 1941, the Luftwaffe hit the docks and the Harland and Wolff shipyard, closing it for six months. The Belfast blitz saw half of the city's houses destroyed. About £20 million worth of damage was caused. The Northern Ireland government was criticised heavily for its lack of preparation, and Northern Ireland's Prime Minister J. M. Andrews resigned. The bombing raids continued until the invasion of Russia in summer 1941. The American army arrived in 1942–44, setting up bases around Northern Ireland.
The Troubles were a period of ethno-political conflict in Northern Ireland which spilled over at various times into England, the Republic of Ireland, and mainland Europe. The duration of the Troubles is conventionally dated from the late 1960s and considered by many to have ended with the Belfast "Good Friday" Agreement of 1998. Violence nonetheless continues on a sporadic basis.
In the 1960s, moderate unionist prime minister Terence O'Neill (later Lord O'Neill of the Maine) tried to introduce reforms, but encountered strong opposition from both fundamentalist Protestant leaders like Ian Paisley and within his own party. The increasing pressures from Irish nationalists for reform and opposition by Ulster loyalists to compromise led to the appearance of the Northern Ireland Civil Rights Association, under figures such as Austin Currie and John Hume. It had some moderate Protestant support and membership, and a considerable dose of student radicalism after Northern Ireland was swept up in the worldwide protests of 1968. Clashes between marchers and the RUC led to increased communal strife, culminating in an attack by a unionist mob (which included police reservists) on a march, known as the Burntollet bridge incident, outside Derry on 4 January 1969. Wholescale violence erupted after an Apprentice Boys march was forced through the Irish nationalist Bogside area of Derry on 12 August 1969 by the RUC, which led to large-scale disorder known as the Battle of the Bogside. Rioting continued until 14 August, and in that time 1,091 canisters, each containing 12.5g of CS gas and 14 canisters containing 50g, were released by the RUC. Even more severe rioting broke out in Belfast and elsewhere in response to events in Derry (see Northern Ireland riots of August 1969). The following thirty years of civil strife came to be known as "the Troubles".
At the request of the unionist-controlled Northern Ireland government, the British army was deployed by the UK Home Secretary James Callaghan two days later on 14 August 1969. Two weeks later, control of security in Northern Ireland was passed from the Stormont government to Lieutenant-General Ian Freeland (GOC). At first the soldiers received a warm welcome from Irish nationalists, who hoped they would protect them from loyalist attack (which the IRA had, for ideological reasons, not done effectively). However, tensions rose throughout the following years, with an important milestone in the worsening relationship between the British Army and Irish nationalists being the Falls Curfew of 3 July 1970, when 3,000 British troops imposed a three-day curfew on the Lower Falls area of West Belfast.
After the introduction of internment without trial for suspected IRA men on 9 August 1971, even the most moderate Irish nationalists reacted by completely withdrawing their co-operation with the state. The Social Democratic and Labour Party (SDLP) members of the Parliament of Northern Ireland withdrew from that body on 15 August and a widespread campaign of civil disobedience began.
Tensions were ratcheted to a higher level after the killing of fourteen unarmed civilians in Derry by the 1st Battalion, Parachute Regiment on 30 January 1972, an event dubbed Bloody Sunday. Many civilians were killed and injured by the indiscriminate bombing campaigns carried out mainly by the Provisional IRA. Throughout this period, the main paramilitary organisations began to form, and 1972 was the most violent year of the conflict. In 1970 the Provisional IRA was created as a breakaway from what then became known as the Official IRA (the Provisionals came from various political perspectives, though most rejected the increasingly Marxist outlook of the Officials and were united in their rejection of the Officials' view that physical force alone would not end partition), and a campaign of sectarian attacks by loyalist paramilitary groups like the Ulster Defence Association (formed to co-ordinate the various Loyalist vigilante groups that sprang up) and others brought Northern Ireland to the brink of civil war. On 30 March 1972, the British government, unwilling to grant the unionist Northern Ireland government more authoritarian special powers, and now convinced of its inability to restore order, pushed through emergency legislation that prorogued the Northern Ireland Parliament and introduced direct rule from London. In 1973 the British government dissolved the Parliament of Northern Ireland and its government under the Northern Ireland Constitution Act 1973.
The British government held talks with various parties, including the Provisional IRA, during 1972 and 1973. The Official IRA declared a ceasefire in 1972, and eventually ended violence against the British altogether, although a breakaway group, the Irish National Liberation Army, continued. The Provisional IRA remained the largest and most effective nationalist paramilitary group.
On 9 December 1973, after talks in Sunningdale, Berkshire, the UUP, SDLP and Alliance Party of Northern Ireland and both governments reached the Sunningdale Agreement on a cross-community government for Northern Ireland, which took office on 1 January 1974. The Provisional IRA was unimpressed, increasing the tempo of its campaign, while many unionists were outraged at the participation of Irish nationalists in the government of Northern Ireland and at the cross-border Council of Ireland. Although the pro-Sunningdale parties had a clear majority in the new Northern Ireland Assembly, the failure of the pro-Agreement parties to co-ordinate their efforts in the general election of 28 February 1974, combined with an IRA-sponsored boycott by hardline republicans, allowed anti-Sunningdale unionists to take 51.1% of the vote and 11 of Northern Ireland's 12 seats in the UK House of Commons.
Emboldened by this, a coalition of anti-Agreement unionist politicians and paramilitaries organised the Ulster Workers' Council strike which began on 15 May. The strikers brought Northern Ireland to a standstill by shutting down power stations, and after Prime Minister Harold Wilson refused to send in troops to take over from the strikers, the power-sharing executive collapsed on 28 May 1974.
Some British politicians, notably former British Labour minister Tony Benn, advocated British withdrawal from Ireland, but many opposed this policy, and called their prediction of the possible results of British withdrawal the 'Doomsday Scenario', anticipating widespread communal strife. The worst fear envisaged a civil war which would engulf not just Northern Ireland, but also the Republic of Ireland and Scotland, both of which had major links with the people of Northern Ireland. Later, the feared possible impact of British withdrawal was the 'Balkanisation' of Northern Ireland.
The level of violence declined from 1972 onwards, decreasing to under 150 deaths a year after 1976 and under 100 after 1988. The Provisional IRA, using weapons and explosives obtained from the United States and Libya, bombed England and various British army bases in Europe, as well as conducting ongoing attacks within Northern Ireland. These attacks were not only on "military" targets but also on commercial properties and various city centres. Arguably its signature attack would involve cars packed with high explosives. At the same time, loyalist paramilitaries largely (but not exclusively) focused their campaign within Northern Ireland, ignoring the uninvolved military of the Republic of Ireland, and instead claiming a (very) few republican paramilitary casualties. They usually targeted Catholics (especially those working in Protestant areas), and attacked Catholic-frequented pubs using automatic fire weapons. Such attacks were euphemistically known as "spray jobs". Both groups would also carry out extensive "punishment" attacks against members of their own communities for a variety of perceived, alleged, or suspected crimes.
Various fitful political talks took place from then until the early 1990s, backed by schemes such as rolling devolution, and 1975 saw a brief Provisional IRA ceasefire. The two events of real significance during this period, however, were the hunger strikes (1981) and the Anglo-Irish Agreement (1985).
Despite the failure of the hunger strike, the modern republican movement made its first foray into electoral politics, with modest electoral success on both sides of the border, including the election of Bobby Sands to the House of Commons. This convinced republicans to adopt the Armalite and ballot box strategy and gradually take a more political approach.
While the Anglo-Irish Agreement failed to bring an end to political violence in Northern Ireland, it did improve co-operation between the British and Irish governments, which was key to the creation of the Belfast Agreement/Good Friday Agreement a decade later.
At a strategic level the agreement demonstrated that the British recognised as legitimate the wishes of the Republic to have a direct interest in the affairs of Northern Ireland. It also demonstrated to paramilitaries that their refusal to negotiate with the governments might be self-defeating in the long run. Unlike the Sunningdale Agreement, the Anglo-Irish Agreement withstood a much more concerted campaign of violence and intimidation, as well as political hostility, from unionists. However, unionists from across the spectrum felt betrayed by the British government and relations between unionists and the British government were at their worst point since the Ulster Covenant in 1912, with similar mass rallies in Belfast. Unionist co-operation needed in tackling Republican violence became so damaged that in 1998 Margaret Thatcher said she regretted signing the Agreement for this reason. Republicans were also left in the position of rejecting the only significant all-Ireland structures created since partition.
By the 1990s, the perceived stalemate between the IRA and British security forces, along with the increasing political successes of Sinn Féin, convinced a majority inside the republican movement that greater progress towards republican objectives might be achieved through negotiation rather than violence at this stage. This change from paramilitary to political means was part of a broader Northern Ireland peace process, which followed the appearance of new leaders in London (John Major) and Dublin (Albert Reynolds).
New government structure
The Belfast Agreement / Good Friday Agreement
Increased government focus on the problems of Northern Ireland led, in 1993, to the two prime ministers signing the Downing Street Declaration. At the same time Gerry Adams, leader of Sinn Féin, and John Hume, leader of the Social Democratic and Labour Party, engaged in talks. The UK political landscape changed dramatically when the 1997 general election saw the return of a Labour government, led by prime minister Tony Blair, with a large parliamentary majority. A new leader of the Ulster Unionist Party, David Trimble, initially perceived as a hardliner, brought his party into the all-party negotiations which in 1998 produced the Belfast Agreement ("Good Friday Agreement"), signed by eight parties on 10 April 1998, although not involving Ian Paisley's Democratic Unionist Party or the UK Unionist Party. A majority of both communities in Northern Ireland approved this Agreement, as did the people of the Republic of Ireland, both by referendum on 22 May 1998. The Republic amended its constitution, to replace a claim it made to the territory of Northern Ireland with an affirmation of the right of all the people of Ireland to be part of the Irish nation and a declaration of an aspiration towards a United Ireland (see the Nineteenth Amendment of the Constitution of Ireland).
Under the Good Friday Agreement, properly known as the Belfast Agreement, voters elected a new Northern Ireland Assembly to form a parliament. Every party that reaches a specific level of support gains the right to name members of its party to government and claim one or more ministries. Ulster Unionist party leader David Trimble became First Minister of Northern Ireland. The Deputy Leader of the SDLP, Seamus Mallon, became Deputy First Minister of Northern Ireland, though his party's new leader, Mark Durkan, subsequently replaced him. The Ulster Unionists, Social Democratic and Labour Party, Sinn Féin and the Democratic Unionist Party each had ministers by right in the power-sharing assembly.
The Assembly and its Executive operated on a stop-start basis, with repeated disagreements about whether the IRA was fulfilling its commitments to disarm, and also allegations from the Police Service of Northern Ireland's Special Branch that there was an IRA spy-ring operating in the heart of the civil service. It has since emerged that the spy-ring was run by MI5 (see Denis Donaldson). Northern Ireland was then, once more, run by the Direct Rule Secretary of State for Northern Ireland, Peter Hain, and a British ministerial team answerable to him. Hain was answerable only to the Cabinet.
The changing British position to Northern Ireland was represented by the visit of Queen Elizabeth II to Stormont, where she met nationalist ministers from the SDLP as well as unionist ministers and spoke of the right of people who perceive themselves as Irish to be treated as equal citizens along with those who regard themselves as British. Similarly, on visits to Northern Ireland, the President of Ireland, Mary McAleese, met with unionist ministers and with the Lord Lieutenant of each county – the official representatives of the Queen.
Dissident Republicans in the Provisional IRA who refused to recognize the Good Friday Agreement split from the main body and formed a separate entity known as the Real IRA. It was this paramilitary group that was responsible for the Omagh bombing of August 1998, which claimed the lives of 29 people, including a mother and her unborn twins. In a break from traditional Republican policy, Martin McGuinness officially condemned the actions of the Real IRA, setting a precedent that left dissident groups isolated and with minuscule support within the Republican movement.
Elections and politics in the 2000s
The Assembly elections of 30 November 2003 saw Sinn Féin and the Democratic Unionist Party (DUP) emerge as the largest parties in each community, which was perceived as making a restoration of the devolved institutions more difficult to achieve. Nevertheless, serious talks between the political parties and the British and Irish governments saw steady, if stuttering, progress throughout 2004, with the DUP in particular surprising many observers with its newly discovered pragmatism. However, an arms-for-government deal between Sinn Féin and the DUP broke down in December 2004 over whether photographic evidence of IRA decommissioning was necessary, and over the IRA's refusal to countenance providing such evidence.
The 2005 British general election saw further polarisation, with the DUP making sweeping gains, although Sinn Féin did not make the breakthrough many had predicted. In particular, the failure of Sinn Féin to gain the SDLP leader Mark Durkan's Foyle seat marked a significant rebuff for the republican party. The UUP took only one seat, with David Trimble losing his and subsequently resigning as leader.
On 28 July 2005, the IRA made a public statement ordering an end to the armed campaign and instructing its members to dump arms and to pursue purely political programmes. While the British and Irish governments warmly welcomed the statement, political reaction in Northern Ireland itself demonstrated a tendency to suspicion engendered by years of political and social conflict. In August the British government announced that due to the security situation improving and in accordance with the Good Friday Agreement provisions, Operation Banner would end by 1 August 2007.
On 13 October 2006 an agreement was proposed after three days of multiparty talks at St. Andrews in Scotland, which all parties including the DUP, supported. Under the agreement, Sinn Féin would fully endorse the police in Northern Ireland, and the DUP would share power with Sinn Féin. All the main parties in Northern Ireland, including the DUP and Sinn Féin, subsequently formally endorsed the agreement.
On 8 May 2007, devolution of powers returned to Northern Ireland. DUP leader Ian Paisley and Sinn Féin's Martin McGuinness took office as First Minister and Deputy First Minister, respectively. "You Raise Me Up", the 2005 track by Westlife, was played at their inauguration.
On 5 June 2008, Peter Robinson was confirmed as First Minister, succeeding Ian Paisley. In November 2015 he announced his intention to resign, stepping down officially in January 2016. His successor as the leader of the Democratic Unionist Party (DUP), Arlene Foster, became the new First Minister on 11 January 2016. She was the first woman to hold the post of First Minister. In April 2021, Arlene Foster announced that she would resign as DUP leader on 28 May and end her tenure as First Minister at the end of June 2021.
Impact of 2017 elections on the Northern Ireland Executive
On 9 January 2017, following the Renewable Heat Incentive scandal, Martin McGuinness resigned as deputy First Minister, triggering the 2017 Northern Ireland Assembly election and the collapse of the Northern Ireland Executive. The Executive then remained suspended, with no new administration formed, for the next three years.
The election marked a significant shift in Northern Ireland's politics, being the first election since Ireland's partition in 1921 in which unionist parties did not win a majority of seats, and the first time that unionist and nationalist parties received equal representation in the Assembly (39 members between Sinn Féin and the SDLP, 39 members between the DUP, UUP, and TUV). The DUP's loss of seats also prevents it from unilaterally using the petition of concern mechanism, which the party had controversially used to block measures such as the introduction of same-sex marriage to Northern Ireland.
Sinn Féin reiterated that it would not return to a power-sharing arrangement with the DUP without significant changes in the DUP's approach, including Foster not becoming First Minister until the RHI investigation was complete. The parties had three weeks to form an administration; failing that, new elections would likely be called.
While unionism has lost its overall majority in the Assembly, the result has been characterised by political analyst Matthew Whiting as being more about voters seeking competent local leadership, and about the DUP having less success than Sinn Féin in motivating its traditional voter base to turn out, than about a significant move towards a united Ireland.
Secretary of State for Northern Ireland James Brokenshire gave the political parties more time to reach a coalition agreement after the 27 March deadline passed. Sinn Féin called for fresh elections if agreement could not be reached. Negotiations were paused over Easter, but Brokenshire threatened a new election or direct rule if no agreement could be reached by early May. On 18 April, the Conservative Party Prime Minister, Theresa May, then called a snap general election for 8 June 2017. A new deadline of 29 June was then set for power-sharing talks.
The UK General Election saw both the DUP and Sinn Féin advance, with the UUP and SDLP losing all their MPs. The overall result saw the Conservatives losing seats, resulting in a hung parliament. May sought to continue as Prime Minister running a minority administration through seeking the support of the DUP. Various commentators suggested this raised problems for the UK government's role as a neutral arbiter in Northern Ireland, as is required under the Good Friday Agreement. Talks restarted on 12 June 2017, while a Conservative–DUP agreement was announced and published on 26 June.
A new deadline was set for 29 June, but it appeared that no agreement would be reached in time, with the main sticking point being Sinn Féin's desire for an Irish language act, which the DUP rejected, while Sinn Féin rejected a hybrid act that would also cover Ulster Scots. The deadline passed with no resolution. Brokenshire extended the time for talks, but Sinn Féin and the DUP remained pessimistic about any quick resolution.
Negotiations resumed in the autumn of 2017 but failed, leaving it in the hands of the UK Parliament to pass a budget for the ongoing financial year of 2017–18. The bill, which began its passage on 13 November, would if enacted release the final 5% of Northern Ireland's block grant.
Negotiations resume, 2018
Talks between the DUP and Sinn Féin recommenced on 6 February 2018, only days before the mid-February deadline when, in the absence of an agreement, a regional budget would have to be imposed by Westminster. Despite being attended by Theresa May and Leo Varadkar, the talks collapsed, and DUP negotiator Simon Hamilton stated "significant and serious gaps remain between ourselves and Sinn Féin". The stalemate continued into September, at which point Northern Ireland reached 590 days without a fully functioning administration, eclipsing the record set in Belgium between April 2010 and December 2011.
On 18 October the Northern Ireland Secretary, Karen Bradley, introduced the Northern Ireland (Executive Formation and Exercise of Functions) Bill, which removed the requirement to call an Assembly election until 26 March 2019 (a deadline the Northern Ireland Secretary could extend once), allowed the Northern Ireland Executive to be formed at any time during that period, enabled civil servants to take certain departmental decisions in the public interest, and allowed Ministers of the Crown to make several Northern Ireland appointments. The Bill passed its third reading in the House of Commons and the House of Lords on 24 and 30 October respectively, and it received Royal Assent and came into effect as the Northern Ireland (Executive Formation and Exercise of Functions) Act 2018 on 1 November.
During questions to the Northern Ireland Secretary on 31 October, Karen Bradley announced that she would meet the main parties in Belfast the following day about implementing the Bill (not yet an Act at that point) and the next steps towards restoring devolution, and that she would fly to Dublin alongside Theresa May's de facto deputy, David Lidington, to hold an intergovernmental conference with the Irish Government. No deal was reached at that time.
Other current developments
Same-sex marriage was legalised and abortion law liberalised in Northern Ireland on 22 October 2019. The change received royal assent on 24 July 2019 by way of an amendment to the Northern Ireland (Executive Formation etc) Act 2019, whose primary purpose was to provide sustainable governance in Northern Ireland in the absence of an executive. The British government stated that the changes would come into effect only if the executive was not functioning by the 22 October deadline. Attempts to restart the assembly were made, predominantly by unionist parties, on 21 October, but Sinn Féin and Alliance refused to enter the Assembly. The legislation came into effect, and progress towards restarting Stormont stagnated for several months until a fresh election became likely.
The Northern Ireland Assembly and Executive, which had collapsed three years earlier, resumed on 11 January 2020 after an agreement titled 'New Decade, New Approach' was signed between the DUP and Sinn Féin, and the British and Irish governments, and subsequently by most other parties.
- Flag of Northern Ireland
- History of Ireland
- History of the United Kingdom
- Murals in Northern Ireland
- History of the British Isles
- Politics of Northern Ireland
Notes and references
- Adamson, Ian. The Identity of Ulster, 2nd edition (Belfast, 1987)
- Bardon, Jonathan. A History of Ulster (Belfast, 1992.)
- Bew, Paul, Peter Gibbon and Henry Patterson, Northern Ireland 1921-1994: Political Forces and Social Classes (1995)
- Bew, Paul, and Henry Patterson. The British State and the Ulster Crisis: From Wilson to Thatcher (London: Verso, 1985).
- Brady, Claran, Mary O'Dowd and Brian Walker, eds. Ulster: An Illustrated History (1989)
- Buckland, Patrick. A History of Northern Ireland (Dublin, 1981)
- Elliott, Marianne. The Catholics of Ulster: A History. Basic Books. 2001. online edition
- Farrell, Michael. Northern Ireland: The Orange State, 2nd edition (London, 1980)
- Henessy, Thomas. A History of Northern Ireland, 1920-1996. St. Martin's, 1998. 365 pp.
- Kennedy, Líam; Ollerenshaw, Philip, eds. (1985). An Economic History of Ulster 1820 – 1940. Manchester UP. ISBN 0-7190-1827-7.
- Kennedy, Liam and Philip Ollerenshaw, eds, Ulster Since 1600: Politics, Economy, and Society (2013) excerpts
- McAuley, James White. Very British Rebels?: The Culture and Politics of Ulster Loyalism (Bloomsbury Publishing USA, 2015).
- Miller, David, ed. Rethinking Northern Ireland: culture, ideology and colonialism (Routledge, 2014)
- Ollerenshaw, Philip. "War, industrial mobilisation and society in Northern Ireland, 1939–1945." Contemporary European History 16#2 (2007): 169–197. | https://worddisk.com/wiki/History_of_Northern_Ireland/ | 21 |
18 | The British Empire kept its distance from European countries
Throughout history, various countries have taken the world by storm, but it was from the Age of Exploration onward that European countries attained supremacy. In the 16th century, Spain and Portugal moved into South America. In the 17th century, the Netherlands moved into Southeast Asia, based around the Dutch East India Company. England got a later start, but it defeated the Spanish Armada in 1588, and after the union with Scotland in 1707 Great Britain expanded into India while repeatedly fighting the Netherlands. In 1842, the UK won the First Opium War and gained Hong Kong from China. In addition, the industrial revolution took place in the country from the middle of the 18th century through the 19th century, enabling the UK to attain supremacy not only in naval power but also in the economy, with such force that it came to be known as the "factory of the world." At that time, it was the British Empire that reigned over the world.
Meanwhile, on the European Continent, the French Revolution took place at the end of the 18th century, and Napoleon swept through Europe. Germany was finally unified in 1871 and later dragged Europe into the hostilities of the First World War (1914-1918). In this way, conflict and chaos continued. At the Paris Peace Conference after the war, the UK proposed avoiding excessive humiliation of Germany and demanding only a payable amount of reparations. This proposal won support from the United States (US), while it outraged France, which had suffered great damage at Germany's hands; France insisted on a huge amount of reparations in order to contain Germany completely. The total reparations ended up at 132 billion gold marks, a sum roughly 2.5 times Germany's gross national income in 1913. In 1923, Germany defaulted on its payments; the UK showed willingness to accept an extension, while France stood firm. The value of the German mark therefore kept falling, and inflation worsened across the country; by 1924, old marks were being exchanged for new ones at a rate of as much as one trillion to one. The despair, rage and hostility of the German people fed the rise of Nazism, which led to the Second World War and Germany's attempt to bring Europe into submission by force.
Over the history of the European Continent, neighbouring countries have continually fought one another. The UK, however, which lies off the Continent and had attained supremacy over the world, was in a different situation from those countries. Seen from Japan, the UK looks as firmly a member of Europe as France, Germany and Italy, but it can be said that the UK has kept its distance from other European countries. Winston Churchill, a former UK Prime Minister, said soon after the war that Europe had no choice but to unite. That remark reflects Europe's history and circumstances while also capturing the UK's habit of viewing Europe from a certain distance. This is the background to the UK's vote to leave the EU in 2016.
The UK joined the EU for economic reasons
After the Second World War, momentum toward actually integrating Europe increased. First, in 1952, the six countries of France, West Germany, Italy, Belgium, the Netherlands and Luxembourg set up the European Coal and Steel Community (ECSC). This was an organization for the joint management of coal and steel, because over the course of history battles over resources had often developed into war. The ECSC was the first such European organization and can be regarded as the threshold of European integration. In 1958, the European Economic Community (EEC) was set up. In 1967, the ECSC, the EEC and the European Atomic Energy Community (EURATOM) were merged to create the European Community (EC), the predecessor of the European Union (EU). At that time, the UK did not join the EC.
By looking at European history, you can tell that European integration has an economic aspect as well as a political and security aspect. France, which promoted the integration, had a powerful ulterior motive: to contain its neighbor Germany and prevent it from ever turning violent again. The UK's motive was not as strong as France's. Rather than Europe, the UK emphasized its connections with the US, which had taken over from the UK as the world's dominant power in the 20th century, and with the countries of the British Commonwealth, including former colonies. From the 1950s, however, Labour governments pressed ahead with the nationalization of industry, the international competitiveness of UK industry declined, and the economy slowed down. This was the era of the so-called "British disease." Counting on access to European markets, the UK therefore joined the EC in 1973. It can be said that this choice was successful: coupled with the "Thatcher Revolution" of the 1980s, the UK's economy headed for revitalization. In other words, the UK joined the EC purely for economic reasons and had no motive to create a confederation of states, as France did. In fact, the UK, with its memories of a British Empire that had attained supremacy over the world, could never hand over its national sovereignty. This attitude is most visible in the fact that when the EU was established in 1993 on the basis of the EC, the UK stayed in but did not adopt the euro. The right to issue currency is an important sovereign authority, and the UK will not give it up.
Japan should watch the UK's diplomatic negotiations calmly
The collapse of the Soviet Union in 1991 had a great impact on the EU. In 2004, former communist countries were quick to join, and the EU was enlarged. Taking advantage of this, Germany, suffering from a labor shortage, built factories in low-labor-cost East European countries to raise production efficiency. However, problems also arose. The EU's fundamental principle is the "free movement of persons, goods, services and capital." In accordance with this principle, people from former communist countries began moving to other member states in search of a job or a richer life. In the UK alone there are an estimated 800,000-900,000 immigrants from Poland, which has swelled the country's social burden. EU membership thus ended up imposing a burden on the UK, which had never had a motive to integrate with Europe in the first place, having joined for economic reasons. This fueled popular protest, which culminated in the vote to leave the EU.
However, the withdrawal is not without disadvantages. First of all, the "single passport rule" (whereby a financial institution holding a license from any country within the EU can conduct its business anywhere within the area) becomes inapplicable on withdrawal from the EU. The City of London, which was quick to deregulate finance and built its position as the "financial center of Europe," will lose that position, and countermeasures will be needed. In addition, the UK served as the gateway to the EU for Japan and for English-speaking countries outside Europe: these countries built factories first in the UK, where business can be conducted in English, and sold the products made there into EU markets. Japanese companies have also invested heavily in the UK. However, if tariffs are imposed on exports from the UK to the EU, that model no longer works, and new investment in the UK will decrease. The UK's finance minister has said that corporate tax will be reduced as soon as possible, but further measures need to be taken.
Even so, the withdrawal could produce positive effects, provided a new framework is developed. First of all, withdrawal negotiations with the EU need to go well. The Canada-EU Comprehensive Economic and Trade Agreement (CETA) offers a useful model: the UK should aim to build a relationship with the EU that eases trade by reducing tariffs, as CETA does, but without the free movement of people. In addition, the UK has a close, almost brotherly relationship with the US, with which the EU itself has spent a long time trying to conclude negotiations, so the UK may find it easier to develop a new trade agreement framework with the US, one that could also involve the British Commonwealth countries. If it can do so, the UK could gain greater opportunities than it would have had by remaining in the EU.
The result of the UK referendum surprised Japan, where pessimistic speculation spread. Overreacting to bad news is something of a national trait in Japan, but we should now watch the UK's response calmly and carefully, and then build a new relationship with the UK. The withdrawal negotiations will take two years. Japan should watch the hard-nosed process of British diplomacy calmly and closely.
* The information contained herein is current as of August 2016.
| https://english-meiji.net/articles/33/ | 21
24 | Reforestation (occasionally, Reafforestation) is the natural or intentional restocking of existing forests and woodlands (forestation) that have been depleted, usually through deforestation, but also after clearcutting.
Reforestation can be used to undo and rectify the effects of deforestation and improve the quality of human life by absorbing pollution and dust from the air, rebuilding natural habitats and ecosystems, mitigating global warming via biosequestration of atmospheric carbon dioxide, and providing resources for harvest, particularly timber but also non-timber forest products. Since the beginning of the 21st century, significant attention has been given to reforestation as one of the best techniques for mitigating climate change. To this end, the international community has agreed on Sustainable Development Goal 15, which promotes sustainable management of all types of forests, a halt to deforestation, restoration of degraded forests, and increased afforestation and reforestation.
Though net loss of forest area has decreased substantially since 1990, the world is unlikely to achieve the target set forth in the United Nations Strategic Plan for Forests to increase forest area by 3 percent by 2030. While deforestation is taking place in some areas, new forests are being established through natural expansion or deliberate efforts in others. As a result, the net loss of forest area is less than the rate of deforestation and it too is decreasing: from 7.8 million hectares per year in the 1990s to 4.7 million hectares per year during 2010–2020. In absolute terms, the global forest area decreased by 178 million hectares between 1990 and 2020, which is an area about the size of Libya.
A debated issue in managed reforestation is whether or not the succeeding forest will have the same biodiversity as the original forest. If the forest is replaced with only one species of tree and all other vegetation is prevented from growing back, a monoculture forest similar to agricultural crops would be the result. However, most reforestation involves the planting of different selections of seedlings taken from the area, often of multiple species. Another important factor is the natural regeneration of a wide variety of plant and animal species that can occur on a clear cut. In some areas the suppression of forest fires for hundreds of years has resulted in large single aged and single species forest stands. The logging of small clear cuts and/or prescribed burning actually increases the biodiversity in these areas by creating a greater variety of tree stand ages and species.
Reforestation need not only be used for the recovery of accidentally destroyed forests. In some countries, such as Finland, many of the forests are managed by the wood products and pulp and paper industry. In such an arrangement, as with other crops, trees are planted to replace those that have been cut. The Finnish Forest Act of 1996 obliges the forest to be replanted after felling. In such circumstances, the industry can cut the trees in a way that allows easier reforestation. The wood products industry systematically replaces many of the trees it cuts, employing large numbers of summer workers for tree planting work. For example, in 2010, Weyerhaeuser reported planting 50 million seedlings. However, replanting an old-growth forest with a plantation does not replace the old forest with one of the same characteristics.
In just 20 years, a teak plantation in Costa Rica can produce up to about 400 m³ of wood per hectare. As the natural teak forests of Asia become more scarce or difficult to obtain, the prices commanded by plantation-grown teak grow higher every year. Other species, such as mahogany, grow more slowly than teak in Tropical America but are also extremely valuable. Faster growers include pine, eucalyptus, and Gmelina.
Reforestation, if several indigenous species are used, can provide other benefits in addition to financial returns, including restoration of the soil, rejuvenation of local flora and fauna, and the capturing and sequestering of 38 tons of carbon dioxide per hectare per year.
The reestablishment of forests is not just simple tree planting. Forests are made up of a community of species and they build dead organic matter into soils over time. A major tree-planting program could enhance the local climate and reduce the demands of burning large amounts of fossil fuels for cooling in the summer.
Forests are an important part of the global carbon cycle because trees and plants absorb carbon dioxide through photosynthesis. By removing this greenhouse gas from the air, forests function as terrestrial carbon sinks, meaning they store large amounts of carbon. At any given time, forests hold as much as double the amount of carbon present in the atmosphere. Forests remove around three billion tons of carbon every year, which amounts to about 30% of all anthropogenic carbon dioxide emissions. Therefore, an increase in the overall forest cover around the world would mitigate global warming.
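As a rough, illustrative check of how the carbon figure translates into a share of CO2 emissions (the emissions total used below is an assumed round number, not taken from the source), the arithmetic looks like this:

```python
# Back-of-envelope check: convert annual forest carbon uptake (GtC) to CO2
# and compare it with an assumed global anthropogenic emissions total.

CARBON_REMOVED_GT_C = 3.0   # GtC per year absorbed by forests (figure from the text)
CO2_PER_C = 44.0 / 12.0     # molar-mass ratio of CO2 to carbon
EMISSIONS_GT_CO2 = 37.0     # assumed annual anthropogenic CO2 emissions, GtCO2

co2_removed = CARBON_REMOVED_GT_C * CO2_PER_C   # ~11 GtCO2 per year
share = co2_removed / EMISSIONS_GT_CO2          # ~0.30

print(f"CO2 removed by forests: {co2_removed:.1f} GtCO2/yr")
print(f"Share of anthropogenic CO2 emissions: {share:.0%}")
```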
At the beginning of the 21st century, interest in reforestation grew over its potential to mitigate climate change. Even without displacing agriculture and cities, earth can sustain almost one billion hectares of new forests. This would remove 25% of carbon dioxide from the atmosphere and reduce its concentration to levels that existed in the early 20th century. A temperature rise of 1.5 degrees would reduce the area suitable for forests by 20% by the year 2050, because some tropical areas will become too hot. The countries that have the most forest-ready land are: Russia, Canada, Brazil, Australia, United States and China.
The four major strategies are:
Implementing the first strategy is supported by many organizations around the world. For example, in China, the Jane Goodall Institute, through its Shanghai Roots & Shoots division, launched the Million Tree Project in Kulun Qi, Inner Mongolia, to plant one million trees. China used 24 million hectares of new forest to offset 21% of Chinese fossil fuel emissions in 2000. In Java, Indonesia, newlywed couples give five seedlings to whoever conducts their wedding, and each divorcing couple gives 25 seedlings to whoever grants the divorce. Costa Rica doubled its forest cover in 30 years using its system of grants and other payments for environmental services, including compensation for landowners. These payments are funded through international donations and nationwide taxes.
The second strategy has to do with selecting species for tree-planting. In theory, planting any kind of tree to produce more forest cover would absorb more carbon dioxide from the atmosphere. However, a genetically modified variant might grow much faster than unmodified specimens. Some of these cultivars are under development. Such fast-growing trees would be planted for harvest and can absorb carbon dioxide faster than slower-growing trees.
Impacts on temperature are affected by the location of the forest. For example, reforestation in boreal or subarctic regions has less impact on climate. This is because it substitutes a high-albedo, snow-dominated region with a lower-albedo forest canopy. By contrast, tropical reforestation projects lead to a positive change such as the formation of clouds. These clouds then reflect the sunlight, lowering temperatures.
Planting trees in tropical climates with wet seasons has another advantage. In such a setting, trees grow more quickly (fixing more carbon) because they can grow year-round. Trees in tropical climates have, on average, larger, brighter, and more abundant leaves than trees in non-tropical climates. A study of the girth of 70,000 trees across Africa has shown that tropical forests fix more carbon dioxide pollution than previously realized. The research suggested almost one fifth of fossil fuel emissions are absorbed by forests across Africa, Amazonia and Asia. Simon Lewis stated, "Tropical forest trees are absorbing about 18% of the carbon dioxide added to the atmosphere each year from burning fossil fuels, substantially buffering the rate of change."
As of 2008, roughly 13 million hectares of tropical forest were being deforested every year. Reducing this would reduce the amount of planting needed to achieve a given degree of mitigation.
A study finds that almost 300 million people live on tropical forest restoration opportunity land in the Global South, constituting a large share of low-income countries' populations, and argues for prioritized inclusion of "local communities" in forest restoration projects.
Planting new trees often results in up to 90% of the seedlings failing. However, even in deforested areas, old root systems often survive. Regrowth from these roots can be accelerated by pruning and coppicing, in which some of the new shoots are cut back and often used for charcoal, itself a major driver of deforestation. Since no new seeds are planted, this approach is cheaper. The regrowing trees are also much more likely to survive, because their root systems already exist and can tap into groundwater during harsher, rainless seasons. While this method has existed for centuries, it is now sometimes referred to as farmer-managed natural regeneration.
Some incentives for reforestation can be as simple as financial compensation. Streck and Scholz (2006) explain how a group of scientists from various institutions developed a compensated-reduction approach to deforestation that would reward developing countries for halting further deforestation. Countries that participate and choose to reduce their emissions from deforestation during a committed period of time would receive financial compensation for the carbon dioxide emissions they avoided. To raise the payments, the host country would issue government bonds or negotiate a loan with a financial institution willing to take part in the compensation promised to the other country. The funds received by the country could be invested in finding alternatives to extensive forest clearing. The whole process of cutting emissions would be voluntary, but once a country agreed to lower its emissions it would be obligated to meet that commitment; if it failed to do so, the shortfall would be added to its target for the next commitment period. The authors of these proposals see this as a solely government-to-government agreement; private entities would not participate in the compensation trades.
The 2020 World Economic Forum, held in Davos, announced the creation of the Trillion Tree Campaign, which is an initiative aiming to plant 1 trillion trees across the globe. The implementation can have big environmental and societal benefits but needs to be tailored to local conditions.
One plan in this region involves planting a nine-mile-wide band of trees along the southern border of the Sahara Desert to stop its expansion to the south. The Great Green Wall initiative is a pan-African proposal to "green" the continent from west to east in order to battle desertification. It aims at tackling poverty (through employment of the workers required for the project) and the degradation of soils in the Sahel-Saharan region, focusing on a strip of land 15 km (9 mi) wide and 7,500 km (4,750 mi) long from Dakar to Djibouti. As of May 2020, 21 countries had joined the project, many of them directly affected by the expansion of the Sahara desert. The initiative is expected to create 10 million green jobs by 2030.
In 2019, Ethiopia began a massive tree-planting campaign, "Green Legacy", with a target of planting 4 billion trees in a single year. In one day alone, over 350 million trees were planted.
Through reforestation and environmental conservation, Costa Rica doubled its forest cover in 30 years.
Costa Rica has a long-standing commitment to the environment. The country is now a leader in sustainability, biodiversity, and other environmental protections. It wants to be completely fossil-fuel free by 2050. As of 2019, the country had generated all of its electric power from renewable sources for three years running, and it has committed to being carbon-free and plastic-free by 2021.
As of 2019, half of the country's land surface is covered with forests. They absorb a huge amount of carbon dioxide, combating climate change.
In the 1940s, more than 75% of the country was covered in mostly tropical rainforests and other indigenous woodlands. Between the 1940s and 1980s, extensive, uncontrolled logging led to severe deforestation. By 1983, only 26% of the country had forest cover. Realizing the devastation, policymakers took a stand. Through a continued environmental focus they were able to turn things around, to the point that forest cover has now increased to 52%, twice the 1983 level.
A recognized world leader in ecotourism and conservation, Costa Rica has pioneered the development of payments for environmental services. Its extensive system of environmental protection has encouraged conservation and reforestation of the land by providing grants for environmental services. The system is not only advanced for its time but also unparalleled in the world, and it has received great international attention.
The country has established programs to compensate landowners for reforestation. These payments are funded through international donations and nationwide taxes. The initiative is helping to protect the forests in the country.
"Robert Blasiak, a research fellow at the University of Tokyo, said: "Taking a closer look at what Costa Rica has accomplished in the past 30 years may be just the impetus needed to spur real change on a global scale."
"Costa Rican President Carlos Alvarado has called the climate crisis "the greatest task of our generation"; one that his country is strongly committed to excel in. "
Natural Resources Canada (the federal Department of Natural Resources) states that national forest cover decreased by 0.34% from 1990 to 2015, and that Canada has the lowest deforestation rate in the world. The forest industry is one of Canada's main industries, contributing about 7% of the Canadian economy, and about 9% of the forests on Earth are in Canada. Canada therefore has many policies and laws committing it to sustainable forest management. For example, 94% of Canadian forests are public land, and the government requires replanting after harvesting on public forest land.
In China, extensive replanting programs have existed since the 1970s, which have had overall success. The forest cover has increased from 12% of China's land area to 16%. However, specific programs have had limited success. The "Green Wall of China," an attempt to limit the expansion of the Gobi Desert, is planned to be 2,800 miles (4,500 km) long and to be completed in 2050. China plans to plant 26 billion trees in the next decade; that is, two trees for every Chinese citizen per year. China requires that students older than 11 years old plant one tree a year until their high school graduation.
Between 2013 and 2018, China planted 338,000 square kilometres of forests, at a cost of $82.88 billion. By 2018, 21.7% of China's territory was covered by forests, a figure the government wants to increase to 26% by 2035. The total area of China is 9,596,961 square kilometres, so about 412,669 square kilometres more need to be planted. According to the government's plan, by 2050, 30% of China's territory should be covered by forests.
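As a quick illustration (not from the source), the extra area implied by those percentages can be checked with a couple of lines of arithmetic:

```python
# Verify the planting gap implied by the figures above:
# (target share - current share) x total land area.

TOTAL_AREA_KM2 = 9_596_961   # total area of China, km^2
CURRENT_SHARE = 0.217        # forest cover in 2018
TARGET_SHARE = 0.26          # government target for 2035

additional_km2 = (TARGET_SHARE - CURRENT_SHARE) * TOTAL_AREA_KM2
print(f"Additional forest area needed: {additional_km2:,.0f} km^2")  # ~412,669 km^2
```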
In 2017, the Saihanba Afforestation Community won the UN Champions of the Earth Award in the Inspiration and Action category for their successful reforestation efforts, which began upon discovering the survival of a single tree.
The first historically proven successful method of afforestation with coniferous seeds on a large scale was developed in 1368 by the Nuremberg councillor and merchant Peter Stromer (around 1315-1388) in the Nuremberg Reichswald. This forest area thus became the first artificial forest in the world and Stromer the "father of forest culture".
Reforestation is required as part of the federal forest law. 31% of Germany is forested, according to the second forest inventory of 2001–2003. The size of the forest area in Germany increased between the first and the second forest inventory due to forestation of degenerated bogs and agricultural areas.
Jadav Payeng has received national awards for his reforestation efforts; the forest he planted is known as the "Molai forest". He single-handedly planted 1,400 hectares of forest on a bank of the Brahmaputra river. There are active reforestation efforts throughout the country. In 2016, more than 50 million trees were planted in Uttar Pradesh, and in 2017 more than 66 million trees were planted in Madhya Pradesh. In addition to these and individual efforts, startup companies such as Afforest are being created across the country to work on reforestation. A great deal of planting is being carried out across India, but survival rates are very poor; for mass plantations the survival rate is often below 20%. To improve forest cover and achieve the national target of 33% forest cover, the way planting is done needs to change. Rather than mass planting alone, effort is needed on performance measurement and the tracking of tree growth. On this account, the non-profit Ek Kadam Sansthan has prepared a module for mass tracking of plantations; a pilot has been completed successfully, and nationwide implementation is hoped for by the end of 2021.
In 2019 the government of Ireland decided to plant 440 million trees by 2040. The decision is part of the government's plan to make Ireland carbon neutral by 2050 with renewable energy, land use change and carbon tax.
Since 1948, large reforestation and afforestation projects have been carried out in Israel; 240 million trees have been planted. The carbon sequestration rate in these forests is similar to that of European temperate forests.
The Ministry of Agriculture, Forestry and Fisheries explains that about two-thirds of Japan's land is covered with forest, a share that remained almost unchanged from 1966 to 2012. To meet its Paris Agreement commitment, Japan needs to cut greenhouse gas emissions by 26% from 2013 levels by 2030, and it is counting on forestry to contribute about 2 percentage points of that reduction.
Mass pollution of the environment and of human health, along with the related deforestation, water pollution, smoke damage, and soil loss caused by mining operations in Ashio, Tochigi, became Japan's first major environmental and social issue; efforts led by Shōzō Tanaka grew into large campaigns against the copper operation. This led to the creation of the Watarase retarding basin ("Watarase Yusuichi"), built to settle the pollutants, which is a Ramsar site today. Reforestation had to be conducted as part of an afforestation effort because the land could not recover on its own: serious soil pollution and the loss of woodland had destroyed the soil in which plants could grow, so healthy soil had to be brought in from outside. Thanks to efforts starting around 1897, about 50% of the once-bare mountains are now green again.
For thousands of years, Lebanon was covered by forests. One species of particular interest, Cedrus libani, was exceptionally valuable and was almost eliminated by lumbering operations. Virtually every ancient culture that shared the Mediterranean Sea harvested these trees, from the Phoenicians, who used cedar, pine and juniper to build their famous boats, to the Romans, who cut them down for lime-burning kilns, to the early 20th century, when the Ottomans used much of the remaining cedar forest of Lebanon as fuel for steam trains. Despite two millennia of deforestation, forests in Lebanon still cover 13.6% of the country, and other wooded lands represent 11%.
Law No. 558, ratified by the Lebanese Parliament on April 19, 1996, aims to protect and expand existing forests, classifying all forests of cedar, fir, high juniper (Juniperus excelsa), evergreen cypress (Cupressus sempervirens) and other trees, whether diverse or homogeneous and whether state-owned or not, as conserved forests.
Since 2011, more than 600,000 trees, including cedars and other native species, have been planted throughout Lebanon as part of the Lebanon Reforestation Initiative, which aims to restore Lebanon's native forests. Extensive reforestation of cedar, financed locally and by international charities, is being carried out in the Mediterranean region, particularly in Lebanon and Turkey, where over 50 million young cedars are being planted annually.
The Lebanon Reforestation Initiative has been working since 2012 with tree nurseries throughout Lebanon to help grow stronger tree seedlings that are better suited to survive once planted.
The Billion Tree Tsunami was launched in 2014 by the government of Khyber Pakhtunkhwa (KPK), Pakistan, under Imran Khan, as a response to the challenge of global warming; it set out to plant one billion trees. Pakistan's Billion Tree Tsunami restored 350,000 hectares of forests and degraded land, surpassing its Bonn Challenge commitment.
In 2018, Pakistan's prime minister declared that the country will plant 10 billion trees in the next five years.
In 2020, the Pakistani government launched an initiative to hire 63,600 laborers to plant trees in the northern Punjab region, with indigenous species such as acacia, mulberry and moringa. This initiative was meant to alleviate unemployment caused by lockdowns to mitigate the spread of COVID-19.
In 2011, the Philippines established the National Greening Program as a priority program to help reduce poverty, promote food security, environmental stability, and biodiversity conservation, as well as enhance climate change mitigation and adaptation in the country. The program paved the way for the planting of almost 1.4 billion seedlings in about 1.66 million hectares nationwide during the 2011-2016 period. The Food and Agriculture Organization of the United Nations ranked the Philippines fifth among countries reporting the greatest annual forest area gain, which reached 240,000 hectares during the 2010-2015 period.
4,000 years ago Anatolia was 60% to 70% forested. Although the flora of Turkey remains more biodiverse than that of many European countries, deforestation occurred during both prehistoric and historic times, including the Roman and Ottoman periods.
Since the first forest code of 1937, the official government definition of 'forest' has varied. According to the current definition, 21 million hectares are forested, an increase of about 1 million hectares over the past 30 years, but only about half of this is 'productive'. However, according to the United Nations Food and Agriculture Organization definition of forest, about 12 million hectares were forested in 2015, about 15% of the land surface.
The amount of Turkey's greenhouse gas emissions removed by its forests is very uncertain. However, a new assessment is being made with the help of satellites and new soil measurements, and better information should be available by 2020.
According to the World Resources Institute's "Atlas of Forest Landscape Restoration Opportunities", 50 million hectares are potential forest land, an area similar to the ancient Anatolian forest mentioned above. This could help limit climate change in Turkey. To help preserve the biodiversity of Turkey, more sustainable forestry has been suggested. Improved rangeland management is also needed.
National Forestation Day is on 11 November. However, according to the agriculture and forestry trade union, although volunteers planted a record number of trees in 2019, most had died by 2020, in part due to lack of rainfall.
It is the stated goal of the US Forest Service to manage forest resources sustainably. This includes reforestation after timber harvest, among other programs.
United States Department of Agriculture (USDA) data show that forest occupied about 46% of total U.S. land in 1630 (when European settlers began to arrive in large numbers) but had decreased to 34% by 1910. After 1910, forest area has remained almost constant even though the U.S. population has increased substantially. The U.S. Forest Service was established in the early 20th century, in part to address concern over natural disasters caused by deforestation, and new reforestation programs and federal laws such as the Knutson-Vandenberg Act (1930) were implemented. The U.S. Forest Service states that human-directed reforestation is required to support natural regeneration, and the agency engages in ongoing research into effective ways to restore forests.
As of 2020, the United States was planting about 2.5 billion trees per year. At the beginning of 2020, after President Donald Trump joined the Trillion Tree Campaign, the Republican Party proposed a bill that would increase the number to 3.3 billion.
Ecosia is a non-profit organisation based in Berlin, Germany that has planted over 100 million trees worldwide as of July 2020.
Ecologi is an organisation that has its members pay a monthly fee to offset their carbon emissions, primarily through tree planting. As well as this they work to promote sustainability and low carbon alternatives. So far over 2 million trees have been planted through Ecologi.
Shanghai Roots & Shoots, a division of the Jane Goodall Institute, launched The Million Tree Project in Kulun Qi, Inner Mongolia to plant one million trees to stop desertification and alleviate global warming.
Team Trees was a 2019 fundraiser with an initiative to plant 20 million trees. The initiative was started by American YouTubers MrBeast and Mark Rober, and was mostly supported by YouTubers. The Arbor Day Foundation will work with its local partners around the world to plant one tree for each dollar they raise.
Many companies are trying to achieve carbon offsets through nature-based solutions such as reforestation (including mangrove forests) and soil restoration. Among them are Microsoft and Eni. Increasing the Earth's forest cover by 25% would offset the human emissions of the last 20 years. In any case, it will be necessary to pull from the atmosphere the CO2 that has already been emitted; however, this can work only if companies also stop pumping new emissions into the atmosphere and stop deforestation.
A similar concept, afforestation, refers to the process of restoring and recreating areas of woodland or forest that may have existed long ago but were deforested or otherwise removed at some point in the past, or that lacked forest naturally (e.g., natural grasslands). Sometimes the term "re-afforestation" is used to distinguish between the original forest cover and the later re-growth of forest in an area. Special tools, e.g. tree planting bars, are used to make the planting of trees easier and faster.
Another alternative strategy, proforestation, counteracts the negative environmental and ecological effects of deforestation by growing an existing forest intact to its full ecological potential.
Reforestation competes with other land uses, such as food production, livestock grazing, and living space, for further economic growth. Reforestation often has the tendency to create large fuel loads, resulting in significantly hotter combustion than fires involving low brush or grasses. Reforestation can divert large amounts of water from other activities. Reforesting sometimes results in extensive canopy creation that prevents growth of diverse vegetation in the shadowed areas and generating soil conditions that hamper other types of vegetation. Trees used in some reforesting efforts (e.g., Eucalyptus globulus) tend to extract large amounts of moisture from the soil, preventing the growth of other plants.
There is also the risk that, through a forest fire or insect outbreak, much of the stored carbon in a reforested area could make its way back to the atmosphere. Reduced harvesting rates and fire suppression have caused an increase in the forest biomass in the western United States over the past century. This causes an increase of about a factor of four in the frequency of fires due to longer and hotter dry seasons.
The European Commission found that, in terms of environmental services, it is better to avoid deforestation than to allow deforestation and subsequently reforest, as deforestation leads to irreversible effects in terms of biodiversity loss and soil degradation.
Furthermore, the probability that legacy carbon will be released from soil is higher in younger boreal forest. Global greenhouse gas emissions caused by damage to tropical rainforests may have been underestimated by a factor of six. Additionally, the benefits of afforestation or reforestation lie farther in the future than those of proforestation: it takes much longer, several decades, for newly planted trees to deliver the same carbon sequestration benefits as mature trees in tropical forests, and hence as limiting deforestation in the first place. Some researchers note that rather than planting entirely new areas, reconnecting forested areas and restoring forest edges, to protect the mature core and make forests more resilient and longer-lasting, should be prioritized. | https://everything.explained.today/Reforestation/ | 21
19 | Wolf–Rayet stars, often abbreviated as WR stars, are a rare heterogeneous set of stars with unusual spectra showing prominent broad emission lines of ionised helium and highly ionised nitrogen or carbon. The spectra indicate very high surface enhancement of heavy elements, depletion of hydrogen, and strong stellar winds. The surface temperatures of known Wolf-Rayet stars range from 20,000 K to around 210,000 K, hotter than almost all other kinds of stars. They were previously called W-type stars referring to their spectral classification.
Classic (or Population I) Wolf–Rayet stars are evolved, massive stars that have completely lost their outer hydrogen and are fusing helium or heavier elements in the core. A subset of the population I WR stars show hydrogen lines in their spectra and are known as WNh stars; they are young extremely massive stars still fusing hydrogen at the core, with helium and nitrogen exposed at the surface by strong mixing and radiation-driven mass loss. A separate group of stars with WR spectra are the central stars of planetary nebulae (CSPNe), post-asymptotic giant branch stars that were similar to the Sun while on the main sequence, but have now ceased fusion and shed their atmospheres to reveal a bare carbon-oxygen core.
All Wolf–Rayet stars are highly luminous objects due to their high temperatures—thousands of times the bolometric luminosity of the Sun (L☉) for the CSPNe, hundreds of thousands L☉ for the Population I WR stars, to over a million L☉ for the WNh stars—although not exceptionally bright visually since most of their radiation output is in the ultraviolet.
In 1867, using the 40 cm Foucault telescope at the Paris Observatory, astronomers Charles Wolf and Georges Rayet discovered three stars in the constellation Cygnus (HD 191765, HD 192103 and HD 192641, now designated as WR 134, WR 135, and WR 137 respectively) that displayed broad emission bands on an otherwise continuous spectrum. Most stars only display absorption lines or bands in their spectra, as a result of overlying elements absorbing light energy at specific frequencies, so these were clearly unusual objects.
The nature of the emission bands in the spectra of a Wolf–Rayet star remained a mystery for several decades. Edward C. Pickering theorized that the lines were caused by an unusual state of hydrogen, and it was found that this "Pickering series" of lines followed a pattern similar to the Balmer series when half-integer quantum numbers were substituted. It was later shown that these lines resulted from the presence of helium, a chemical element that had been discovered in 1868. Pickering noted similarities between Wolf–Rayet spectra and nebular spectra, and this similarity led to the conclusion that some or all Wolf–Rayet stars were the central stars of planetary nebulae.
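As a brief aside (not part of the original text), the apparent half-integer pattern follows directly from the Rydberg formula for singly ionised helium: the Pickering series consists of He II transitions down to the n = 4 level, and with nuclear charge Z = 2 it can be rewritten in the form of the hydrogen Balmer formula, ignoring the small reduced-mass correction to the Rydberg constant R:

```latex
\frac{1}{\lambda}
  = R\,Z^{2}\left(\frac{1}{4^{2}}-\frac{1}{n^{2}}\right)
  = R\left(\frac{1}{2^{2}}-\frac{1}{(n/2)^{2}}\right),
  \qquad Z = 2,\; n = 5, 6, 7, \dots
```

For odd n the quantity n/2 is a half-integer, so to observers comparing the lines with the Balmer series the spectrum seemed to require half-integer quantum numbers.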
By 1929, the width of the emission bands was being attributed to Doppler broadening, implying that the gas surrounding these stars must be moving with velocities of 300–2400 km/s along the line of sight. The conclusion was that a Wolf–Rayet star is continually ejecting gas into space, producing an expanding envelope of nebulous gas. The force ejecting the gas at the high velocities observed is radiation pressure. It was well known that many stars with Wolf–Rayet type spectra were the central stars of planetary nebulae, but also that many were not associated with an obvious planetary nebula or any visible nebulosity at all.
In addition to helium, Carlyle Smith Beals identified emission lines of carbon, oxygen and nitrogen in the spectra of Wolf–Rayet stars. In 1938, the International Astronomical Union classified the spectra of Wolf–Rayet stars into types WN and WC, depending on whether the spectrum was dominated by lines of nitrogen or carbon-oxygen respectively.
In 1969, several CSPNe with strong O VI emission lines were grouped under a new "O VI sequence", or just OVI type. These were subsequently referred to as [WO] stars. Similar stars not associated with planetary nebulae were described shortly after, and the WO classification was eventually also adopted for population I WR stars.
The understanding that certain late, and sometimes not-so-late, WN stars with hydrogen lines in their spectra are at a different stage of evolution from hydrogen-free WR stars has led to the introduction of the term WNh to distinguish these stars generally from other WN stars. They were previously referred to as WNL stars, although there are late-type WN stars without hydrogen as well as WR stars with hydrogen as early as WN5.
Wolf–Rayet stars were named on the basis of the strong broad emission lines in their spectra, identified with helium, nitrogen, carbon, silicon, and oxygen, but with hydrogen lines usually weak or absent. The first system of classification split these into stars with dominant lines of ionised nitrogen (N III, N IV, and N V) and those with dominant lines of ionised carbon (C III and C IV) and sometimes oxygen (O III – O VI), referred to as WN and WC respectively. The two classes WN and WC were further split into temperature sequences WN5–WN8 and WC6–WC8 based on the relative strengths of the 541.1 nm He II and 587.5 nm He I lines. Wolf–Rayet emission lines frequently have a broadened absorption wing (P Cygni profile) suggesting circumstellar material. A WO sequence has also been separated from the WC sequence for even hotter stars where emission of ionised oxygen dominates that of ionised carbon, although the actual proportions of those elements in the stars are likely to be comparable. WC and WO spectra are formally distinguished based on the presence or absence of C III emission. WC spectra also generally lack the O VI lines that are strong in WO spectra.
The WN spectral sequence was expanded to include WN2–WN9, and the definitions refined based on the relative strengths of the N III lines at 463.4–464.1 nm and 531.4 nm, the N IV lines at 347.9–348.4 nm and 405.8 nm, and the N V lines at 460.3 nm, 461.9 nm, and 493.3–494.4 nm. These lines are well separated from areas of strong and variable He emission and the line strengths are well correlated with temperature. Stars with spectra intermediate between WN and Ofpe have been classified as WN10 and WN11 although this nomenclature is not universally accepted.
The type WN1 was proposed for stars with neither N IV nor N V lines, to accommodate Brey 1 and Brey 66, which appeared to be intermediate between WN2 and WN2.5. The relative line strengths and widths for each WN sub-class were later quantified, and the ratio between the 541.1 nm He II and 587.5 nm He I lines was introduced as the primary indicator of the ionisation level and hence of the spectral sub-class. The need for WN1 disappeared and both Brey 1 and Brey 66 are now classified as WN3b. The somewhat obscure WN2.5 and WN4.5 classes were dropped.
|Spectral Type||Original criteria||Updated criteria||Other features|
|WN2||N V weak or absent||N V and N IV absent||Strong He II, no He I|
|WN2.5||N V present, N IV absent||Obsolete class|
|WN3||N IV ≪ N V, N III weak or absent||He II/He I > 10, He II/C IV > 5||Peculiar profiles, unpredictable N V strength|
|WN4||N IV ≈ N V, N III weak or absent||4 < He II/He I < 10, N V/N III > 2||C IV present|
|WN4.5||N IV > N V, N III weak or absent||Obsolete class|
|WN5||N III ≈ N IV ≈ N V||1.25 < He II/He I < 8, 0.5 < N V/N III < 2||N IV or C IV > He I|
|WN6||N III ≈ N IV, N V weak||1.25 < He II/He I < 8, 0.2 < N V/N III < 0.5||C IV ≈ He I|
|WN7||N III > N IV||0.65 < He II/He I < 1.25||Weak P-Cyg profile He I, He II > N III, C IV > He I|
|WN8||N III ≫ N IV||He II/He I < 0.65||Strong P-Cyg profile He I, He II ≈ N III, C IV weak|
|WN9||N III > N II, N IV absent||N III > N II, N IV absent||P-Cyg profile He I|
|WN10||N III ≈ N II||N III ≈ N II||H Balmer, P-Cyg profile He I|
|WN11||N III weak or absent, N II present||N III ≈ He II, N III weak or absent,||H Balmer, P-Cyg profile He I, Fe III present|
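To make the quantitative criteria above concrete, here is a minimal, illustrative sketch (not an official classification tool) that assigns an approximate WN sub-type from two peak-intensity line ratios: the He II 541.1 nm / He I 587.5 nm ratio as the primary ionisation indicator and the N V / N III ratio as a tiebreaker. Real classification also weighs line widths, hydrogen features and peculiarities, and the published ranges overlap at the boundaries.

```python
def classify_wn(he2_he1: float, n5_n3: float) -> str:
    """Approximate WN sub-type from the He II 541.1/He I 587.5 ratio and the
    N V/N III ratio, following the quantitative boundaries in the table above.
    The published ranges overlap, so boundary cases are resolved with the
    nitrogen ratio and the result should be treated as indicative only."""
    if he2_he1 > 10:
        return "WN3"                       # He II/He I > 10 (WN2 if He I is absent)
    if he2_he1 > 4 and n5_n3 > 2:
        return "WN4"                       # 4 < He II/He I < 10, N V/N III > 2
    if he2_he1 > 1.25:
        return "WN5" if n5_n3 >= 0.5 else "WN6"
    if he2_he1 > 0.65:
        return "WN7"
    return "WN8 or later"                  # later types also need N II/N III criteria


print(classify_wn(he2_he1=6.0, n5_n3=3.0))   # WN4
print(classify_wn(he2_he1=2.0, n5_n3=1.0))   # WN5
print(classify_wn(he2_he1=0.4, n5_n3=0.1))   # WN8 or later
```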
The WC spectral sequence was expanded to include WC4–WC11, although some older papers have also used WC1–WC3. The primary emission lines used to distinguish the WC sub-types are C II at 426.7 nm, C III at 569.6 nm, the C III/C IV blend at 465.0 nm, C IV at 580.1–581.2 nm, and the O V (and O III) blend at 557.2–559.8 nm. The sequence was extended to include WC10 and WC11, and the subclass criteria were quantified based primarily on the relative strengths of carbon lines, so that they rely on ionisation level even where the relative abundances of carbon and oxygen vary.
|Spectral type||Original criteria||Quantitative criteria||Other features|
|WC4||C IV strong, C II weak, O V moderate||C IV/C III > 32||O V/C III > 2.5||O VI weak or absent|
|WC5||C III ≪ C IV, C III < O V||12.5 < C IV/C III < 32||0.4 < C III/O V < 3||O VI weak or absent|
|WC6||C III ≪ C IV, C III > O V||4 < C IV/C III < 12.5||1 < C III/O V < 5||O VI weak or absent|
|WC7||C III < C IV, C III ≫ O V||1.25 < C IV/C III < 4||C III/O V > 1.25||O VI weak or absent|
|WC8||C III > C IV, C II absent, O V weak or absent||0.5 < C IV/C III < 1.25||C IV/C II > 10||He II/He I > 1.25|
|WC9||C III > C IV, C II present, O V weak or absent||0.2 < C IV/C III < 0.5||0.6 < C IV/C II < 10||0.15 < He II/He I < 1.25|
|WC10||0.06 < C IV/C III < 0.15||0.03 < C IV/C II < 0.6||He II/He I < 0.15|
|WC11||C IV/C III < 0.06||C IV/C II < 0.03||He II absent|
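In the same spirit as the WN sketch above, the WC boundaries can be turned into a rough lookup on the C IV/C III ratio alone; this is only an illustration, since the full criteria also use the C IV/C II, C III/O V and He II/He I ratios, and there is a small gap between the published WC9 and WC10 ranges.

```python
# Approximate WC sub-type from the C IV 580.1-581.2 nm / C III 569.6 nm ratio,
# using the quantitative boundaries tabulated above (illustrative only).
WC_BOUNDARIES = [
    (32.0, "WC4"),
    (12.5, "WC5"),
    (4.0, "WC6"),
    (1.25, "WC7"),
    (0.5, "WC8"),
    (0.2, "WC9"),
    (0.06, "WC10"),  # published WC10 range is 0.06-0.15; 0.15-0.2 is unassigned
]

def classify_wc(c4_c3: float) -> str:
    for threshold, subtype in WC_BOUNDARIES:
        if c4_c3 > threshold:
            return subtype
    return "WC11"    # C IV/C III < 0.06


print(classify_wc(20.0))   # WC5
print(classify_wc(0.3))    # WC9
print(classify_wc(0.01))   # WC11
```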
For WO-type stars the main lines used are C IV at 580.1 nm, O IV at 340.0 nm, the O V (and O III) blend at 557.2–559.8 nm, O VI at 381.1–383.4 nm, O VII at 567.0 nm, and O VIII at 606.8 nm. The sequence was expanded to include WO5 and quantified based on the relative strengths of the O VI/C IV and O VI/O V lines. A later scheme, designed for consistency across classical WR stars and CSPNe, returned to the WO1 to WO4 sequence and adjusted the divisions.
|Spectral type||Original criteria||Quantitative criteria||Other features|
|WO1||O VII ≥ O V, O VIII present||O VI/O V > 12.5||O VI/C IV > 1.5||O VII ≥ O V|
|WO2||O VII < O V, C IV < O VI||4 < O VI/O V < 12.5||O VI/C IV > 1.5||O VII ≤ O V|
|WO3||O VII weak or absent, C IV ≈ O VI||1.8 < O VI/O V < 4||0.1 < O VI/C IV < 1.5||O VII ≪ O V|
|WO4||C IV ≫ O VI||0.5 < O VI/O V < 1.8||0.03 < O VI/C IV < 0.1||O VII ≪ O V|
Detailed modern studies of Wolf–Rayet stars can identify additional spectral features, indicated by suffixes to the main spectral classification:
- h for hydrogen emission;
- ha for hydrogen emission and absorption;
- w for weak lines;
- s for strong lines;
- b for broad strong lines;
- d for dust (occasionally vd, pd, or ed for variable, periodic, or episodic dust).
The classification of Wolf–Rayet spectra is complicated by the frequent association of the stars with dense nebulosity, dust clouds, or binary companions. A suffix of "+OB" is used to indicate the presence of absorption lines in the spectrum likely to be associated with a more normal companion star, or "+abs" for absorption lines with an unknown origin.
The hotter WR spectral sub-classes are described as early and the cooler ones as late, consistent with other spectral types. WNE and WCE refer to early type spectra while WNL and WCL refer to late type spectra, with the dividing line approximately at sub-class six or seven. There is no such thing as a late WO-type star. There is a strong tendency for WNE stars to be hydrogen-poor while the spectra of WNL stars frequently include hydrogen lines.
Spectral types for the central stars of planetary nebulae are qualified by surrounding them with square brackets (e.g. [WC4]). They are almost all of the WC sequence with the known [WO] stars representing the hot extension of the carbon sequence. There are also a small number of [WN] and [WC/WN] types, only discovered quite recently. Their formation mechanism is as yet unclear.
Temperatures of the planetary nebula central stars tend to the extremes when compared to population I WR stars, so [WC2] and [WC3] are common and the sequence has been extended to [WC12]. The [WC11] and [WC12] types have distinctive spectra with narrow emission lines and no He II and C IV lines.
Certain supernovae observed before their peak brightness show WR spectra. This is due to the nature of the supernova at this point: a rapidly expanding helium-rich ejecta similar to an extreme Wolf–Rayet wind. The WR spectral features only last a matter of hours, the high ionisation features fading by maximum to leave only weak neutral hydrogen and helium emission, before being replaced with a traditional supernova spectrum. It has been proposed to label these spectral types with an "X", for example XWN5(h). Similarly, classical novae develop spectra consisting of broad emission bands similar to a Wolf–Rayet star. This is caused by the same physical mechanism: rapid expansion of dense gases around an extremely hot central source.
The separation of Wolf–Rayet stars from spectral class O stars of a similar temperature depends on the existence of strong emission lines of ionised helium, nitrogen, carbon, and oxygen, but there are a number of stars with intermediate or confusing spectral features. For example, high luminosity O stars can develop helium and nitrogen in their spectra with some emission lines, while some WR stars have hydrogen lines, weak emission, and even absorption components. These stars have been given spectral types such as O3If∗/WN6 and are referred to as slash stars.
Class O supergiants can develop emission lines of helium and nitrogen, or emission components to some absorption lines. These are indicated by spectral peculiarity suffix codes specific to this type of star:
- f for N iii and He ii emission
- f* for N and He emission with N iv stronger than N iii
- f+ for emission in Si iv in addition to N and He
- parentheses indicating He ii absorption lines instead of emission, e.g. (f)
- double parentheses indicating strong He ii absorption and N iii emission diluted, e.g. ((f+))
These codes may also be combined with more general spectral type qualifiers such as p or a. Common combinations include OIafpe and OIf*, and Ofpe. In the 1970s it was recognised that there was a continuum of spectra from pure absorption class O to unambiguous WR types, and it was unclear whether some intermediate stars should be given a spectral type such as O8Iafpe or WN8-a. The slash notation was proposed to deal with these situations and the star Sk−67°22 was assigned the spectral type O3If*/WN6-A. The criteria for distinguishing OIf*, OIf*/WN, and WN stars have been refined for consistency. Slash star classifications are used when the Hβ line has a P Cygni profile; this is an absorption line in O supergiants and an emission line in WN stars. Criteria for the following slash star spectral types are given, using the nitrogen emission lines at 463.4–464.1 nm, 405.8 nm, and 460.3–462.0 nm, together with a standard star for each type:
|Spectral type||Standard star||Criteria|
|O2If*/WN5||Melnick 35||N iv ≫ N iii, N v ≥ N iii|
|O2.5If*/WN6||WR 25||N iv > N iii, N v < N iii|
|O3.5If*/WN7||Melnick 51||N iv < N iii, N v ≪ N iii|
Another set of slash star spectral types is in use for Ofpe/WN stars. These stars have O supergiant spectra plus nitrogen and helium emission, and P Cygni profiles. Alternatively they can be considered to be WN stars with unusually low ionisation levels and hydrogen. The slash notation for these stars was controversial and an alternative was to extend the WR nitrogen sequence to WN10 and WN11. Other authors preferred to use the WNha notation, for example WN9ha for WR 108. A recent recommendation is to use an O spectral type such as O8Iaf if the 447.1 nm He i line is in absorption and a WR class of WN9h or WN9ha if the line has a P Cygni profile. However, the Ofpe/WN slash notation as well as WN10 and WN11 classifications continue to be widely used.
A third group of stars with spectra containing features of both O class stars and WR stars has been identified. Nine stars in the Large Magellanic Cloud have spectra that contain both WN3 and O3V features, but do not appear to be binaries. Many of the WR stars in the Small Magellanic Cloud also have very early WN spectra plus high excitation absorption features. It has been suggested that these could be a missing link leading to classical WN stars or the result of tidal stripping by a low-mass companion.
The first three Wolf–Rayet stars to be identified, coincidentally all with hot O companions, had already been numbered in the HD catalogue. These stars and others were referred to as Wolf–Rayet stars from their initial discovery but specific naming conventions for them would not be created until 1962 in the "fourth" catalogue of galactic Wolf–Rayet stars. The first three catalogues were not specifically lists of Wolf–Rayet stars and they used only existing nomenclature. The fourth catalogue numbered the Wolf–Rayet stars sequentially in order of right ascension. The fifth catalogue used the same numbers prefixed with MR after the author of the fourth catalogue, plus an additional sequence of numbers prefixed with LS for new discoveries. Neither of these numbering schemes is in common use.
The sixth Catalogue of Galactic Wolf–Rayet stars was the first to actually bear that name, as well as to describe the previous five catalogues by that name. It also introduced the WR numbers widely used ever since for galactic WR stars. These are again a numerical sequence from WR 1 to WR 158 in order of right ascension. The seventh catalogue and its annex use the same numbering scheme and insert new stars into the sequence using lower case letter suffixes, for example WR 102ka for one of the numerous WR stars discovered in the galactic centre. Modern high volume identification surveys use their own numbering schemes for the large numbers of new discoveries. An IAU working group has accepted recommendations to expand the numbering system from the Catalogue of Galactic Wolf–Rayet stars so that additional discoveries are given the closest existing WR number plus a numeric suffix in order of discovery. This applies to all discoveries since the 2006 annex, although some of these have already been named under the previous nomenclature; thus WR 42e is now numbered WR 42-1.
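As a hedged illustration of how the recommended numbering could be applied, the sketch below assigns designations of the form "WR 42-1", "WR 42-2", and so on to new discoveries near an existing entry; the dictionary layout and the function are invented for this example and do not reproduce any official catalogue tool.

```python
def assign_wr_number(catalogue, closest_wr):
    """Give a new discovery the closest existing WR number plus a numeric
    suffix in order of discovery, e.g. the first addition near WR 42
    becomes 'WR 42-1', the second 'WR 42-2'.

    catalogue  -- dict mapping designation -> metadata for known stars
    closest_wr -- integer part of the nearest existing WR number
    """
    suffix = 1
    while f"WR {closest_wr}-{suffix}" in catalogue:
        suffix += 1
    designation = f"WR {closest_wr}-{suffix}"
    catalogue[designation] = {}          # placeholder metadata
    return designation

catalogue = {"WR 42": {}, "WR 43": {}}
print(assign_wr_number(catalogue, 42))   # -> 'WR 42-1'
print(assign_wr_number(catalogue, 42))   # -> 'WR 42-2'
```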
Wolf–Rayet stars in external galaxies are numbered using different schemes. In the Large Magellanic Cloud, the most widespread and complete nomenclature for WR stars is from "The Fourth Catalogue of Population I Wolf–Rayet stars in the Large Magellanic Cloud" prefixed by BAT-99, for example BAT-99 105. Many of these stars are also referred to by their third catalogue number, for example Brey 77. As of 2018, 154 WR stars are catalogued in the LMC, mostly WN but including about twenty-three WCs as well as three of the extremely rare WO class. Many of these stars are often referred to by their RMC (Radcliffe observatory Magellanic Cloud) numbers, frequently abbreviated to just R, for example R136a1.
In the Small Magellanic Cloud, SMC WR numbers are used, usually referred to as AB numbers, for example AB7. There are only twelve known WR stars in the SMC, a very low number thought to be due to the low metallicity of that galaxy.
Wolf–Rayet stars are a normal stage in the evolution of very massive stars, in which strong, broad emission lines of helium and nitrogen ("WN" sequence), carbon ("WC" sequence), and oxygen ("WO" sequence) are visible. Due to their strong emission lines they can be identified in nearby galaxies. About 500 Wolf–Rayets are catalogued in our own Milky Way Galaxy. This number has changed dramatically during the last few years as the result of photometric and spectroscopic surveys in the near-infrared dedicated to discovering this kind of object in the Galactic plane. It is expected that there are fewer than 1,000 WR stars in the rest of the Local Group galaxies, with around 166 known in the Magellanic Clouds, 206 in M33, and 154 in M31. Outside the local group, whole galaxy surveys have found thousands more WR stars and candidates. For example, over a thousand WR stars have been detected in M101, from magnitude 21 to 25. WR stars are expected to be particularly common in starburst galaxies and especially Wolf–Rayet galaxies.
The characteristic emission lines are formed in the extended and dense high-velocity wind region enveloping the very hot stellar photosphere, which produces a flood of UV radiation that causes fluorescence in the line-forming wind region. This ejection process uncovers in succession, first the nitrogen-rich products of CNO cycle burning of hydrogen (WN stars), and later the carbon-rich layer due to He burning (WC and WO-type stars).
The WNh stars are completely different objects from the WN stars without hydrogen. Despite the similar spectra, they are much more massive, much larger, and some of the most luminous stars known. They have been detected as early as WN5h in the Magellanic Clouds. The nitrogen seen in the spectrum of WNh stars is still the product of CNO cycle fusion in the core, but it appears at the surface of the most massive stars due to rotational and convectional mixing while they are still in the core hydrogen burning phase, rather than after the outer envelope is lost during core helium fusion.
Some Wolf–Rayet stars of the carbon sequence ("WC"), especially those belonging to the latest types, are noticeable due to their production of dust. Usually this occurs in binary systems as a product of the collision of the stellar winds of the pair, as in the famous binary WR 104; however, the process occurs in single stars too.
A few (roughly 10%) of the central stars of planetary nebulae are, despite their much lower (typically ~0.6 solar) masses, also observationally of the WR-type; i.e. they show emission line spectra with broad lines from helium, carbon and oxygen. Denoted [WR], they are much older objects descended from evolved low-mass stars and are closely related to white dwarfs, rather than to the very young, very massive population I stars that comprise the bulk of the WR class. These are now generally excluded from the class denoted as Wolf–Rayet stars, or referred to as Wolf–Rayet-type stars.
The numbers and properties of Wolf–Rayet stars vary with the chemical composition of their progenitor stars. A primary driver of this difference is the rate of mass loss at different levels of metallicity. Higher metallicity leads to higher mass loss, which affects the evolution of massive stars and also the properties of Wolf–Rayet stars. Higher levels of mass loss cause stars to lose their outer layers before an iron core develops and collapses, so that the more massive red supergiants evolve back to hotter temperatures before exploding as a supernova, and the most massive stars never become red supergiants. In the Wolf–Rayet stage, higher mass loss leads to stronger depletion of the layers outside the convective core, lower hydrogen surface abundances and more rapid stripping of helium to produce a WC spectrum.
These trends can be observed in the various galaxies of the local group, where metallicity varies from near-solar levels in the Milky Way, somewhat lower in M31, lower still in the Large Magellanic Cloud, and much lower in the Small Magellanic Cloud. Strong metallicity variations are seen across individual galaxies, with M33 and the Milky Way showing higher metallicities closer to the centre, and M31 showing higher metallicity in the disk than in the halo. Thus the SMC is seen to have few WR stars compared to its stellar formation rate and no WC stars at all (one star has a WO spectral type), the Milky Way has roughly equal numbers of WN and WC stars and a large total number of WR stars, and the other main galaxies have somewhat fewer WR stars and more WN than WC types. LMC, and especially SMC, Wolf–Rayets have weaker emission and a tendency to higher atmospheric hydrogen fractions. SMC WR stars almost universally show some hydrogen and even absorption lines even at the earliest spectral types, due to weaker winds not entirely masking the photosphere.
The maximum mass of a main-sequence star that can evolve through a red supergiant phase and back to a WNL star is calculated to be around 20 M☉ in the Milky Way, 32 M☉ in the LMC, and over 50 M☉ in the SMC. The more evolved WNE and WC stages are only reached by stars with an initial mass over 25 M☉ at near-solar metallicity, over 60 M☉ in the LMC. Normal single star evolution is not expected to produce any WNE or WC stars at SMC metallicity.
Mass loss is influenced by a star's rotation rate, especially strongly at low metallicity. Fast rotation contributes to mixing of core fusion products through the rest of the star, enhancing surface abundances of heavy elements, and driving mass loss. Rotation causes stars to remain on the main sequence longer than non-rotating stars, evolve more quickly away from the red supergiant phase, or even evolve directly from the main sequence to hotter temperatures for very high masses, high metallicity or very rapid rotation.
Stellar mass loss produces a loss of angular momentum and this quickly brakes the rotation of massive stars. Very massive stars at near-solar metallicity should be braked almost to a standstill while still on the main sequence, while at SMC metallicity they can continue to rotate rapidly even at the highest observed masses. Rapid rotation of massive stars may account for the unexpected properties and numbers of SMC WR stars, for example their relatively high temperatures and luminosities.
Massive stars in binary systems can develop into Wolf–Rayet stars due to stripping by a companion rather than inherent mass loss due to a stellar wind. This process is relatively insensitive to the metallicity or rotation of the individual stars and is expected to produce a consistent set of WR stars across all the local group galaxies. As a result, the fraction of WR stars produced through the binary channel, and therefore the number of WR stars observed to be in binaries, should be higher in low metallicity environments. Calculations suggest that the binary fraction of WR stars observed in the SMC should be as high as 98%, although less than half are actually observed to have a massive companion. The binary fraction in the Milky Way is around 20%, in line with theoretical calculations.
A significant proportion of WR stars are surrounded by nebulosity associated directly with the star, not just the normal background nebulosity associated with any massive star forming region, and not a planetary nebula formed by a post-AGB star. The nebulosity presents a variety of forms and classification has been difficult. Many were originally catalogued as planetary nebulae and sometimes only a careful multi-wavelength study can distinguish a planetary nebula around a low mass post-AGB star from a similarly shaped nebula around a more massive core helium-burning star.
A Wolf–Rayet galaxy is a type of starburst galaxy where a sufficient number of WR stars exist that their characteristic emission line spectra become visible in the overall spectrum of the galaxy. Specifically a broad emission feature due to the 468.6 nm He ii and nearby spectral lines is the defining characteristic of a Wolf–Rayet galaxy. The relatively short lifetime of WR stars means that the starbursts in such galaxies must have lasted less than a million years and occurred within the last few million years, or else the WR emission would be swamped by large numbers of other luminous stars.
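A hedged sketch of how the defining He ii feature might be flagged in an integrated galaxy spectrum is shown below; the continuum windows, the detection threshold, and the function itself are illustrative assumptions rather than an established survey procedure.

```python
import numpy as np

def has_wr_bump(wavelength_nm, flux, line_nm=468.6, half_width_nm=3.0,
                threshold=1.1):
    """Return True if the spectrum shows excess broad emission around the
    468.6 nm He II line relative to neighbouring continuum.

    wavelength_nm, flux -- 1-D arrays describing the galaxy spectrum
    threshold           -- assumed minimum line/continuum flux ratio
    """
    wavelength_nm = np.asarray(wavelength_nm, dtype=float)
    flux = np.asarray(flux, dtype=float)
    in_line = np.abs(wavelength_nm - line_nm) <= half_width_nm
    # Continuum estimated from windows on either side of the line
    # (window placement is an assumption for this sketch).
    blue = (wavelength_nm > line_nm - 15) & (wavelength_nm < line_nm - 6)
    red = (wavelength_nm > line_nm + 6) & (wavelength_nm < line_nm + 15)
    continuum = np.median(np.concatenate([flux[blue], flux[red]]))
    return np.mean(flux[in_line]) > threshold * continuum

# Synthetic example: flat continuum plus a broad excess near 468.6 nm.
wl = np.linspace(440, 500, 601)
fl = np.ones_like(wl)
fl[np.abs(wl - 468.6) < 2] += 0.5
print(has_wr_bump(wl, fl))   # -> True
```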
Theories about how WR stars form, develop, and die were slow to take shape compared with explanations of less extreme stellar evolution. They are rare, distant, and often obscured, and even into the 21st century many aspects of their lives are unclear.
Although Wolf–Rayet stars have been clearly identified as an unusual and distinctive class of stars since the 19th century, the nature of these stars was uncertain until towards the end of the 20th century. Before the 1960s, even the classification of WR stars was highly uncertain, and their nature and evolution was essentially unknown. The very similar appearance of the central stars of planetary nebulae (CSPNe) and the much more luminous classical WR stars contributed to the uncertainty.
By about 1960, the distinction between CSPNe and massive luminous classical WR stars was clearer. Studies showed that they were small dense stars surrounded by extensive circumstellar material, but it was not yet clear whether the material was expelled from the star or contracting onto it. The unusual abundances of nitrogen, carbon, and oxygen, as well as the lack of hydrogen, were recognised, but the reasons remained obscure. It was recognised that WR stars were very young and very rare, but it was still open to debate whether they were evolving towards or away from the main sequence.
By the 1980s, WR stars were accepted as the descendants of massive OB stars, although their exact evolutionary state in relation to the main sequence and other evolved massive stars was still unknown. Theories that the preponderance of WR stars in massive binaries and their lack of hydrogen could be due to gravitational stripping had been largely ignored or abandoned. WR stars were being proposed as possible progenitors of supernovae, and particularly the newly discovered type Ib supernovae, lacking hydrogen but apparently associated with young massive stars.
By the start of the 21st century, WR stars were largely accepted as massive stars that had exhausted their core hydrogen, left the main sequence, and expelled most of their atmospheres, leaving behind a small hot core of helium and heavier fusion products.
Most WR stars, the classical population I type, are now understood as being a natural stage in the evolution of the most massive stars (not counting the less common planetary nebula central stars), either after a period as a red supergiant, after a period as a blue supergiant, or directly from the most massive main-sequence stars. Only the lower mass red supergiants are expected to explode as a supernova at that stage, while more massive red supergiants progress back to hotter temperatures as they expel their atmospheres. Some explode while at the yellow hypergiant or LBV stage, but many become Wolf–Rayet stars. They have lost or burnt almost all of their hydrogen and are now fusing helium in their cores, or heavier elements for a very brief period at the end of their lives.
Massive main-sequence stars create a very hot core which fuses hydrogen very rapidly via the CNO process and results in strong convection throughout the whole star. This causes mixing of helium to the surface, a process that is enhanced by rotation, possibly by differential rotation where the core is spun up to a faster rotation than the surface. Such stars also show nitrogen enhancement at the surface at a very young age, caused by changes in the proportions of carbon and nitrogen due to the CNO cycle. The enhancement of heavy elements in the atmosphere, as well as increases in luminosity, create strong stellar winds which are the source of the emission line spectra. These stars develop an Of spectrum, Of* if they are sufficiently hot, which develops into a WNh spectrum as the stellar winds increase further. This explains the high mass and luminosity of the WNh stars, which are still burning hydrogen at the core and have lost little of their initial mass. These will eventually expand into blue supergiants (possibly LBVs) as hydrogen at the core becomes depleted, or if mixing is efficient enough (e.g. through rapid rotation) they may progress directly to WN stars without hydrogen.
WR stars are likely to end their lives violently rather than fade away to a white dwarf. Thus every star with an initial mass more than about 9 times the Sun would inevitably result in a supernova explosion, many of them from the WR stage.
A simple progression of WR stars from cool to hot temperatures, resulting finally in WO-type stars, is not supported by observation. WO-type stars are extremely rare and all the known examples are more luminous and more massive than the relatively common WC stars. Alternative theories suggest either that the WO-type stars are only formed from the most massive main-sequence stars, and/or that they form an extremely short-lived end stage of just a few thousand years before exploding, with the WC phase corresponding to the core helium burning phase and the WO phase to nuclear burning stages beyond. It is still unclear whether the WO spectrum is purely the result of ionisation effects at very high temperature, reflects an actual chemical abundance difference, or if both effects occur to varying degrees.
|Initial Mass (M☉)||Evolutionary Sequence||Supernova Type|
|60+||O → Of → WNh ↔ LBV →[WNL]||IIn|
|45–60||O → WNh → LBV/WNE? → WO||Ib/c|
|20–45||O → RSG → WNE → WC||Ib|
|15–20||O → RSG ↔ (YHG) ↔ BSG (blue loops)||II-L (or IIb)|
|8–15||B → RSG||II-P|
- O: O-type main-sequence star
- Of: evolved O-type showing N and He emission
- BSG: blue supergiant
- RSG: red supergiant
- YHG: yellow hypergiant
- LBV: luminous blue variable
- WNh: WN plus hydrogen lines
- WNL: "late" WN-class Wolf–Rayet star (about WN6 to WN11)
- WNE: "early" WN-class Wolf–Rayet star (about WN2 to WN6)
- WN/WC: Transitional (transitioning from WN to WC) Wolf–Rayet star (may be WN#/WCE or WC#/WN)
- WC: WC-class Wolf–Rayet star
- WO: WO-class Wolf–Rayet star
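The table and key above can be treated as a simple lookup from initial mass to an expected evolutionary path and supernova type. The minimal sketch below does this for roughly solar metallicity; the mass boundaries are taken directly from the table, while the function and its return format are assumptions made for illustration.

```python
def evolutionary_track(initial_mass_msun):
    """Return (sequence, supernova type) expected for a single star of the
    given initial mass at roughly solar metallicity, following the table."""
    tracks = [
        (60, float("inf"), "O → Of → WNh ↔ LBV → [WNL]", "IIn"),
        (45, 60, "O → WNh → LBV/WNE? → WO", "Ib/c"),
        (20, 45, "O → RSG → WNE → WC", "Ib"),
        (15, 20, "O → RSG ↔ (YHG) ↔ BSG (blue loops)", "II-L (or IIb)"),
        (8, 15, "B → RSG", "II-P"),
    ]
    for low, high, sequence, sn_type in tracks:
        if low <= initial_mass_msun < high:
            return sequence, sn_type
    return None, None   # below ~8 solar masses: no core-collapse supernova

print(evolutionary_track(30))   # -> ('O → RSG → WNE → WC', 'Ib')
```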
Wolf–Rayet stars form from massive stars, although the evolved population I stars have lost half or more of their initial masses by the time they show a WR appearance. For example, γ2 Velorum A currently has a mass around 9 times the Sun, but began with a mass at least 40 times the Sun. High-mass stars are very rare, both because they form less often and because they have short lives. This means that Wolf–Rayet stars themselves are extremely rare because they only form from the most massive main-sequence stars and because they are a relatively short-lived phase in the lives of those stars. This also explains why type Ibc supernovae are less common than type II, since they result from higher-mass stars.
WNh stars, which are spectroscopically similar but actually much less evolved stars that have only just started to expel their atmospheres, are an exception and still retain much of their initial mass. The most massive stars currently known are all WNh stars rather than O-type main-sequence stars, an expected situation because such stars show helium and nitrogen at the surface only a few thousand years after they form, possibly before they become visible through the surrounding gas cloud. An alternative explanation is that these stars are so massive that they could not form as normal main-sequence stars, instead being the result of mergers of less extreme stars.
The difficulties of modelling the observed numbers and types of Wolf–Rayet stars through single star evolution have led to theories that they form through binary interactions which could accelerate loss of the outer layers of a star through mass exchange. WR 122 is a potential example that has a flat disk of gas encircling the star, almost 2 trillion miles wide, and may have a companion star that stripped its outer envelope.
It is widely suspected that many type Ib and type Ic supernova progenitors are WR stars, although no conclusive identification has been made of such a progenitor.
Type Ib supernovae lack hydrogen lines in their spectra. The more common type Ic supernovae lack both hydrogen and helium lines in their spectra. The expected progenitors of such supernovae are massive stars that respectively lack hydrogen in their outer layers, or lack both hydrogen and helium. WR stars are just such objects. All WR stars lack hydrogen and in some WR stars, most notably the WO group, helium is also strongly depleted. WR stars are expected to experience core collapse when they have generated an iron core, and the resulting supernova explosions would be of type Ib or Ic. In some cases it is possible that direct collapse of the core to a black hole would not produce a visible explosion.
WR stars are very luminous due to their high temperatures but not visually bright, especially the hottest examples that are expected to make up most supernova progenitors. Theory suggests that the progenitors of type Ibc supernovae observed to date would not be bright enough to be detected, although they place constraints on the properties of those progenitors. A possible progenitor star which has disappeared at the location of supernova iPTF13bvn may be a single WR star, although other analyses favour a less massive binary system with a stripped star or helium giant. The only other possible WR supernova progenitor is for SN 2017ein, and again it is uncertain whether the progenitor is a single massive WR star or binary system.
By far the most visible example of a Wolf–Rayet star is γ2 Velorum (WR 11), which is a bright naked eye star for those located south of 40 degrees northern latitude, although most of the light comes from an O7.5 giant companion. Due to the exotic nature of its spectrum (bright emission lines in lieu of dark absorption lines) it is dubbed the "Spectral Gem of the Southern Skies". The only other Wolf–Rayet star brighter than magnitude 6 is θ Muscae (WR 48), a triple star with two O class companions. Both are WC stars. The "ex" WR star WR 79a (HR 6272) is brighter than magnitude 6 but is now considered to be a peculiar O8 supergiant with strong emission. The next brightest at magnitude 6.4 is WR 22, a massive binary with a WN7h primary.
The most massive and most luminous star currently known, R136a1, is also a Wolf–Rayet star of the WNh type that is still fusing hydrogen in its core. This type of star, which includes many of the most luminous and most massive stars, is very young and usually found only in the centre of the densest star clusters. Occasionally a runaway WNh star such as VFTS 682 is found outside such clusters, probably having been ejected from a multiple system or by interaction with other stars.
An example of a triple star system containing a Wolf–Rayet binary is Apep. The binary releases huge amounts of carbon dust driven by the extreme stellar winds of its components. As the two stars orbit one another, the dust gets wrapped into a glowing sooty tail.
The very hottest known non-degenerate stars are all Wolf–Rayet stars. The hottest is WR 102, which appears to reach about 210,000 K, followed by WR 142 at around 200,000 K. LMC195-1, located in the Large Magellanic Cloud, should have a similar temperature, but its temperature has not yet been determined.
Only a minority of planetary nebulae have WR type central stars, but a considerable number of well-known planetary nebulae do have them.
|Planetary nebula||Central star type|
|NGC 5189 (Spiral Planetary Nebula)||[WO1]|
|NGC 6369 (Little Ghost Nebula)||[WO3]|
|MyCn18 (Hourglass Nebula)||[WC]-PG1159|
|Other names||Deaf or Hard of hearing; anakusis or anacusis is total deafness|
|The international symbol of deafness and hearing loss|
|Symptoms||Decreased ability to hear|
|Complications||Social isolation, dementia|
|Types||Conductive, sensorineural, and mixed hearing loss, central auditory dysfunction|
|Causes||Genetics, aging, exposure to noise, some infections, birth complications, trauma to the ear, certain medications or toxins|
|Diagnostic method||Hearing tests|
|Prevention||Immunization, proper care around pregnancy, avoiding loud noise, avoiding certain medications|
|Treatment||Hearing aids, sign language, cochlear implants, subtitles|
|Frequency||1.33 billion / 18.5% (2015)|
Hearing loss is a partial or total inability to hear. Hearing loss may be present at birth or acquired at any time afterwards. Hearing loss may occur in one or both ears. In children, hearing problems can affect the ability to acquire spoken language, and in adults it can create difficulties with social interaction and at work. Hearing loss can be temporary or permanent. Hearing loss related to age usually affects both ears and is due to cochlear hair cell loss. In some people, particularly older people, hearing loss can result in loneliness. Deaf people usually have little to no hearing.
Hearing loss may be caused by a number of factors, including: genetics, ageing, exposure to noise, some infections, birth complications, trauma to the ear, and certain medications or toxins. A common condition that results in hearing loss is chronic ear infections. Certain infections during pregnancy, such as cytomegalovirus, syphilis and rubella, may also cause hearing loss in the child. Hearing loss is diagnosed when hearing testing finds that a person is unable to hear 25 decibels in at least one ear. Testing for poor hearing is recommended for all newborns. Hearing loss can be categorized as mild (25 to 40 dB), moderate (41 to 55 dB), moderate-severe (56 to 70 dB), severe (71 to 90 dB), or profound (greater than 90 dB). There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss.
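The severity categories above map directly onto threshold ranges in decibels hearing level (dB HL). A minimal sketch of that mapping is given below; the function name and the idea of classifying from a single threshold value are assumptions made for illustration, not a clinical protocol.

```python
def classify_hearing_loss(threshold_db):
    """Classify hearing loss severity from a hearing threshold in dB HL,
    using the ranges in the text: mild 25-40, moderate 41-55,
    moderate-severe 56-70, severe 71-90, profound >90."""
    if threshold_db < 25:
        return "normal hearing"
    if threshold_db <= 40:
        return "mild"
    if threshold_db <= 55:
        return "moderate"
    if threshold_db <= 70:
        return "moderate-severe"
    if threshold_db <= 90:
        return "severe"
    return "profound"

print(classify_hearing_loss(62))   # -> 'moderate-severe'
```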
About half of hearing loss globally is preventable through public health measures. Such practices include immunization, proper care around pregnancy, avoiding loud noise, and avoiding certain medications. The World Health Organization recommends that young people limit exposure to loud sounds and the use of personal audio players to an hour a day in an effort to limit exposure to noise. Early identification and support are particularly important in children. For many, hearing aids, sign language, cochlear implants and subtitles are useful. Lip reading is another useful skill some develop. Access to hearing aids, however, is limited in many areas of the world.
As of 2013 hearing loss affects about 1.1 billion people to some degree. It causes disability in about 466 million people (5% of the global population), and moderate to severe disability in 124 million people. Of those with moderate to severe disability 108 million live in low and middle income countries. Of those with hearing loss, it began during childhood for 65 million. Those who use sign language and are members of Deaf culture see themselves as having a difference rather than a disability. Most members of Deaf culture oppose attempts to cure deafness and some within this community view cochlear implants with concern as they have the potential to eliminate their culture. The terms hearing impairment or hearing loss are often viewed negatively as emphasizing what people cannot do, although the terms are still regularly used when referring to deafness in medical contexts.
- Hearing loss is defined as diminished acuity to sounds which would otherwise be heard normally. The terms hearing impaired or hard of hearing are usually reserved for people who have relative inability to hear sound in the speech frequencies. The severity of hearing loss is categorized according to the increase in intensity of sound above the usual level required for the listener to detect it.
- Deafness is defined as a degree of loss such that a person is unable to understand speech, even in the presence of amplification. In profound deafness, even the highest intensity sounds produced by an audiometer (an instrument used to measure hearing by producing pure tone sounds through a range of frequencies) may not be detected. In total deafness, no sounds at all, regardless of amplification or method of production, can be heard.
- Speech perception is another aspect of hearing which involves the perceived clarity of a word rather than the intensity of sound made by the word. In humans, this is usually measured with speech discrimination tests, which measure not only the ability to detect sound, but also the ability to understand speech. There are very rare types of hearing loss that affect speech discrimination alone. One example is auditory neuropathy, a variety of hearing loss in which the outer hair cells of the cochlea are intact and functioning, but sound information is not faithfully transmitted by the auditory nerve to the brain.
Use of the terms "hearing impaired", "deaf-mute", or "deaf and dumb" to describe deaf and hard of hearing people is discouraged by many in the deaf community as well as advocacy organizations, as they are offensive to many deaf and hard of hearing people.
Human hearing extends in frequency from 20 to 20,000 Hz, and in intensity from 0 dB to 120 dB HL or more. 0 dB does not represent absence of sound, but rather the softest sound an average unimpaired human ear can hear; some people can hear down to −5 or even −10 dB. Sound is generally uncomfortably loud above 90 dB and 115 dB represents the threshold of pain. The ear does not hear all frequencies equally well: hearing sensitivity peaks around 3,000 Hz. There are many qualities of human hearing besides frequency range and intensity that cannot easily be measured quantitatively. However, for many practical purposes, normal hearing is defined by a frequency versus intensity graph, or audiogram, charting sensitivity thresholds of hearing at defined frequencies. Because of the cumulative impact of age and exposure to noise and other acoustic insults, 'typical' hearing may not be normal.
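An audiogram can be represented simply as a mapping from test frequency to threshold. The sketch below stores one and computes a pure-tone average over common speech frequencies; averaging over 500–4000 Hz is a widely used convention, but the exact frequencies, the dictionary layout, the function, and the threshold values shown are assumptions made for this example.

```python
def pure_tone_average(audiogram, freqs=(500, 1000, 2000, 4000)):
    """Average the hearing thresholds (dB HL) at the given frequencies."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

# Thresholds in dB HL at each test frequency (illustrative values only).
left_ear = {250: 15, 500: 20, 1000: 25, 2000: 35, 4000: 50, 8000: 60}

pta = pure_tone_average(left_ear)
print(pta)          # -> 32.5
print(pta >= 25)    # meets the 25 dB criterion mentioned above -> True
```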
Signs and symptoms
- difficulty using the telephone
- loss of sound localization
- difficulty understanding speech, especially of children and women whose voices are of a higher frequency.
- difficulty understanding speech in the presence of background noise (cocktail party effect)
- sounds or speech sounding dull, muffled or attenuated
- need for increased volume on television, radio, music and other audio sources
Hearing loss is sensory, but may have accompanying symptoms:
- pain or pressure in the ears
- a blocked feeling
There may also be accompanying secondary symptoms:
- hyperacusis, heightened sensitivity with accompanying auditory pain to certain intensities and frequencies of sound, sometimes defined as "auditory recruitment"
- tinnitus, ringing, buzzing, hissing or other sounds in the ear when no external sound is present
- vertigo and disequilibrium
- tympanophonia, also known as autophonia, abnormal hearing of one's own voice and respiratory sounds, usually as a result of a patulous (a constantly open) eustachian tube or dehiscent superior semicircular canals
- disturbances of facial movement (indicating a possible tumour or stroke, or occurring in persons with Bell's palsy)
Hearing loss is associated with Alzheimer's disease and dementia. The risk increases with the degree of hearing loss. Several hypotheses have been proposed, including the redistribution of cognitive resources to hearing and the negative effect of social isolation resulting from hearing loss. According to preliminary data, hearing aid use can slow the decline in cognitive function.
Hearing loss is an increasing concern, especially in aging populations: the prevalence of hearing loss roughly doubles for each decade of age after age 40. While secular trends might decrease individual-level risk of developing hearing loss, the prevalence of hearing loss is expected to rise due to the aging population in the US. Another concern with the aging process is cognitive decline, which may progress to mild cognitive impairment and eventually dementia. The association between hearing loss and cognitive decline has been studied in various research settings. Despite the variability in study design and protocols, the majority of these studies have found a consistent association between age-related hearing loss and cognitive decline, cognitive impairment, and dementia. The association between age-related hearing loss and Alzheimer's disease was found to be nonsignificant, and this finding supports the hypothesis that hearing loss is associated with dementia independent of Alzheimer pathology. There are several hypotheses about the underlying causal mechanism linking age-related hearing loss and cognitive decline. One hypothesis is that the association can be explained by a common etiology or shared neurobiological pathology with decline in other physiological systems. Another possible cognitive mechanism emphasizes the individual's cognitive load: as people develop hearing loss with aging, the cognitive load demanded by auditory perception increases, which may lead to changes in brain structure and eventually to dementia. A further hypothesis suggests that the association between hearing loss and cognitive decline is mediated through various psychosocial factors, such as decreased social contact and increased social isolation. Findings on the association between hearing loss and dementia have significant public health implications, since about 9% of dementia cases can be attributed to hearing loss.
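Attributable fractions such as the 9% figure are typically derived from a population attributable fraction (PAF) calculation. The sketch below shows the standard formula with purely hypothetical inputs; the prevalence and relative-risk values are illustrative assumptions, not figures taken from any study cited here.

```python
def population_attributable_fraction(prevalence, relative_risk):
    """PAF = p(RR - 1) / (1 + p(RR - 1)): the share of cases in the whole
    population attributable to the exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Hypothetical example: 10% exposure prevalence and a relative risk of 1.9
# give an attributable fraction of roughly 8%.
print(round(population_attributable_fraction(0.10, 1.9), 3))  # -> 0.083
```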
Falls have important health implications, especially for an aging population in which they can lead to significant morbidity and mortality. Elderly people are particularly vulnerable to the consequences of injuries caused by falls, since older individuals typically have greater bone fragility and poorer protective reflexes. Fall-related injury can also place burdens on the financial and health care systems. In the literature, age-related hearing loss has been found to be significantly associated with incident falls. There is also a potential dose-response relationship between hearing loss and falls: greater severity of hearing loss is associated with increased difficulties in postural control and increased prevalence of falls. The underlying causal link between hearing loss and falls is yet to be elucidated. Several hypotheses indicate that there may be a common process behind decline in the auditory system and the increase in incident falls, driven by physiological, cognitive, and behavioral factors. This evidence suggests that treating hearing loss has the potential to increase health-related quality of life in older adults.
Depression is one of the leading causes of morbidity and mortality worldwide. In older adults the suicide rate is higher than in younger adults, and more suicides are attributable to depression. Various studies have investigated potential risk factors for depression in later life. Some chronic diseases, such as coronary heart disease, pulmonary disease, vision loss and hearing loss, are significantly associated with the risk of developing depression. Hearing loss can contribute to a decrease in health-related quality of life, an increase in social isolation and a decline in social engagement, all of which are risk factors for developing depressive symptoms.
Spoken language ability
Post-lingual deafness is hearing loss that is sustained after the acquisition of language; it can occur as a result of disease, trauma, or a side effect of a medicine. Typically, such hearing loss is gradual and is often detected by family and friends of affected individuals long before the patients themselves acknowledge the disability. Post-lingual deafness is far more common than pre-lingual deafness. Those who lose their hearing later in life, such as in late adolescence or adulthood, face their own challenges in adapting so that they can continue to live independently.
Prelingual deafness is profound hearing loss sustained before the acquisition of language, which can occur because of a congenital condition or through hearing loss before birth or in early infancy. Prelingual deafness impairs a child's ability to acquire a spoken language, but deaf children can acquire spoken language with support from cochlear implants (sometimes combined with hearing aids). The non-signing (hearing) parents of deaf babies (90–95% of cases) usually opt for an oral approach without the support of sign language, since these families have no previous experience with sign language and cannot competently provide it to their children without learning it themselves. In some cases (late implantation or insufficient benefit from cochlear implants), this brings a risk of language deprivation, because the child will have no sign language to fall back on if spoken language is not acquired successfully. The 5–10% of deaf babies born into signing families have the potential for age-appropriate language development because of early exposure to a sign language by sign-competent parents; they can meet language milestones in sign language in lieu of spoken language.
Hearing loss has multiple causes, including ageing, genetics, perinatal problems and acquired causes such as noise and disease. In some cases the cause is unknown.
There is a progressive loss of the ability to hear high frequencies with aging, known as presbycusis. It can start as early as age 25 in men and 30 in women. Although genetically variable, it is a normal concomitant of ageing and is distinct from hearing loss caused by noise exposure, toxins or disease agents. Common conditions that can increase the risk of hearing loss in elderly people include high blood pressure, diabetes, and the use of certain medications harmful to the ear. While everyone loses hearing with age, the amount and type of hearing loss are variable.
Noise-induced hearing loss (NIHL), also known as acoustic trauma, typically manifests as elevated hearing thresholds (i.e. less sensitivity or muting). Noise exposure is the cause of approximately half of all cases of hearing loss, causing some degree of problems in 5% of the population globally. The majority of hearing loss is not due to age, but due to noise exposure. Various governmental, industry and standards organizations set noise standards. Many people are unaware of the presence of environmental sound at damaging levels, or of the level at which sound becomes harmful. Common sources of damaging noise levels include car stereos, children's toys, motor vehicles, crowds, lawn and maintenance equipment, power tools, gun use, musical instruments, and even hair dryers. Noise damage is cumulative; all sources of damage must be considered to assess risk. In the US, 12.5% of children aged 6–19 years have permanent hearing damage from excessive noise exposure. The World Health Organization estimates that half of those between 12 and 35 are at risk from using personal audio devices that are too loud. Hearing loss in adolescents may be caused by loud noise from toys, music by headphones, and concerts or events.
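Because noise damage is cumulative, occupational hygiene practice often expresses a day's mixed exposures as a single noise dose. The sketch below follows commonly cited NIOSH-style parameters (85 dBA criterion level, 3-dB exchange rate, 8-hour reference); both these parameters and the example exposures are assumptions for illustration, not a statement of any regulation's exact requirements.

```python
# Minimal sketch of a cumulative daily noise dose, illustrating why all
# sources of exposure must be counted together. Parameters are assumptions
# modelled loosely on the NIOSH recommendation.

def allowed_minutes(level_dba: float, criterion=85.0, exchange=3.0, ref_min=480.0) -> float:
    """Minutes of exposure at `level_dba` that would equal a 100% daily dose."""
    return ref_min / (2 ** ((level_dba - criterion) / exchange))

def daily_dose(exposures: list[tuple[float, float]]) -> float:
    """Sum the dose from (level dBA, minutes) pairs; 100% is the daily limit."""
    return 100.0 * sum(minutes / allowed_minutes(level) for level, minutes in exposures)

# Hypothetical day: lawn mower, power tool, loud commute with headphones.
day = [(90.0, 60), (95.0, 15), (80.0, 120)]
print(f"{daily_dose(day):.0f}% of the daily dose")  # roughly 79%
```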
Hearing loss can be inherited. Around 75–80% of all these cases are inherited by recessive genes, 20–25% are inherited by dominant genes, 1–2% are inherited by X-linked patterns, and fewer than 1% are inherited by mitochondrial inheritance. Syndromic deafness occurs when there are other signs or medical problems aside from deafness in an individual, such as Usher syndrome, Stickler syndrome, Waardenburg syndrome, Alport's syndrome, and neurofibromatosis type 2. Nonsyndromic deafness occurs when there are no other signs or medical problems associated with the deafness in an individual.
Fetal alcohol spectrum disorders are reported to cause hearing loss in up to 64% of infants born to alcoholic mothers, from the ototoxic effect on the developing fetus plus malnutrition during pregnancy from the excess alcohol intake. Premature birth can be associated with sensorineural hearing loss because of an increased risk of hypoxia, hyperbilirubinaemia, ototoxic medication and infection as well as noise exposure in the neonatal units. Also, hearing loss in premature babies is often discovered far later than a similar hearing loss would be in a full-term baby because normally babies are given a hearing test within 48 hours of birth, but doctors must wait until the premature baby is medically stable before testing hearing, which can be months after birth. The risk of hearing loss is greatest for those weighing less than 1500 g at birth.
Disorders responsible for hearing loss include auditory neuropathy, Down syndrome, Charcot–Marie–Tooth disease variant 1E, autoimmune disease, multiple sclerosis, meningitis, cholesteatoma, otosclerosis, perilymph fistula, Ménière's disease, recurring ear infections, strokes, superior semicircular canal dehiscence, Pierre Robin syndrome, Treacher Collins syndrome, Usher syndrome, Pendred syndrome, Turner syndrome, syphilis, vestibular schwannoma, and viral infections such as measles, mumps, congenital rubella (also called German measles) syndrome, several varieties of herpes viruses, HIV/AIDS, and West Nile virus.
Some medications may reversibly affect hearing; these medications are considered ototoxic. They include loop diuretics such as furosemide and bumetanide; non-steroidal anti-inflammatory drugs (NSAIDs), both over-the-counter (aspirin, ibuprofen, naproxen) and prescription (celecoxib, diclofenac, etc.); paracetamol; quinine; and macrolide antibiotics. Other medications may cause permanent hearing loss. The most important group is the aminoglycosides (chiefly gentamicin) and platinum-based chemotherapeutics such as cisplatin and carboplatin.
In addition to medications, hearing loss can also result from specific chemicals in the environment: metals such as lead; solvents such as toluene (found in crude oil, gasoline and automobile exhaust, for example); and asphyxiants. Combined with noise, these ototoxic chemicals have an additive effect on a person's hearing loss. Hearing loss due to chemicals starts in the high frequency range and is irreversible. It damages the cochlea with lesions and degrades central portions of the auditory system. For some ototoxic chemical exposures, particularly styrene, the risk of hearing loss can be higher than from noise exposure alone. The effect is greatest when the combined exposure includes impulse noise. A 2018 informational bulletin by the US Occupational Safety and Health Administration (OSHA) and the National Institute for Occupational Safety and Health (NIOSH) introduces the issue, provides examples of ototoxic chemicals, lists the industries and occupations at risk, and provides prevention information.
There can be damage either to the ear, whether the external or middle ear, to the cochlea, or to the brain centers that process the aural information conveyed by the ears. Damage to the middle ear may include fracture and discontinuity of the ossicular chain. Damage to the inner ear (cochlea) may be caused by temporal bone fracture. People who sustain head injury are especially vulnerable to hearing loss or tinnitus, either temporary or permanent.
Sound waves reach the outer ear and are conducted down the ear canal to the eardrum, causing it to vibrate. The vibrations are transferred by the 3 tiny ear bones of the middle ear to the fluid in the inner ear. The fluid moves hair cells (stereocilia), and their movement generates nerve impulses which are then taken to the brain by the cochlear nerve. The auditory nerve takes the impulses to the brainstem, which sends the impulses to the midbrain. Finally, the signal goes to the auditory cortex of the temporal lobe to be interpreted as sound.
Older people may lose their hearing from long exposure to noise, changes in the inner ear, changes in the middle ear, or from changes along the nerves from the ear to the brain.
Identification of a hearing loss is usually conducted by a general practitioner medical doctor, otolaryngologist, certified and licensed audiologist, school or industrial audiometrist, or other audiometric technician. Diagnosis of the cause of a hearing loss is carried out by a specialist physician (audiovestibular physician) or otorhinolaryngologist.
Hearing loss is generally measured by playing generated or recorded sounds and determining whether the person can hear them. Hearing sensitivity varies with the frequency of sounds, so to take this into account it can be measured across a range of frequencies and plotted on an audiogram. Another method of quantifying hearing loss is a hearing test delivered through a mobile application or hearing aid application. Diagnosis using a mobile application is similar to the audiometry procedure, and the resulting audiogram can be used to adjust a hearing aid application. A further method is a speech-in-noise test, which gives an indication of how well one can understand speech in a noisy environment. Otoacoustic emissions testing is an objective hearing test that may be administered to toddlers and children too young to cooperate in a conventional hearing test. Auditory brainstem response testing is an electrophysiological test used to detect hearing deficits caused by pathology within the ear, the cochlear nerve and the brainstem.
A case history (usually a written form, with questionnaire) can provide valuable information about the context of the hearing loss, and indicate what kind of diagnostic procedures to employ. Examinations include otoscopy, tympanometry, and differential testing with the Weber, Rinne, Bing and Schwabach tests. In case of infection or inflammation, blood or other body fluids may be submitted for laboratory analysis. MRI and CT scans can be useful to identify the pathology of many causes of hearing loss.
Hearing loss is categorized by severity, type, and configuration. Furthermore, a hearing loss may exist in only one ear (unilateral) or in both ears (bilateral), and it can be temporary or permanent, sudden or progressive. The severity of a hearing loss is ranked according to the ranges of nominal thresholds at which a sound must be presented for an individual to detect it, measured in decibels of hearing loss (dB HL). There are three main types of hearing loss: conductive hearing loss, sensorineural hearing loss, and mixed hearing loss. An additional problem that is increasingly recognised is auditory processing disorder, which is not a hearing loss as such but a difficulty perceiving sound. The shape of an audiogram shows the relative configuration of the hearing loss, such as a Carhart notch for otosclerosis, a 'noise' notch for noise-induced damage, high-frequency rolloff for presbycusis, or a flat audiogram for conductive hearing loss. In conjunction with speech audiometry, it may indicate central auditory processing disorder or the presence of a schwannoma or other tumor.
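As a concrete illustration of how audiogram thresholds translate into a severity grade, the sketch below averages thresholds at a few standard frequencies into a pure-tone average (PTA) and maps it onto dB HL bands. Grading schemes differ in their exact cutoffs, so the frequencies, thresholds and band boundaries used here are illustrative assumptions only.

```python
# Minimal sketch: pure-tone average (PTA) and an illustrative severity banding.
# The audiogram values and the dB HL cutoffs are assumptions; real grading
# schemes (e.g. WHO, ASHA) use differing bands.

def pure_tone_average(audiogram: dict[int, float],
                      frequencies=(500, 1000, 2000, 4000)) -> float:
    """Average the hearing thresholds (dB HL) at the chosen frequencies."""
    return sum(audiogram[f] for f in frequencies) / len(frequencies)

def severity(pta_db_hl: float) -> str:
    """Map a PTA onto an illustrative set of severity bands."""
    bands = [(25, "normal"), (40, "mild"), (55, "moderate"),
             (70, "moderately severe"), (90, "severe")]
    for upper, label in bands:
        if pta_db_hl <= upper:
            return label
    return "profound"

# Hypothetical audiogram for one ear: frequency (Hz) -> threshold (dB HL)
audiogram = {500: 30, 1000: 35, 2000: 45, 4000: 60}
pta = pure_tone_average(audiogram)
print(pta, severity(pta))  # 42.5 "moderate"
```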
People with unilateral hearing loss or single-sided deafness (SSD) have difficulty hearing conversation on their impaired side, localizing sound, and understanding speech in the presence of background noise. One reason for these problems is the head shadow effect.
It is estimated that half of cases of hearing loss are preventable, and about 60% of hearing loss in children under the age of 15 can be avoided. A number of preventive strategies are effective, including immunization against rubella to prevent congenital rubella syndrome, immunization against H. influenzae and S. pneumoniae to reduce cases of meningitis, and avoiding or protecting against excessive noise exposure. The World Health Organization also recommends immunization against measles, mumps, and meningitis, efforts to prevent premature birth, and avoidance of certain medications as prevention. World Hearing Day is a yearly event to promote actions to prevent hearing damage.
Noise exposure is the most significant risk factor for noise-induced hearing loss that can be prevented. Different programs exist for specific populations such as school-age children, adolescents and workers. Education regarding noise exposure increases the use of hearing protectors. The use of antioxidants is being studied for the prevention of noise-induced hearing loss, particularly for scenarios in which noise exposure cannot be reduced, such as during military operations.
Workplace noise regulation
Noise is widely recognized as an occupational hazard. In the United States, the National Institute for Occupational Safety and Health (NIOSH) and the Occupational Safety and Health Administration (OSHA) work together to provide standards and enforcement for workplace noise levels. The hierarchy of hazard controls lays out the different levels of control that can reduce or eliminate exposure to noise and prevent hearing loss, including engineering controls and personal protective equipment (PPE). Other programs and initiatives have been created to prevent hearing loss in the workplace. For example, the Safe-in-Sound Award was created to recognize organizations that can demonstrate results of successful noise control and other interventions, and the Buy Quiet program encourages employers to purchase quieter machinery and tools. By purchasing less noisy power tools, such as those listed in the NIOSH Power Tools Database, and limiting exposure to ototoxic chemicals, great strides can be made in preventing hearing loss.
Companies can also provide personal hearing protector devices tailored to both the worker and type of employment. Some hearing protectors universally block out all noise, and some allow for certain noises to be heard. Workers are more likely to wear hearing protector devices when they are properly fitted.
Interventions to prevent noise-induced hearing loss often have many components. A 2017 Cochrane review found that stricter legislation might reduce noise levels, whereas providing workers with information on their sound exposure levels was not shown to decrease exposure. Ear protection, if used correctly, can reduce noise to safer levels, but providing it is often not sufficient to prevent hearing loss. Engineering noise out and other solutions, such as proper maintenance of equipment, can lead to noise reduction, but further field studies of the resulting noise exposures are needed. Other possible solutions include improved enforcement of existing legislation and better implementation of well-designed prevention programmes, which have not yet been proven conclusively to be effective. The review concluded that further research could modify what is now known about the effectiveness of the evaluated interventions.
The Institute for Occupational Safety and Health of the German Social Accident Insurance has created a hearing impairment calculator based on the ISO 1999 model for studying threshold shift in relatively homogeneous groups of people, such as workers with the same type of job. The ISO 1999 model estimates how much hearing impairment in a group can be ascribed to age and noise exposure. The result is calculated via an algebraic equation that uses the A-weighted sound exposure level, how many years the people were exposed to this noise, how old the people are, and their sex. The model’s estimations are only useful for people without hearing loss due to non-job related exposure and can be used for prevention activities.
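A sketch of the kind of calculation such a calculator performs is shown below: an age- and sex-related threshold shift is combined with a noise-induced shift using the compression rule widely cited from ISO 1999 (H' = H + N − H·N/120). The input values would normally come from the standard's tables and formulas, so the numbers here are placeholders, and the rule itself should be checked against the standard before any real use.

```python
def combined_threshold_shift(age_related_db: float, noise_induced_db: float) -> float:
    """Combine age-related (H) and noise-induced (N) threshold shifts, in dB.

    Uses the compression term H*N/120 commonly cited from the ISO 1999 model,
    so the two contributions are not simply added at high levels.
    """
    h, n = age_related_db, noise_induced_db
    return h + n - (h * n) / 120.0

# Placeholder inputs: in the real model, H depends on age and sex, and N on
# the A-weighted exposure level and years of exposure, read from ISO 1999
# tables rather than chosen by hand as here.
h = 20.0   # assumed age-related shift at 4 kHz, dB
n = 15.0   # assumed noise-induced shift at 4 kHz, dB
print(combined_threshold_shift(h, n))  # 32.5 dB rather than a simple 35 dB sum
```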
- When they enter school
- At ages 6, 8, and 10
- At least once during middle school
- At least once during high school
While the American College of Physicians indicated that there is not enough evidence to determine the utility of screening in adults over 50 years old who do not have any symptoms, the American Speech-Language-Hearing Association recommends that adults be screened at least every decade through age 50 and at 3-year intervals thereafter, to minimize the detrimental effects of the untreated condition on quality of life. For the same reason, the US Office of Disease Prevention and Health Promotion included among its Healthy People 2020 objectives an increase in the proportion of persons who have had a hearing examination.
Management depends on the specific cause, if known, as well as the extent, type and configuration of the hearing loss. Sudden hearing loss due to an underlying nerve problem may be treated with corticosteroids.
Most hearing loss, that resulting from age and noise, is progressive and irreversible, and there are currently no approved or recommended treatments. A few specific kinds of hearing loss are amenable to surgical treatment. In other cases, treatment is addressed to underlying pathologies, but any hearing loss incurred may be permanent. Some management options include hearing aids, cochlear implants, assistive technology, and closed captioning. This choice depends on the level of hearing loss, type of hearing loss, and personal preference. Hearing aid applications are one of the options for hearing loss management. For people with bilateral hearing loss, it is not clear if bilateral hearing aids (hearing aids in both ears) are better than a unilateral hearing aid (hearing aid in one ear).
Globally, hearing loss affects about 10% of the population to some degree. It caused moderate to severe disability in 124.2 million people as of 2004 (107.9 million of whom are in low and middle income countries). Of these 65 million acquired the condition during childhood. At birth ~3 per 1000 in developed countries and more than 6 per 1000 in developing countries have hearing problems.
Hearing loss increases with age. In those between 20 and 35, rates of hearing loss are 3%, while in those 44 to 55 the rate is 11%, and in those 65 to 85 it is 43%.
A 2017 report by the World Health Organization estimated the costs of unaddressed hearing loss and the cost-effectiveness of interventions, for the health-care sector, for the education sector and as broad societal costs. Globally, the annual cost of unaddressed hearing loss was estimated to be in the range of $750–790 billion international dollars.
The International Organization for Standardization (ISO) developed the ISO 1999 standard for the estimation of hearing thresholds and noise-induced hearing impairment. It used data from two noise and hearing study databases, one presented by Burns and Robinson (Hearing and Noise in Industry, Her Majesty's Stationery Office, London, 1970) and the other by Passchier-Vermeer (1968). Because factors such as race can affect the expected distribution of pure-tone hearing thresholds, several other national or regional datasets exist, from Sweden, Norway, South Korea, the United States and Spain.
In the United States, hearing is one of the health outcomes measured by the National Health and Nutrition Examination Survey (NHANES), a survey research program conducted by the National Center for Health Statistics that examines the health and nutritional status of adults and children. Data from 2011-2012 found that rates of hearing loss had declined among adults aged 20 to 69 years compared with an earlier period (1999-2004). It also found that adult hearing loss is associated with increasing age, sex, ethnicity, educational level, and noise exposure. Nearly one in four adults had audiometric results suggesting noise-induced hearing loss, and almost one in four adults who reported excellent or good hearing had a similar pattern (5.5% on both sides and 18% on one side). Among people who reported exposure to loud noise at work, almost one third had such changes.
Social and cultural aspects
People with extreme hearing loss may communicate through sign languages. Sign languages convey meaning through manual communication and body language instead of acoustically conveyed sound patterns. This involves the simultaneous combination of hand shapes, orientation and movement of the hands, arms or body, and facial expressions to express a speaker's thoughts. "Sign languages are based on the idea that vision is the most useful tool a deaf person has to communicate and receive information".
Deaf culture refers to a tight-knit cultural group of people whose primary language is signed, and who practice social and cultural norms which are distinct from those of the surrounding hearing community. This community does not automatically include all those who are clinically or legally deaf, nor does it exclude every hearing person. According to Baker and Padden, it includes any person or persons who "identifies him/herself as a member of the Deaf community, and other members accept that person as a part of the community," an example being children of deaf adults with normal hearing ability. It includes the set of social beliefs, behaviors, art, literary traditions, history, values, and shared institutions of communities that are influenced by deafness and which use sign languages as the main means of communication. Members of the Deaf community tend to view deafness as a difference in human experience rather than a disability or disease. When used as a cultural label especially within the culture, the word deaf is often written with a capital D and referred to as "big D Deaf" in speech and sign. When used as a label for the audiological condition, it is written with a lower case d.
Stem cell transplant and gene therapy
A 2005 study achieved successful regrowth of cochlear cells in guinea pigs. However, regrowth of cochlear hair cells does not imply the restoration of hearing sensitivity, as the sensory cells may or may not make connections with the neurons that carry signals from hair cells to the brain. A 2008 study showed that gene therapy targeting Atoh1 can cause hair cell growth and attract neuronal processes in embryonic mice. Some hope that a similar treatment will one day ameliorate hearing loss in humans.
Research reported in 2012 achieved growth of cochlear nerve cells in gerbils using stem cells, resulting in hearing improvements. Regrowth of hair cells in deaf adult mice using a drug intervention, also resulting in hearing improvement, was reported in 2013. The Hearing Health Foundation in the US has embarked on a project called the Hearing Restoration Project, and Action on Hearing Loss in the UK is also aiming to restore hearing.
Researchers reported in 2015 that genetically deaf mice which were treated with TMC1 gene therapy recovered some of their hearing. In 2017, additional studies were performed to treat Usher syndrome and here, a recombinant adeno-associated virus seemed to outperform the older vectors.
Besides research seeking to improve hearing, such as the studies listed above, research on deaf participants has also been carried out to understand more about audition. Pijil and Shwarz (2005) studied people who had lost their hearing later in life and therefore used cochlear implants to hear. They found further evidence for rate coding of pitch, a system in which information about frequency is encoded by the rate at which neurons in the auditory system fire, especially for lower frequencies, which are coded by neurons from the basilar membrane firing in a synchronous manner. Their results showed that the subjects could identify different pitches that were proportional to the frequency stimulated by a single electrode. The lower frequencies were detected when the basilar membrane was stimulated, providing further evidence for rate coding.
- Dorland's Illustrated Medical Dictionary. Elsevier.
- "Deafness and hearing loss Fact sheet N°300". March 2015. Archived from the original on 16 May 2015. Retrieved 23 May 2015.CS1 maint: unfit URL (link)
- Shearer AE, Hildebrand MS, Smith RJ (2014). "Deafness and Hereditary Hearing Loss Overview". In Adam MP, Ardinger HH, Pagon RA, Wallace SE, Bean LJ, Stephens K, Amemiya A (eds.). GeneReviews [Internet]. Seattle (WA): University of Washington, Seattle. PMID 20301607.
- Global Burden of Disease Study 2013 Collaborators (October 2016). "Global, regional, and national incidence, prevalence, and years lived with disability for 310 diseases and injuries, 1990-2015: a systematic analysis for the Global Burden of Disease Study 2015". Lancet. 388 (10053): 1545–1602. doi:10.1016/S0140-6736(16)31678-6. PMC 5055577. PMID 27733282.
- "Deafness". Encyclopædia Britannica Online. Encyclopædia Britannica Inc. 2011. Archived from the original on 2012-06-25. Retrieved 2012-02-22.
- "Deafness and hearing loss". World Health Organization. 2020-03-01. Retrieved 2020-07-13.
- "Hearing Loss at Birth (Congenital Hearing Loss)". American Speech-Language-Hearing Association. Retrieved 2020-07-13.
- Lasak JM, Allen P, McVay T, Lewis D (March 2014). "Hearing loss: diagnosis and management". Primary Care. 41 (1): 19–31. doi:10.1016/j.pop.2013.10.003. PMID 24439878.
- Schilder, Anne Gm; Chong, Lee Yee; Ftouh, Saoussen; Burton, Martin J. (2017). "Bilateral versus unilateral hearing aids for bilateral hearing impairment in adults". The Cochrane Database of Systematic Reviews. 12: CD012665. doi:10.1002/14651858.CD012665.pub2. ISSN 1469-493X. PMC 6486194. PMID 29256573.
- Fowler KB (December 2013). "Congenital cytomegalovirus infection: audiologic outcome". Clinical Infectious Diseases. 57 Suppl 4 (suppl_4): S182-4. doi:10.1093/cid/cit609. PMC 3836573. PMID 24257423.
- "1.1 billion people at risk of hearing loss WHO highlights serious threat posed by exposure to recreational noise" (PDF). who.int. 27 February 2015. Archived (PDF) from the original on 1 May 2015. Retrieved 2 March 2015.
- Global Burden of Disease Study 2013 Collaborators (August 2015). "Global, regional, and national incidence, prevalence, and years lived with disability for 301 acute and chronic diseases and injuries in 188 countries, 1990-2013: a systematic analysis for the Global Burden of Disease Study 2013". Lancet. 386 (9995): 743–800. doi:10.1016/s0140-6736(15)60692-4. PMC 4561509. PMID 26063472.
- WHO (2008). The global burden of disease: 2004 update (PDF). Geneva, Switzerland: World Health Organization. p. 35. ISBN 9789241563710. Archived (PDF) from the original on 2013-06-24.
- Olusanya BO, Neumann KJ, Saunders JE (May 2014). "The global burden of disabling hearing impairment: a call to action". Bulletin of the World Health Organization. 92 (5): 367–73. doi:10.2471/blt.13.128728. PMC 4007124. PMID 24839326.
- Elzouki AY (2012). Textbook of clinical pediatrics (2 ed.). Berlin: Springer. p. 602. ISBN 9783642022012. Archived from the original on 2015-12-14.
- "Community and Culture - Frequently Asked Questions". nad.org. National Association of the Deaf. Archived from the original on 27 December 2015. Retrieved 31 July 2014.
- "Sound and Fury - Cochlear Implants - Essay". www.pbs.org. PBS. Archived from the original on 2015-07-06. Retrieved 2015-08-01.
- "Understanding Deafness: Not Everyone Wants to Be 'Fixed'". www.theatlantic.com. The Atlantic. 2013-08-09. Archived from the original on 2015-07-30. Retrieved 2015-08-01.
- Williams S (2012-09-13). "Why not all deaf people want to be cured". www.telegraph.co.uk. The Daily Telegraph. Archived from the original on 2015-09-24. Retrieved 2015-08-02.
- Sparrow R (2005). "Defending Deaf Culture: The Case of Cochlear Implants" (PDF). The Journal of Political Philosophy. 13 (2): 135–152. doi:10.1111/j.1467-9760.2005.00217.x. Retrieved 30 November 2014.
- Tidy, Colin (March 2014). "Dealing with Hearing-impaired Patients". patient.info. Retrieved 16 August 2020.
- eBook: Current Diagnosis & Treatment in Otolaryngology: Head & Neck Surgery, Lalwani, Anil K. (Ed.) Chapter 44: Audiologic Testing by Brady M. Klaves, PhD, Jennifer McKee Bold, AuD, Access Medicine
- Bennett, ReBecca (May 2019). "Time for Change". The Hearing Journal. 72 (5): 16. doi:10.1097/01.HJ.0000559500.67179.7d.
- "Community and Culture - Frequently Asked Questions". nad.org. National Association of the Deaf. Archived from the original on 2015-12-27. Retrieved 27 Jan 2016.
- ANSI 7029:2000/BS 6951 Acoustics - Statistical distribution of hearing thresholds as a function of age
- ANSI S3.5-1997 Speech Intelligibility Index (SII)
- Hung SC (Aug 2015). "Hearing Loss is Associated With Risk of Alzheimer's Disease: A Case-Control Study in Older People". Journal of Epidemiology. 25 (8): 517–521. doi:10.2188/jea.JE20140147. PMC 4517989. PMID 25986155.
- Thomson RS, Auduong P, Miller AT, Gurgel RK (April 2017). "Hearing loss as a risk factor for dementia: A systematic review". Laryngoscope Investigative Otolaryngology. 2 (2): 69–79. doi:10.1002/lio2.65. PMC 5527366. PMID 28894825.
- Hoppe U, Hesse G (2017-12-18). "Hearing aids: indications, technology, adaptation, and quality control". GMS Current Topics in Otorhinolaryngology, Head and Neck Surgery. 16: Doc08. doi:10.3205/cto000147. PMC 5738937. PMID 29279726.
- Lin FR, Niparko JK, Ferrucci L (November 2011). "Hearing loss prevalence in the United States". Archives of Internal Medicine. 171 (20): 1851–2. doi:10.1001/archinternmed.2011.506. PMC 3564588. PMID 22083573.
- Park HL, O'Connell JE, Thomson RG (December 2003). "A systematic review of cognitive decline in the general elderly population". International Journal of Geriatric Psychiatry. 18 (12): 1121–34. doi:10.1002/gps.1023. PMID 14677145. S2CID 39164724.
- Loughrey DG, Kelly ME, Kelley GA, Brennan S, Lawlor BA (February 2018). "Association of Age-Related Hearing Loss With Cognitive Function, Cognitive Impairment, and Dementia: A Systematic Review and Meta-analysis". JAMA Otolaryngology–Head & Neck Surgery. 144 (2): 115–126. doi:10.1001/jamaoto.2017.2513. PMC 5824986. PMID 29222544.
- Pichora-Fuller MK, Mick P, Reed M (August 2015). "Hearing, Cognition, and Healthy Aging: Social and Public Health Implications of the Links between Age-Related Declines in Hearing and Cognition". Seminars in Hearing. 36 (3): 122–39. doi:10.1055/s-0035-1555116. PMC 4906310. PMID 27516713.
- Ford AH, Hankey GJ, Yeap BB, Golledge J, Flicker L, Almeida OP (June 2018). "Hearing loss and the risk of dementia in later life". Maturitas. 112: 1–11. doi:10.1016/j.maturitas.2018.03.004. PMID 29704910.
- Dhital A, Pey T, Stanford MR (September 2010). "Visual loss and falls: a review". Eye. 24 (9): 1437–46. doi:10.1038/eye.2010.60. PMID 20448666.
- Jiam NT, Li C, Agrawal Y (November 2016). "Hearing loss and falls: A systematic review and meta-analysis". The Laryngoscope. 126 (11): 2587–2596. doi:10.1002/lary.25927. PMID 27010669. S2CID 28871762.
- Agmon M, Lavie L, Doumas M (June 2017). "The Association between Hearing Loss, Postural Control, and Mobility in Older Adults: A Systematic Review". Journal of the American Academy of Audiology. 28 (6): 575–588. doi:10.3766/jaaa.16044. PMID 28590900. S2CID 3744742.
- Fiske A, Wetherell JL, Gatz M (April 2009). "Depression in older adults". Annual Review of Clinical Psychology. 5 (1): 363–89. doi:10.1146/annurev.clinpsy.032408.153621. PMC 2852580. PMID 19327033.
- Huang CQ, Dong BR, Lu ZC, Yue JR, Liu QX (April 2010). "Chronic diseases and risk for depression in old age: a meta-analysis of published literature". Ageing Research Reviews. Microbes and Ageing. 9 (2): 131–41. doi:10.1016/j.arr.2009.05.005. PMID 19524072. S2CID 13637437.
- Arlinger S (July 2003). "Negative consequences of uncorrected hearing loss--a review". International Journal of Audiology. 42 Suppl 2 (sup2): 2S17–20. doi:10.3109/14992020309074639. PMID 12918624. S2CID 14433959.
- Meyer C, Scarinci N, Ryan B, Hickson L (December 2015). ""This Is a Partnership Between All of Us": Audiologists' Perceptions of Family Member Involvement in Hearing Rehabilitation". American Journal of Audiology. 24 (4): 536–48. doi:10.1044/2015_AJA-15-0026. PMID 26649683. S2CID 13091175.
- Niparko JK, Tobey EA, Thal DJ, Eisenberg LS, Wang NY, Quittner AL, Fink NE (April 2010). "Spoken language development in children following cochlear implantation". JAMA. 303 (15): 1498–506. doi:10.1001/jama.2010.451. PMC 3073449. PMID 20407059.
- Kral A, O'Donoghue GM (October 2010). "Profound deafness in childhood". The New England Journal of Medicine. 363 (15): 1438–50. doi:10.1056/NEJMra0911225. PMID 20925546. S2CID 13639137.
- Hall WC (May 2017). "What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children". Maternal and Child Health Journal. 21 (5): 961–965. doi:10.1007/s10995-017-2287-y. PMC 5392137. PMID 28185206.
- Mayberry R (2007). "When timing is everything: Age of first-language acquisition effects on second-language learning". Applied Psycholinguistics. 28 (3): 537–549. doi:10.1017/s0142716407070294.
- Robinson DW, Sutton GJ (1979). "Age effect in hearing - a comparative analysis of published threshold data". Audiology. 18 (4): 320–34. doi:10.3109/00206097909072634. PMID 475664.
- Worrall L, Hickson LM (2003). "Communication activity limitations". In Worrall LE, Hickson LM (eds.). Communication disability in aging: from prevention to intervention. Clifton Park, NY: Delmar Learning. pp. 141–142.
- Akinpelu OV, Mujica-Mota M, Daniel SJ (March 2014). "Is type 2 diabetes mellitus associated with alterations in hearing? A systematic review and meta-analysis". The Laryngoscope. 124 (3): 767–76. doi:10.1002/lary.24354. PMID 23945844. S2CID 25569962.
- "Hearing Loss and Older Adults" (Last Updated June 3, 2016). National Institute on Deafness and Other Communication Disorders. 2016-01-26. Archived from the original on October 4, 2016. Retrieved September 11, 2016.
- Oishi N, Schacht J (June 2011). "Emerging treatments for noise-induced hearing loss". Expert Opinion on Emerging Drugs. 16 (2): 235–45. doi:10.1517/14728214.2011.552427. PMC 3102156. PMID 21247358.
- "CDC - NIOSH Science Blog – A Story of Impact..." cdc.gov. Archived from the original on 2015-06-13.
- In the United States, United States Environmental Protection Agency, Occupational Safety and Health Administration, National Institute for Occupational Safety and Health, Mine Safety and Health Administration, and numerous state government agencies among others, set noise standards.
- "Noise-Induced Hearing Loss: Promoting Hearing Health Among Youth". CDC Healthy Youth!. CDC. 2009-07-01. Archived from the original on 2009-12-21.
- de Laat JA, van Deelen L, Wiefferink K (September 2016). "Hearing Screening and Prevention of Hearing Loss in Adolescents". The Journal of Adolescent Health. 59 (3): 243–245. doi:10.1016/j.jadohealth.2016.06.017. PMID 27562364.
- Rehm H. "The Genetics of Deafness; A Guide for Patients and Families" (PDF). Harvard Medical School Center For Hereditary Deafness. Harvard Medical School. Archived from the original (PDF) on 2013-10-19.
- "Hearing Loss in Premature Babies". Salus Health. Pennsylvania Ear Institute. 2016. Retrieved 16 August 2020.
- Starr A, Sininger YS, Pratt H (2011). "The varieties of auditory neuropathy". Journal of Basic and Clinical Physiology and Pharmacology. 11 (3): 215–30. doi:10.1515/JBCPP.2000.11.3.215. PMID 11041385. S2CID 31806057.
- Starr A, Picton TW, Sininger Y, Hood LJ, Berlin CI (June 1996). "Auditory neuropathy". Brain. 119 ( Pt 3) (3): 741–53. doi:10.1093/brain/119.3.741. PMID 8673487.
- Rodman R, Pine HS (June 2012). "The otolaryngologist's approach to the patient with Down syndrome". Otolaryngologic Clinics of North America. 45 (3): 599–629, vii–viii. doi:10.1016/j.otc.2012.03.010. PMID 22588039.
- McKusick VA, Kniffen CL (30 January 2012). "# 118300 CHARCOT-MARIE-TOOTH DISEASE AND DEAFNESS". Online Mendelian Inheritance in Man. Retrieved 2 March 2018.
- Byl FM, Adour KK (March 1977). "Auditory symptoms associated with herpes zoster or idiopathic facial paralysis". The Laryngoscope. 87 (3): 372–9. doi:10.1288/00005537-197703000-00010. PMID 557156. S2CID 41226847.
- Jos J. Eggermont (22 February 2017). Hearing Loss: Causes, Prevention, and Treatment. Elsevier Science. pp. 198–. ISBN 978-0-12-809349-8.
- Araújo E, Zucki F, Corteletti LC, Lopes AC, Feniman MR, Alvarenga K (2012). "Hearing loss and acquired immune deficiency syndrome: systematic review". Jornal da Sociedade Brasileira de Fonoaudiologia. 24 (2): 188–92. doi:10.1590/s2179-64912012000200017. PMID 22832689.
- Curhan SG, Shargorodsky J, Eavey R, Curhan GC (September 2012). "Analgesic use and the risk of hearing loss in women". American Journal of Epidemiology. 176 (6): 544–54. doi:10.1093/aje/kws146. PMC 3530351. PMID 22933387.
- Cone B, Dorn P, Konrad-Martin D, Lister J, Ortiz C, Schairer K. "Ototoxic Medications (Medication Effects)". American Speech-Language-Hearing Association.
- Rybak LP, Mukherjea D, Jajoo S, Ramkumar V (November 2009). "Cisplatin ototoxicity and protection: clinical and experimental studies". The Tohoku Journal of Experimental Medicine. 219 (3): 177–86. doi:10.1620/tjem.219.177. PMC 2927105. PMID 19851045.
- Rybak LP, Ramkumar V (October 2007). "Ototoxicity". Kidney International. 72 (8): 931–5. doi:10.1038/sj.ki.5002434. PMID 17653135.
- "Tox Town – Toluene – Toxic chemicals and environmental health risks where you live and work – Text Version". toxtown.nlm.nih.gov. Archived from the original on 2010-06-09. Retrieved 2010-06-09.
- Morata TC. "Addressing the Risk for Hearing Loss from Industrial Chemicals". CDC. Archived from the original on 2009-01-22. Retrieved 2008-06-05.
- Johnson A (2008-09-09). "Occupational exposure to chemicals and hearing impairment – the need for a noise notation" (PDF). Karolinska Institutet: 1–48. Archived from the original (PDF) on 2012-09-06. Retrieved 2009-06-19.
- Venet T, Campo P, Thomas A, Cour C, Rieger B, Cosnier F (March 2015). "The tonotopicity of styrene-induced hearing loss depends on the associated noise spectrum". Neurotoxicology and Teratology. 48: 56–63. doi:10.1016/j.ntt.2015.02.003. PMID 25689156.
- Fuente A, Qiu W, Zhang M, Xie H, Kardous CA, Campo P, Morata TC (March 2018). "Use of the kurtosis statistic in an evaluation of the effects of noise and solvent exposures on the hearing thresholds of workers: An exploratory study" (PDF). The Journal of the Acoustical Society of America. 143 (3): 1704–1710. Bibcode:2018ASAJ..143.1704F. doi:10.1121/1.5028368. PMID 29604694.
- "Preventing Hearing Loss Caused by Chemical (Ototoxicity) and Noise Exposure" (PDF). Retrieved 4 April 2018.
- Oesterle EC (March 2013). "Changes in the adult vertebrate auditory sensory epithelium after trauma". Hearing Research. 297: 91–8. doi:10.1016/j.heares.2012.11.010. PMC 3637947. PMID 23178236.
- Eggermont JJ (January 2017). "Acquired hearing loss and brain plasticity". Hearing Research. 343: 176–190. doi:10.1016/j.heares.2016.05.008. PMID 27233916. S2CID 3568426.
- "How We Hear". American Speech-Language-Hearing Association. Retrieved 2 March 2018.
- "How We Hear". Archived from the original on 1 May 2017.
- "How Do We Hear?". NIDCD. January 3, 2018.
- "What Is Noise-Induced Hearing Loss?". NIH - Noisy Planet. December 27, 2017.
- "CDC - Noise and Hearing Loss Prevention - Preventing Hearing Loss, Risk Factors - NIOSH Workplace Safety and Health Topic". NIOSH/CDC. 5 February 2018. Retrieved 3 March 2018.
- "Age-Related Hearing Loss". NIDCD. 18 August 2015.
- Shojaeemend H, Ayatollahi H (October 2018). "Automated Audiometry: A Review of the Implementation and Evaluation Methods". Healthcare Informatics Research. 24 (4): 263–275. doi:10.4258/hir.2018.24.4.263. PMC 6230538. PMID 30443414.
- Keidser G, Convery E (April 2016). "Self-Fitting Hearing Aids: Status Quo and Future Predictions". Trends in Hearing. 20: 233121651664328. doi:10.1177/2331216516643284. PMC 4871211. PMID 27072929.
- Jansen S, Luts H, Dejonckere P, van Wieringen A, Wouters J (2013). "Efficient hearing screening in noise-exposed listeners using the digit triplet test" (PDF). Ear and Hearing. 34 (6): 773–8. doi:10.1097/AUD.0b013e318297920b. PMID 23782715. S2CID 11858630.
- Lieu JE (May 2004). "Speech-language and educational consequences of unilateral hearing loss in children". Archives of Otolaryngology–Head & Neck Surgery. 130 (5): 524–30. doi:10.1001/archotol.130.5.524. PMID 15148171.
- Graham eb, Baguley DM (2009). Ballantyne's Deafness (7th ed.). Chichester: John Wiley & Sons. p. 16. ISBN 978-0-470-74441-3. Archived from the original on 2017-09-08.
- "Childhood hearing loss: act now, here's how!" (PDF). WHO. 2016. p. 6. Archived (PDF) from the original on 6 March 2016. Retrieved 2 March 2016.
Over 30% of childhood hearing loss is caused by diseases such as measles, mumps, rubella, meningitis and ear infections. These can be prevented through immunization and good hygiene practices. Another 17% of childhood hearing loss results from complications at birth, including prematurity, low birth weight, birth asphyxia and neonatal jaundice. Improved maternal and child health practices would help to prevent these complications. The use of ototoxic medicines in expectant mothers and newborns, which is responsible for 4% of childhood hearing loss, could potentially be avoided.
- "Preventing Noise-Induced Hearing Loss". Centers for Disease Control and Prevention. 8 June 2020. Retrieved 13 July 2020.
- Davis A, McMahon CM, Pichora-Fuller KM, Russ S, Lin F, Olusanya BO, Chadha S, Tremblay KL (April 2016). "Aging and Hearing Health: The Life-course Approach". The Gerontologist. 56 Suppl 2 (Suppl_2): S256-67. doi:10.1093/geront/gnw033. PMC 6283365. PMID 26994265.
- El Dib RP, Mathew JL, Martins RH (April 2012). El Dib RP (ed.). "Interventions to promote the wearing of hearing protection". The Cochrane Database of Systematic Reviews. 4 (4): CD005234. doi:10.1002/14651858.CD005234.pub5. PMID 22513929. (Retracted, see doi:10.1002/14651858.cd005234.pub6)
- Stucken EZ, Hong RS (October 2014). "Noise-induced hearing loss: an occupational medicine perspective". Current Opinion in Otolaryngology & Head and Neck Surgery. 22 (5): 388–93. doi:10.1097/moo.0000000000000079. PMID 25188429. S2CID 22846225.
- "Noise and Hearing Loss Prevention". Centers for Disease Control and Prevention: National Institute for Occupational Safety and Health. Archived from the original on July 9, 2016. Retrieved July 15, 2016.
- "Safety and Health Topics: Occupational Noise Exposure". Occupational Safety and Health Administration. Archived from the original on May 6, 2016. Retrieved July 15, 2015.
- "Controls for Noise Exposure". Centers for Disease Control and Prevention: National Institute for Occupational Safety and Health. Archived from the original on July 4, 2016. Retrieved July 15, 2016.
- "Excellence in Hearing Loss Prevention Award". Safe-in-Sound. Archived from the original on May 27, 2016. Retrieved July 15, 2016.
- "Buy Quiet". Centers for Disease Control and Prevention: National Institute for Occupational Safety and Health. Archived from the original on August 8, 2016. Retrieved July 15, 2016.
- "PowerTools Database". Centers for Disease Control and Prevention: National Institute for Occupational Safety and Health. Archived from the original on June 30, 2016. Retrieved July 15, 2016.
- "CDC - NIOSH Publications and Products - Occupationally-Induced Hearing Loss (2010-136)". CDC.gov. 2010. doi:10.26616/NIOSHPUB2010136. Archived from the original on 2016-05-12.
- Tikka C, Verbeek JH, Kateman E, Morata TC, Dreschler WA, Ferrite S (July 2017). "Interventions to prevent occupational noise-induced hearing loss". The Cochrane Database of Systematic Reviews. 7: CD006396. doi:10.1002/14651858.cd006396.pub4. PMC 6353150. PMID 28685503.
- Institute for Occupational Safety and Health of the German Social Accident Insurance. "Hearing impairment calculator".
- Moyer VA (2012-11-06). "Screening for Hearing Loss in Older Adults: U.S. Preventive Services Task Force Recommendation Statement". Annals of Internal Medicine. The American College of Physicians. 157 (9): 655–661. doi:10.7326/0003-4819-157-9-201211060-00526. PMID 22893115. S2CID 29265879. Archived from the original on 2012-10-27. Retrieved 2012-11-06.
- "Who Should be Screened for Hearing Loss". www.asha.org. Archived from the original on 2017-03-17. Retrieved 2017-03-17.
- "Hearing and Other Sensory or Communication Disorders | Healthy People 2020". www.healthypeople.gov. Archived from the original on 2017-03-18. Retrieved 2017-03-17.
- Chandrasekhar SS, Tsai Do BS, Schwartz SR, Bontempo LJ, Faucett EA, Finestone SA, et al. (August 2019). "Clinical Practice Guideline: Sudden Hearing Loss (Update) Executive Summary". Otolaryngology–Head and Neck Surgery. 161 (2): 195–210. doi:10.1177/0194599819859883. PMID 31369349.
- World Health Organization, WHO (2017). Global costs of unaddressed hearing loss and cost-effectiveness of interventions: a WHO report. Geneva: World Health Organization. pp. 5–10. ISBN 978-92-4-151204-6.
- ISO, International Organization for Standardization (2013). Acoustics—Estimation of noise induced hearing loss. Geneva, Switzerland: International Organization for Standardization. p. 20.
- Passchier-Vermeer, W (1969). Hearing loss due to exposure to steady state broadband noise. Delft, Netherlands: TNO, Instituut voor gezondheidstechniek. pp. Report 35 Identifier 473589.
- Johansson M, Arlinger S (2004-07-07). "Reference data for evaluation of occupationally noise-induced hearing loss". Noise & Health. 6 (24): 35–41. PMID 15703139.
- Tambs K, Hoffman HJ, Borchgrevink HM, Holmen J, Engdahl B (May 2006). "Hearing loss induced by occupational and impulse noise: results on threshold shifts by frequencies, age and gender from the Nord-Trøndelag Hearing Loss Study". International Journal of Audiology. 45 (5): 309–17. doi:10.1080/14992020600582166. PMID 16717022. S2CID 35123521.
- Jun HJ, Hwang SY, Lee SH, Lee JE, Song JJ, Chae S (March 2015). "The prevalence of hearing loss in South Korea: data from a population-based study". The Laryngoscope. 125 (3): 690–4. doi:10.1002/lary.24913. PMID 25216153. S2CID 11731976.
- Flamme GA, Deiters K, Needham T (March 2011). "Distributions of pure-tone hearing threshold levels among adolescents and adults in the United States by gender, ethnicity, and age: Results from the US National Health and Nutrition Examination Survey". International Journal of Audiology. 50 Suppl 1: S11-20. doi:10.3109/14992027.2010.540582. PMID 21288063. S2CID 3396617.
- Rodríguez Valiente A, Roldán Fidalgo A, García Berrocal JR, Ramírez Camacho R (August 2015). "Hearing threshold levels for an otologically screened population in Spain". International Journal of Audiology. 54 (8): 499–506. doi:10.3109/14992027.2015.1009643. PMID 25832123.
- Hoffman HJ, Dobie RA, Losonczy KG, Themann CL, Flamme GA (March 2017). "Declining Prevalence of Hearing Loss in US Adults Aged 20 to 69 Years". JAMA Otolaryngology–Head & Neck Surgery. 143 (3): 274–285. doi:10.1001/jamaoto.2016.3527. PMC 5576493. PMID 27978564.
- Carroll YI, Eichwald J, Scinicariello F, Hoffman HJ, Deitchman S, Radke MS, Themann CL, Breysse P (February 2017). "Vital Signs: Noise-Induced Hearing Loss Among Adults - United States 2011-2012". MMWR. Morbidity and Mortality Weekly Report. 66 (5): 139–144. doi:10.15585/mmwr.mm6605e3. PMC 5657963. PMID 28182600.
- "American Sign Language". NIDCD. 2015-08-18. Archived from the original on 15 November 2016. Retrieved 17 November 2016.
- Baker, Charlotte; Carol Padden (1978). American Sign Language: A Look at Its Story, Structure and Community.
- Padden, Carol A.; Humphries, Tom (Tom L.) (2005). Inside Deaf Culture. Cambridge, MA: Harvard University Press. p. 1. ISBN 978-0-674-01506-7.
- Jamie Berke (9 February 2010). "Deaf Culture - Big D Small D". About.com. Retrieved 22 November 2013.
- Ladd, Paddy (2003). Understanding Deaf Culture: In Search of Deafhood. Multilingual Matters. p. 502. ISBN 978-1-85359-545-5.
- Lane, Harlan L.; Richard Pillard; Ulf Hedberg (2011). The People of the Eye: Deaf Ethnicity and Ancestry. Oxford University Press. p. 269. ISBN 978-0-19-975929-3.
- Coghlan A (2005-02-14). "Gene therapy is first deafness 'cure'". NewScientist.com News Service. Archived from the original on 2008-09-14.
- Gubbels SP, Woessner DW, Mitchell JC, Ricci AJ, Brigande JV (September 2008). "Functional auditory hair cells produced in the mammalian cochlea by in utero gene transfer". Nature. 455 (7212): 537–41. Bibcode:2008Natur.455..537G. doi:10.1038/nature07265. PMC 2925035. PMID 18754012.
- Gewin V (2012-09-12). "Human embryonic stem cells restore gerbil hearing". Nature News. doi:10.1038/nature.2012.11402. S2CID 87417776. Archived from the original on 2012-12-14. Retrieved 2013-01-22.
- Ander D. "Drug may reverse permanent deafness by regenerating cells of inner ear: Harvard study". National Post. National Post. Archived from the original on 2013-02-16.
- "Hearing Health Foundation". HHF. Archived from the original on 2013-01-27. Retrieved 2013-01-22.
- "Biomedical research – Action On Hearing Loss". RNID. Archived from the original on 2013-01-23. Retrieved 2013-01-22.
- Gallacher J (9 July 2015). "Deafness could be treated by virus, say scientists". UK: BBC. Archived from the original on 9 July 2015. Retrieved 9 July 2015.
- Askew C, Rochat C, Pan B, Asai Y, Ahmed H, Child E, Schneider BL, Aebischer P, Holt JR (July 2015). "Tmc gene therapy restores auditory function in deaf mice". Science Translational Medicine. 7 (295): 295ra108. doi:10.1126/scitranslmed.aab1996. PMC 7298700. PMID 26157030.
- Isgrig K, Shteamer JW, Belyantseva IA, Drummond MC, Fitzgerald TS, Vijayakumar S, Jones SM, Griffith AJ, Friedman TB, Cunningham LL, Chien WW (March 2017). "Gene Therapy Restores Balance and Auditory Functions in a Mouse Model of Usher Syndrome". Molecular Therapy. 25 (3): 780–791. doi:10.1016/j.ymthe.2017.01.007. PMC 5363211. PMID 28254438.
- Landegger LD, Pan B, Askew C, Wassmer SJ, Gluck SD, Galvin A, Taylor R, Forge A, Stankovic KM, Holt JR, Vandenberghe LH (March 2017). "A synthetic AAV vector enables safe and efficient gene transfer to the mammalian inner ear". Nature Biotechnology. 35 (3): 280–284. doi:10.1038/nbt.3781. PMC 5340646. PMID 28165475.
- Pan B, Askew C, Galvin A, Heman-Ackah S, Asai Y, Indzhykulian AA, Jodelka FM, Hastings ML, Lentz JJ, Vandenberghe LH, Holt JR, Géléoc GS (March 2017). "Gene therapy restores auditory and vestibular function in a mouse model of Usher syndrome type 1c". Nature Biotechnology. 35 (3): 264–272. doi:10.1038/nbt.3801. PMC 5340578. PMID 28165476.
- Carlson NR (2010). Physiology of behavior (11th ed.). Upper Saddle River, New Jersey: Pearson Education, Inc.
- Hearing loss at Curlie
- National Institute on Deafness and Other Communication Disorders
- World Health Organization Global Costs of unaddressed hearing loss and cost-effectiveness of interventions, 2017
- World Health Organization, Deafness and Hearing Loss
- "Hearing Loss in Children". Hearing Loss in Children Home. Retrieved 17 March 2017.
- Occupational Noise and Hearing Loss Prevention (NIOSH)
- OSHA-NIOSH 2018. Preventing Hearing Loss Caused by Chemical (Ototoxicity) and Noise Exposure Safety and Health Information Bulletin (SHIB), Occupational Safety and Health Administration and the National Institute for Occupational Safety and Health. SHIB 03-08-2018. DHHS (NIOSH) Publication No. 2018-124.
- Centers for Disease Control and Prevention Vital Signs- Hearing Loss Loud Noises Damage Hearing
- "Using Total Worker Health Concepts to Address Hearing Health (2019-155)". 27 September 2019. doi:10.26616/NIOSHPUB2019155revised (inactive 31 May 2021). Retrieved 4 March 2020. Cite journal requires
|journal=(help)CS1 maint: DOI inactive as of May 2021 (link) | https://wiki-offline.jakearchibald.com/wiki/Hearing_impairment | 21 |
History of the Han dynasty
The Han dynasty (206 BCE – 220 CE), founded by the peasant rebel leader Liu Bang (known posthumously as Emperor Gaozu of Han),[note 1] was the second imperial dynasty of China. It followed the Qin dynasty (221–206 BCE), which had unified the Warring States of China by conquest. Interrupted briefly by the Xin dynasty (9–23 CE) of Wang Mang, the Han dynasty is divided into two periods: the Western Han (206 BCE – 9 CE) and the Eastern Han (25–220 CE). These appellations are derived from the locations of the capital cities Chang'an and Luoyang, respectively. The third and final capital of the dynasty was Xuchang, where the court moved in 196 CE during a period of political turmoil and civil war.
The Han dynasty ruled in an era of Chinese cultural consolidation, political experimentation, relative economic prosperity and maturity, and great technological advances. There was unprecedented territorial expansion and exploration initiated by struggles with non-Chinese peoples, especially the nomadic Xiongnu of the Eurasian Steppe. The Han emperors were initially forced to acknowledge the rival Xiongnu Chanyus as their equals, yet in reality the Han was an inferior partner in a tributary and royal marriage alliance known as heqin. This agreement was broken when Emperor Wu of Han (r. 141–87 BCE) launched a series of military campaigns which eventually caused the fissure of the Xiongnu Federation and redefined the borders of China. The Han realm was expanded into the Hexi Corridor of modern Gansu province, the Tarim Basin of modern Xinjiang, modern Yunnan and Hainan, modern northern Vietnam, modern North Korea, and southern Outer Mongolia. The Han court established trade and tributary relations with rulers as far west as the Arsacids, to whose court at Ctesiphon in Mesopotamia the Han monarchs sent envoys. Buddhism first entered China during the Han, spread by missionaries from Parthia and the Kushan Empire of northern India and Central Asia.
From its beginning, the Han imperial court was threatened by plots of treason and revolt from its subordinate kingdoms, the latter eventually ruled only by royal Liu family members. Initially, the eastern half of the empire was indirectly administered through large semi-autonomous kingdoms which pledged loyalty and a portion of their tax revenues to the Han emperors, who ruled directly over the western half of the empire from Chang'an. Gradual measures were introduced by the imperial court to reduce the size and power of these kingdoms, until a reform of the middle 2nd century BCE abolished their semi-autonomous rule and staffed the kings' courts with central government officials. Yet much more volatile and consequential for the dynasty was the growing power of both consort clans (of the empress) and the eunuchs of the palace. In 92 CE, the eunuchs entrenched themselves for the first time in the issue of the emperors' succession, causing a series of political crises which culminated in 189 CE with their downfall and slaughter in the palaces of Luoyang. This event triggered an age of civil war as the country became divided by regional warlords vying for power. Finally, in 220 CE, the son of an imperial chancellor and king accepted the abdication of the last Han emperor, who was deemed to have lost the Mandate of Heaven according to Dong Zhongshu's (179–104 BCE) cosmological system that intertwined the fate of the imperial government with Heaven and the natural world. Following the Han, China was split into three states: Cao Wei, Shu Han, and Eastern Wu; these were reconsolidated into one empire by the Jin dynasty (266–420 CE).
Fall of Qin and Chu–Han contention
Collapse of Qin
The Zhou dynasty (c. 1050–256 BCE) had established the State of Qin in western China as an outpost to breed horses and to act as a defensive buffer against nomadic armies of the Rong, Qiang, and Di peoples. After conquering the six other Warring States (i.e. Han, Zhao, Wei, Chu, Yan, and Qi) by 221 BCE, the King of Qin, Ying Zheng, unified China under one empire divided into 36 centrally-controlled commanderies. With control over much of China proper, he affirmed his enhanced prestige by taking the unprecedented title huangdi, or 'emperor', known thereafter as Qin Shi Huang (i.e. the first emperor of Qin). Han-era historians would accuse his regime of employing ruthless methods to preserve his rule.
Qin Shi Huang died of natural causes in 210 BCE. In 209 BCE the conscription officers Chen Sheng and Wu Guang, leading 900 conscripts through the rain, failed to meet an arrival deadline; the Standard Histories claim that the Qin punishment for this delay would have been execution. To avoid this, Chen and Wu started a rebellion against Qin, known as the Dazexiang Uprising, but they were thwarted by the Qin general Zhang Han in 208 BCE; both Wu and Chen were subsequently assassinated by their own soldiers. Yet by this point others had rebelled, among them Xiang Yu (d. 202 BCE) and his uncle Xiang Liang, men from a leading family of the Chu aristocracy. They were joined by Liu Bang, a man of peasant origin and supervisor of convicts in Pei County. Mi Xin, grandson of King Huai I of Chu, was declared King Huai II of Chu at his powerbase of Pengcheng (modern Xuzhou) with the support of the Xiangs, while other kingdoms soon formed in opposition to Qin. Despite this, in 208 BCE Xiang Liang was killed in a battle with Zhang Han, who subsequently attacked Zhao Xie the King of Zhao at his capital of Handan, forcing him to flee to Julu, which Zhang put under siege. However, the new kingdoms of Chu, Yan, and Qi came to Zhao's aid; Xiang Yu defeated Zhang at Julu and in 207 BCE forced Zhang to surrender.
While Xiang was occupied at Julu, King Huai II sent Liu Bang to capture the Qin heartland of Guanzhong with an agreement that the first officer to capture this region would become its king. In late 207 BCE, the Qin ruler Ziying, who had claimed the reduced title of King of Qin, had his chief eunuch Zhao Gao killed after Zhao had orchestrated the deaths of Chancellor Li Si in 208 BCE and the second Qin emperor Qin Er Shi in 207 BCE. Liu Bang gained Ziying's submission and secured the Qin capital of Xianyang; persuaded by his chief advisor Zhang Liang (d. 189 BCE) not to let his soldiers loot the city, he instead sealed up its treasury.
Contention with Chu
The Standard Histories allege that when Xiang Yu arrived at Xianyang two months later in early 206 BCE, he looted it, burned it to the ground, and had Ziying executed. In that year, Xiang Yu offered King Huai II the title of Emperor Yi of Chu and sent him to a remote frontier where he was assassinated; Xiang Yu then assumed the title Hegemon-King of Western Chu (西楚霸王) and became the leader of a confederacy of 18 kingdoms. At the Feast at Hong Gate, Xiang Yu considered having Liu Bang assassinated, but Liu, realizing that Xiang was considering killing him, escaped during the middle of the feast. In a slight towards Liu Bang, Xiang Yu carved Guanzhong into three kingdoms with former Qin general Zhang Han and two of his subordinates as kings; Liu Bang was granted the frontier Kingdom of Han in Hanzhong, where he would pose less of a political challenge to Xiang Yu.
In the summer of 206 BCE, Liu Bang heard of Emperor Yi's fate and decided to rally some of the new kingdoms to oppose Xiang Yu, leading to a four-year war known as the Chu–Han Contention. Liu initially made a direct assault against Pengcheng and captured it while Xiang was battling another king who resisted him—Tian Guang (田廣) the King of Qi—but his forces collapsed upon Xiang's return to Pengcheng; he was saved by a storm which delayed the arrival of Chu's troops, although his father Liu Zhijia and wife Lü Zhi were captured by Chu forces. Liu barely escaped another defeat at Xingyang, but Xiang was unable to pursue him because Liu Bang induced Ying Bu, the King of Huainan, to rebel against Xiang. After Liu occupied Chenggao along with a large Qin grain storage, Xiang threatened to kill Liu's father if he did not surrender, but Liu did not give in to Xiang's threats.
With Chenggao and his food supplies lost, and with Liu's general Han Xin (d. 196 BCE) having conquered Zhao and Qi to Chu's north, in 203 BCE Xiang offered to release Liu's relatives from captivity and split China into political halves: the west would belong to Han and the east to Chu. Although Liu accepted the truce, it was short-lived, and in 202 BCE at Gaixia in modern Anhui, the Han armies forced Xiang to flee from his fortified camp in the early morning with only 800 cavalry, pursued by 5,000 Han cavalry. After several bouts of fighting, Xiang became surrounded at the banks of the Yangzi River, where he committed suicide. Liu took the title of emperor, and is known to posterity as Emperor Gaozu of Han (r. 202–195 BCE).
Reign of Gaozu
Consolidation, precedents, and rivals
Emperor Gaozu initially made Luoyang his capital, but then moved it to Chang'an (near modern Xi'an, Shaanxi) due to concerns over natural defences and better access to supply routes. Following Qin precedent, Emperor Gaozu adopted the administrative model of a tripartite cabinet (formed by the Three Excellencies) along with nine subordinate ministries (headed by the Nine Ministers). Despite Han statesmen's general condemnation of Qin's harsh methods and Legalist philosophy, the first Han law code compiled by Chancellor Xiao He in 200 BCE seems to have borrowed much from the structure and substance of the Qin code (excavated texts from Shuihudi and Zhangjiashan in modern times have reinforced this suspicion).
From Chang'an, Gaozu ruled directly over 13 commanderies (increased to 16 by his death) in the western portion of the empire. In the eastern portion, he established 10 semi-autonomous kingdoms (Yan, Dai, Zhao, Qi, Liang, Chu, Huai, Wu, Nan, and Changsha) that he bestowed on his most prominent followers to placate them. Due to alleged acts of rebellion and even alliances with the Xiongnu—a northern nomadic people—by 196 BCE Gaozu had replaced nine of them with members of the royal family.
According to Michael Loewe, the administration of each kingdom was "a small-scale replica of the central government, with its chancellor, royal counsellor, and other functionaries." The kingdoms were to transmit census information and a portion of their taxes to the central government. Although they were responsible for maintaining an armed force, kings were not authorized to mobilize troops without explicit permission from the capital.
Wu Rui (吳芮), King of Changsha, was the only remaining king not of the Liu clan. When Wu Rui's great-grandson Wu Zhu (吳著) or Wu Chan (吳產) died heirless in 157 BCE, Changsha was transformed into an imperial commandery and later a Liu family principality. South of Changsha, Gaozu sent Lu Jia (陸賈) as ambassador to the court of Zhao Tuo to acknowledge the latter's sovereignty over Nanyue (Vietnamese: Triệu Dynasty; in modern Southwest China and northern Vietnam).
Xiongnu and heqin
The Qin general Meng Tian had forced Toumen, the Chanyu of the Xiongnu, out of the Ordos Desert in 215 BCE, but Toumen's son and successor Modu Chanyu built the Xiongnu into a powerful empire by subjugating many other tribes. By the time of Modu's death in 174 BCE, the Xiongnu domains stretched from what is now northeast China and Mongolia to the Altai and Tian Shan mountain ranges in Central Asia. The Chinese feared incursions by the Xiongnu under the guise of trade and were concerned that Han-manufactured iron weapons would fall into Xiongnu hands. Gaozu thus enacted a trade embargo against the Xiongnu. To compensate the Chinese border merchants of the northern kingdoms of Dai and Yan for lost trade, he made them government officials with handsome salaries. Outraged by this embargo, Modu Chanyu planned to attack Han. When the Xiongnu invaded Taiyuan in 200 BCE and were aided by the defector King Xin of Hán (韓/韩, not to be confused with the ruling Hàn 漢 dynasty, or the general Han Xin), Gaozu personally led his forces through the snow to Pingcheng (near modern Datong, Shanxi). In the ensuing Battle of Baideng, Gaozu's forces were heavily surrounded for seven days; running short of supplies, he was forced to flee.
After this defeat, the court adviser Liu Jing (劉敬, originally named Lou Jing [婁敬]) convinced the emperor to create a peace treaty and marriage alliance with the Xiongnu Chanyu called the heqin agreement. By this arrangement, established in 198 BCE, the Han hoped to modify the Xiongnu's nomadic values with Han luxury goods given as tribute (silks, wine, foodstuffs, etc.) and to make Modu's half-Chinese successor a subordinate to his grandfather Gaozu. The exact amounts of annual tribute that Emperor Gaozu promised to the Xiongnu shortly after the defeat are unknown. In 89 BCE, however, Hulugu Chanyu (狐鹿姑) (r. 95–85 BCE) requested a renewal of the heqin agreement with the annual tribute increased to 400,000 L (11,350 U.S. bu) of wine, 100,000 L (2,840 U.S. bu) of grain, and 10,000 bales of silk; the earlier amounts would therefore have been less than these figures.
Although the treaty acknowledged both huangdi and chanyu as equals, Han was in fact the inferior partner since it was forced to pay tribute to appease the militarily powerful Xiongnu. Emperor Gaozu was initially set to give his only daughter to Modu, but owing to the opposition of Empress Lü, he instead elevated a female relative to the rank of princess and married her to Modu. Until the 130s BCE, the offering of princess brides and tributary items scarcely satisfied the Xiongnu, who often raided Han's northern frontiers and violated the 162 BCE treaty that established the Great Wall as the border between Han and Xiongnu.
Empress Dowager Lü's rule
When Ying Bu rebelled in 195 BCE, Emperor Gaozu personally led the troops against Ying and received an arrow wound which allegedly led to his death the following year. His heir apparent Liu Ying took the throne and is posthumously known as Emperor Hui of Han (r. 195–188 BCE). Shortly afterwards Gaozu's widow Lü Zhi, now empress dowager, had Liu Ruyi, a potential claimant to the throne, poisoned and his mother, the Consort Qi, brutally mutilated. When the teenage Emperor Hui discovered the cruel acts committed by his mother, Loewe says that he "did not dare disobey her."
Hui's brief reign saw the completion of the defensive city walls around the capital Chang'an in 190 BCE; these brick and rammed earth walls were originally 12 m (40 ft) tall and formed a rough rectangular ground plan (with some irregularities due to topography); their ruins still stand today. This urban construction project was completed by 150,000 conscript laborers. Emperor Hui's reign saw the repeal of old Qin laws banning certain types of literature and was characterized by a cautious approach to foreign policy, including the renewal of the heqin agreement with the Xiongnu and Han's acknowledgment of the independent sovereignty of the Kings of Donghai and Nanyue.
Regency and downfall of the Lü clan
Since Emperor Hui did not sire any children with his empress Zhang Yan, after his death in 188 BCE, Lü Zhi, now grand empress dowager and regent, chose his successor from among his sons with other consorts. She first placed Emperor Qianshao of Han (r. 188–184 BCE) on the throne, but then replaced him with another puppet ruler, Emperor Houshao of Han (r. 184–180 BCE). She not only issued imperial edicts during their reigns, but she also appointed members of her own clan as kings against Emperor Gaozu's explicit prohibition; other clan members became key military officers and civil officials.
The court under Lü Zhi was not only unable to deal with a Xiongnu invasion of Longxi Commandery (in modern Gansu) in which 2,000 Han prisoners were taken, but it also provoked a conflict with Zhao Tuo, King of Nanyue, by imposing a ban on exporting iron and other trade items to his southern kingdom. Proclaiming himself Emperor Wu of Nanyue (南越武帝) in 183 BCE, Zhao Tuo attacked the Han Kingdom of Changsha in 181 BCE. He did not rescind his rival imperial title until the Han ambassador Lu Jia again visited Nanyue's court during the reign of Emperor Wen.
After Empress Dowager Lü's death in 180 BCE, it was alleged that the Lü clan plotted to overthrow the Liu dynasty, and Liu Xiang the King of Qi (Emperor Gaozu's grandson) rose against the Lüs. Before the central government and Qi forces engaged each other, the Lü clan was ousted from power and destroyed by a coup led by the officials Chen Ping and Zhou Bo at Chang'an. Although Liu Xiang had resisted the Lüs, he was passed over to become emperor because he had mobilized troops without permission from the central government and because his mother's family possessed the same ambitious attitude as the Lüs. Consort Bo, the mother of Liu Heng, King of Dai, was considered to possess a noble character, so her son was chosen as successor to the throne; he is known posthumously as Emperor Wen of Han (r. 180–157 BCE).
Reign of Wen and Jing
Reforms and policies
During the "Rule of Wen and Jing" (the era named after Emperor Wen and his successor Emperor Jing (r. 157–141 BCE)), the Han Empire witnessed greater economic and dynastic stability, while the central government assumed more power over the realm. In an attempt to distance itself from the harsh rule of Qin, the court under these rulers abolished legal punishments involving mutilation in 167 BCE, declared eight widespread amnesties between 180–141 BCE, and reduced the tax rate on households' agricultural produce from one-fifteenth to one-thirtieth in 168 BCE. It was abolished altogether the following year, but reinstated at the rate of one-thirtieth in 156 BCE.
Government policies were influenced by the proto-Daoist Huang-Lao ideology, a mix of political and cosmological precepts given patronage by Wen's wife Empress Dou (d. 135 BCE), who was empress dowager during Jing's reign and grand empress dowager during the early reign of his successor Emperor Wu (r. 141–87 BCE). Huang-Lao, named after the mythical Yellow Emperor and the 6th-century-BCE philosopher Laozi, viewed the former as the founder of ordered civilization; this was unlike the Confucians, who gave that role to legendary sage kings Yao and Shun. Han imperial patrons of Huang-Lao sponsored the policy of "nonaction" or wuwei (a central concept of Laozi's Daodejing), which claimed that rulers should interfere as little as possible if administrative and legal systems were to function smoothly. The influence of Huang-Lao doctrines on state affairs became eclipsed with the formal adoption of Confucianism as state ideology during Wu's reign and the later view that Laozi, not the Yellow Emperor, was the originator of Daoist practices.
From 179–143 BCE, the number of kingdoms was increased from eleven to twenty-five and the number of commanderies from nineteen to forty. This was not due to a large territorial expansion, but because kingdoms that had rebelled against Han rule or failed to produce an heir were significantly reduced in size or even abolished and carved into new commanderies or smaller kingdoms.
Rebellion of the Seven States
When Liu Xian (劉賢), the heir apparent of Wu, once made an official visit to the capital during Wen's reign, he played a board game called liubo with the then crown prince Liu Qi, the future Emperor Jing. During a heated dispute, Liu Qi threw the game board at Liu Xian, killing him. This outraged his father Liu Pi (劉濞), the King of Wu and a nephew of Emperor Gaozu, who was nonetheless obliged to pledge allegiance to Liu Qi once the latter took the throne.
Still bitter over the death of his son and fearful that he would be targeted in a wave of reduction of kingdom sizes that Emperor Jing carried out under the advice of Imperial Counselor Chao Cuo (d. 154 BCE), the King of Wu led a revolt against Han in 154 BCE as the head of a coalition with six other rebelling kingdoms: Chu, Zhao, Jiaoxi, Jiaodong, Zichuan, and Jinan, which also feared such reductions. However, Han forces commanded by Zhou Yafu were ready and able to put down the revolt, destroying the coalition of seven states against Han. Several kingdoms were abolished (although later reinstated) and others significantly reduced in size. Emperor Jing issued an edict in 145 BCE which outlawed the independent administrative staffs in the kingdoms and abolished all their senior offices except for the chancellor, who was henceforth reduced in status and appointed directly by the central government. His successor Emperor Wu would diminish their power even further by abolishing the kingdoms' tradition of primogeniture and ordering that each king had to divide up his realm between all of his male heirs.
Relations with the Xiongnu
In 177 BCE, the Xiongnu Wise King of the Right raided the non-Chinese tribes living under Han protection in the northwest (modern Gansu). In 176 BCE, Modu Chanyu sent a letter to Emperor Wen informing him that the Wise King, allegedly insulted by Han officials, had acted without the Chanyu's permission, and that he had therefore punished the Wise King by forcing him to conduct a military campaign against the nomadic Yuezhi. Yet this event was merely part of a larger Xiongnu campaign against the peoples to the north and west of Han China, during which the bulk of the Yuezhi were expelled from the Hexi Corridor (fleeing west into Central Asia) and the sedentary state of Loulan in the Lop Nur salt marsh, the nomadic Wusun of the Tian Shan range, and twenty-six other states east of Samarkand (Sogdia) were subjugated to Xiongnu hegemony. Modu Chanyu's implied threat that he would invade China if the heqin agreement was not renewed sparked a debate in Chang'an; although officials such as Chao Cuo and Jia Yi (d. 169 BCE) wanted to reject the heqin policy, Emperor Wen favored renewal of the agreement. Modu Chanyu died before the Han tribute reached him, but his successor Laoshang Chanyu (r. 174–160 BCE) renewed the heqin agreement and negotiated the opening of border markets. Lifting the ban on trade significantly reduced the frequency and size of Xiongnu raids, which had previously required tens of thousands of Han troops to be stationed at the border. However, Laoshang Chanyu and his successor Junchen Chanyu (r. 160–126 BCE) continued to violate Han's territorial sovereignty by making incursions despite the treaty. While Laoshang Chanyu continued his father's conquests by driving the Yuezhi into the Ili River valley, the Han quietly built up its cavalry forces to later challenge the Xiongnu.
Reign of Wu
Confucianism and government recruitment
Although Emperor Gaozu did not subscribe to the philosophy and system of ethics attributed to Confucius (fl. 6th century BCE), he did enlist the aid of Confucians such as Lu Jia and Shusun Tong; in 196 BCE he established the first Han regulation for recruiting men of merit into government service, which Robert P. Kramers calls the "first major impulse toward the famous examination system." Emperors Wen and Jing appointed Confucian academicians to court, yet not all academicians at their courts specialized in what would later become orthodox Confucian texts. For several years after Liu Che took the throne in 141 BCE (known posthumously as Emperor Wu), the Grand Empress Dowager Dou continued to dominate the court and did not accept any policy which she found unfavorable or which contradicted Huang-Lao ideology. After her death in 135 BCE, a major shift occurred in Chinese political history.
After Emperor Wu called for the submission of memorial essays on how to improve the government, he favored that of the official Dong Zhongshu (179–104 BCE), a philosopher whom Kramers calls the first Confucian "theologian". Dong's synthesis fused together the ethical ideas of Confucius with the cosmological beliefs in yin and yang and the Five Elements or Wuxing by fitting them into the same holistic, universal system which governed heaven, earth, and the world of man. Moreover, it justified the imperial system of government by assigning it a place within the greater cosmos. Reflecting the ideas of Dong Zhongshu, Emperor Wu issued an edict in 136 BCE that abolished academic chairs other than those focused on the Confucian Five Classics. In 124 BCE Emperor Wu established the Imperial University, at which the academicians taught 50 students; this was the modest beginning of the civil service examination system refined in later dynasties. Although sons and relatives of officials were often privileged with nominations to office, those who did not come from a family of officials were not barred from entry into the bureaucracy. Rather, education in the Five Classics became the paramount prerequisite for gaining office; as a result, the Imperial University was expanded dramatically by the 2nd century CE, when it accommodated 30,000 students. With Cai Lun's (d. 121 CE) invention of the papermaking process in 105 CE, the spread of paper as a cheap writing medium from the Eastern Han period onwards increased the supply of books and hence the number of those who could be educated for civil service.
War against the Xiongnu
The death of Empress Dou also marked a significant shift in foreign policy. In order to address the Xiongnu threat and the question of renewing the heqin agreement, Emperor Wu called a court conference into session in 135 BCE where two factions of leading ministers debated the merits and faults of the current policy; Emperor Wu followed the majority consensus of his ministers that peace should be maintained. A year later, while the Xiongnu were busy raiding the northern border and waiting for Han's response, Wu had another court conference assembled. The faction supporting war against the Xiongnu was able to sway the majority opinion by making a compromise for those worried about stretching financial resources on an indefinite campaign: in a limited engagement along the border near Mayi, Han forces would lure Junchen Chanyu over with gifts and promises of defections in order to quickly eliminate him and cause political chaos for the Xiongnu. When the Mayi trap failed in 133 BCE (Junchen Chanyu realized he was about to fall into a trap and fled back north), the era of heqin-style appeasement was over and the Han court resolved to engage in full-scale war.
Leading campaigns involving tens of thousands of troops, in 127 BCE the Han general Wei Qing (d. 106 BCE) recaptured the Ordos Desert region from the Xiongnu and in 121 BCE Huo Qubing (d. 117 BCE) expelled them from the Qilian Mountains, gaining the surrender of many Xiongnu aristocrats. At the Battle of Mobei in 119 BCE, generals Wei and Huo led the campaign to the Khangai Mountains where they forced the chanyu to flee north of the Gobi Desert. The maintenance of 300,000 horses by government slaves in thirty-six different pasture lands was not enough to satisfy the cavalry and baggage trains needed for these campaigns, so the government offered exemption from military and corvée labor for up to three male members of each household who presented a privately bred horse to the government.
Expansion and colonization
After the Xiongnu King Hunye surrendered to Huo Qubing in 121 BCE, the Han acquired a territory stretching from the Hexi Corridor to Lop Nur, thus cutting the Xiongnu off from their Qiang allies. New commanderies were established in the Ordos as well as four in the Hexi Corridor—Jiuquan, Zhangye, Dunhuang, and Wuwei—which were populated with Han settlers after a major Qiang-Xiongnu allied force was repelled from the region in 111 BCE. By 119 BCE, Han forces established their first garrison outposts in the Juyan Lake Basin of Inner Mongolia, with larger settlements built there after 110 BCE. Roughly 40% of the settlers at Juyan came from the Guandong region of modern Henan, western Shandong, southern Shanxi, southern Hebei, northwestern Jiangsu, and northwestern Anhui. After Hunye's surrender, the Han court moved 725,000 people from the Guandong region to populate the Xinqinzhong (新秦中) region south of the bend of the Yellow River. In all, Emperor Wu's forces conquered roughly 4.4 million km2 (1.7 million mi2) of new land, by far the largest territorial expansion in Chinese history. Self-sustaining agricultural garrisons were established in these frontier outposts to support military campaigns as well as secure trade routes leading into Central Asia, the eastern terminus of the Silk Road. The Han-era Great Wall was extended as far west as Dunhuang, and sections of it still stand today in Gansu, including thirty Han beacon towers and two fortified castles.
Exploration, foreign trade, war, and diplomacy
Starting in 139 BCE, the Han diplomat Zhang Qian traveled west in an unsuccessful attempt to secure an alliance with the Da Yuezhi (who were evicted from Gansu by the Xiongnu in 177 BCE); however, Zhang's travels revealed entire countries which the Chinese were unaware of, the remnants of the conquests of Alexander the Great (r. 336–323 BCE). When Zhang returned to China in 125 BCE, he reported on his visits to Dayuan (Fergana), Kangju (Sogdiana), and Daxia (Bactria, formerly the Greco-Bactrian Kingdom which was subjugated by the Da Yuezhi). Zhang described Dayuan and Daxia as agricultural and urban countries like China, and although he did not venture there, described Shendu (the Indus River valley of northern India) and Anxi (Arsacid territories) further west. Envoys sent to these states returned with foreign delegations and lucrative trade caravans; yet even before this, Zhang noted that these countries were importing Chinese silk. After interrogating merchants, Zhang also discovered a southwestern trade route leading through Burma and on to India. The earliest known Roman glassware found in China (but manufactured in the Roman Empire) is a glass bowl found in a Guangzhou tomb dating to the early 1st century BCE and perhaps came from a maritime route passing through the South China Sea. Likewise, imported Chinese silk attire became popular in the Roman Republic by the time of Julius Caesar (100–44 BCE).
After the heqin agreement broke down, the Xiongnu were forced to extract more crafts and agricultural foodstuffs from the subjugated Tarim Basin urban centers. From 115–60 BCE the Han and Xiongnu battled for control and influence over these states, with the Han gaining the tributary submission of Loulan, Turpan, Bügür, Dayuan (Fergana), and Kangju (Sogdiana) between 108 and 101 BCE. The farthest-reaching and most expensive invasion was Li Guangli's four-year campaign against Fergana in the Syr Darya and Amu Darya valleys (modern Uzbekistan and Kyrgyzstan). Historian Laszlo Torday (1997) asserts that Fergana threatened to cut off Han's access to the Silk Road, yet historian Sima Qian (d. 86 BCE) downplayed this threat by asserting that Li's mission was really a means to punish Dayuan for not providing tribute of prized Central Asian stallions.
To the south, Emperor Wu assisted King Zhao Mo in fending off an attack by Minyue (in modern Fujian) in 135 BCE. After a pro-Han faction was overthrown at the court of Nanyue, Han naval forces conquered Nanyue in 111 BCE during the Han–Nanyue War, bringing areas of modern Guangdong, Guangxi, Hainan Island, and northern Vietnam under Han control. Emperor Wu also launched an invasion into the Dian Kingdom of Yunnan in 109 BCE, subjugating its king as a tributary vassal, while later Dian rebellions in 86 BCE and 83 BCE, 14 CE (during Wang Mang's rule), and 42–45 CE were quelled by Han forces. Wu sent an expedition into what is now North Korea in 128 BCE, but this was abandoned two years later. In 108 BCE, another expedition against Gojoseon in northern Korea established four commanderies there, only two of which (i.e. Xuantu Commandery and Lelang Commandery) remained after 82 BCE. Although there was some violent resistance in 108 BCE and irregular raids by Goguryeo and Buyeo afterwards, Chinese settlers conducted peaceful trade relations with native Koreans who lived largely independent of (but were culturally influenced by) the sparse Han settlements.
To fund his prolonged military campaigns and colonization efforts, Emperor Wu turned away from the "nonaction" policy of earlier reigns by having the central government commandeer the private industries and trades of salt mining and iron manufacturing by 117 BCE. Another government monopoly, over liquor, was established in 98 BCE, but the majority consensus at a court conference in 81 BCE led to its abolition. The mathematician and official Sang Hongyang (d. 80 BCE), who later became Imperial Counselor and was one of many former merchants drafted into the government to help administer these monopolies, was responsible for the 'equable transportation' system that eliminated price variation over time from place to place. This was a government means of interfering in the profitable grain trade by eliminating speculation, since the government stocked up on grain when it was cheap and sold it to the public at a low price when private merchants demanded higher ones. These measures, along with the monopolies, were criticized even during Wu's reign for cutting into merchants' profits and forcing farmers to rely on poor-quality government-made goods and services; neither the monopolies nor equable transportation lasted into the Eastern Han era (25–220 CE).
During Emperor Wu's reign, the poll tax for each minor aged three to fourteen was raised from 20 to 23 coins; the rate for adults remained at 120. New taxes exacted on market transactions, wheeled vehicles, and properties were meant to bolster the growing military budget. In 119 BCE a new bronze coin weighing five shu (3.2 g/0.11 oz)—replacing the four shu coin—was issued by the government (remaining the standard coin of China until the Tang dynasty), followed by a ban on private minting in 113 BCE. Earlier attempts to ban private minting took place in 186 and 144 BCE, but Wu's monopoly over the issue of coinage remained in place throughout the Han (although its stewardship changed hands between different government agencies). From 118 BCE to 5 CE, the Han government minted 28 billion coins, an average of 220 million a year.
Latter half of Western Han
Regency of Huo Guang
Emperor Wu's first wife, Empress Chen Jiao, was deposed in 130 BCE after allegations that she attempted witchcraft to help her produce a male heir. In 91 BCE, similar allegations were made against Emperor Wu's Crown Prince Liu Ju, the son of his second wife Empress Wei Zifu. Fearing that Emperor Wu would believe the false allegations, Liu Ju began a rebellion in Chang'an which lasted for five days, while Emperor Wu was away at his quiet summer retreat of Ganquan (甘泉; in modern Shaanxi). After Liu Ju's defeat, he and his mother committed suicide.
Eventually, due to his good reputation, Huo Qubing's half-brother Huo Guang was entrusted by Wu to form a triumvirate regency alongside the ethnically Xiongnu Jin Midi (d. 86 BCE) and Shangguan Jie (d. 80 BCE) over the court of his successor, the child Liu Fuling, known posthumously as Emperor Zhao of Han (r. 87–74 BCE). Jin Midi died a year later, and by 80 BCE Shangguan Jie and Imperial Counselor Sang Hongyang were executed when they were accused of supporting Emperor Zhao's older brother Liu Dan (劉旦), the King of Yan, as emperor; this gave Huo unrivaled power. However, he did not abuse his power in the eyes of the Confucian establishment and gained popularity for reducing Emperor Wu's taxes.
Emperor Zhao died in 74 BCE without a successor, while the one chosen to replace him on 18 July, his nephew Prince He of Changyi, was removed on 14 August after displaying a lack of character or capacity to rule. Prince He's removal was secured with a memorial signed by all the leading ministers and submitted to Empress Dowager Shangguan for approval. Liu Bingyi (Liu Ju's grandson) was named Emperor Xuan of Han (r. 74–49 BCE) on 10 September. Huo Guang remained in power as regent over Emperor Xuan until he died of natural causes in 68 BCE. Yet in 66 BCE the Huo clan was charged with conspiracy against the throne and eliminated. This was the culmination of Emperor Xuan's revenge after Huo Guang's wife had poisoned his beloved Empress Xu Pingjun in 71 BCE only to have her replaced by Huo Guang's daughter Empress Huo Chengjun (the latter was deposed in September 66 BCE). Liu Shi, son of Empress Xu, succeeded his father as Emperor Yuan of Han (r. 49–33 BCE).
Reforms and frugality
During Emperor Wu's reign and Huo Guang's regency, the dominant political faction was the Modernist Party. This party favored greater government intervention in the private economy with government monopolies over salt and iron, higher taxes exacted on private business, and price controls which were used to fund an aggressive foreign policy of territorial expansion; they also followed the Qin dynasty approach to discipline by meting out more punishments for faults and fewer rewards for service. After Huo Guang's regency, the Reformist Party gained more leverage over state affairs and policy decisions. This party favored the abolition of government monopolies, limited government intervention in the private economy, a moderate foreign policy, limited colonization efforts, frugal budget reform, and a return to the Zhou dynasty ideal of granting more rewards for service to display the dynasty's magnanimity. This party's influence can be seen in the abolition of the central government's salt and iron monopolies in 44 BCE, yet these were reinstated in 41 BCE, only to be abolished again during the 1st century CE and transferred to local administrations and private entrepreneurship. By 66 BCE the Reformists had cancelled many of the lavish spectacles, games, and entertainments that Emperor Wu had instituted to impress foreign dignitaries, on the grounds that they were excessive and ostentatious.
Spurred by alleged signs from Heaven warning the ruler of his incompetence, a total of eighteen general amnesties were granted during the combined reigns of Emperor Yuan (Liu Shi) and Emperor Cheng of Han (r. 33–7 BCE, Liu Ao 劉驁). Emperor Yuan reduced the severity of punishment for several crimes, while Cheng reduced the length of judicial procedures in 34 BCE since they were disrupting the lives of commoners. While the Modernists had accepted sums of cash from criminals to have their sentences commuted or even dropped, the Reformists reversed this policy since it favored the wealthy over the poor and was not an effective deterrent against crime.
Emperor Cheng made major reforms to state-sponsored religion. The Qin dynasty had worshipped four main legendary deities, with another added by Emperor Gaozu in 205 BCE; these were the Five Powers, or Wudi. In 31 BCE Emperor Cheng, in an effort to gain Heaven's favor and be blessed with a male heir, halted all ceremonies dedicated to the Five Powers and replaced them with ceremonies for the supreme god Shangdi, whom the kings of Zhou had worshipped.
Foreign relations and war
The first half of the 1st century BCE witnessed several succession crises for the Xiongnu leadership, allowing Han to further cement its control over the Western Regions. The Han general Fu Jiezi assassinated the pro-Xiongnu King of Loulan in 77 BCE. The Han formed a coalition with the Wusun, Dingling, and Wuhuan, and the coalition forces inflicted a major defeat against the Xiongnu in 72 BCE. The Han regained its influence over the Turpan Depression after defeating the Xiongnu at the Battle of Jushi in 67 BCE. In 65 BCE Han was able to install a new King of Kucha (a state north of the Taklamakan Desert) who would be agreeable to Han interests in the region. The office of the Protectorate of the Western Regions, first given to Zheng Ji (d. 49 BCE), was established in 60 BCE to supervise colonial activities and conduct relations with the small kingdoms of the Tarim Basin.
After Zhizhi Chanyu (r. 56–36 BCE) had inflicted a serious defeat against his rival brother and royal contender Huhanye Chanyu (呼韓邪) (r. 58–31 BCE), Huhanye and his supporters debated whether to request Han aid and become a Han vassal. He decided to do so in 52 BCE. Huhanye sent his son as a hostage to Han and personally paid homage to Emperor Xuan during the 51 BCE Chinese New Year celebration. Under the advocacy of the Reformists, Huhanye was seated as a distinguished guest of honor and given rich rewards: 5 kg (160 oz t) of gold, 200,000 cash coins, 77 suits of clothes, 8,000 bales of silk fabric, 1,500 kg (3,300 lb) of silk floss, and 15 horses, in addition to 680,000 L (19,300 U.S. bu) of grain sent to him when he returned home.
Huhanye Chanyu and his successors were encouraged to pay further visits of homage to the Han court by the increasing amount of gifts showered on them after each visit; this was a cause for complaint by some ministers in 3 BCE, yet the financial cost of pampering a vassal was deemed preferable to the old heqin arrangement. Zhizhi Chanyu initially attempted to send hostages and tribute to the Han court in hopes of ending Han support for Huhanye, but eventually turned against Han. Subsequently, the Han general Chen Tang and Protector General Gan Yanshou (甘延壽/甘延寿), acting without explicit permission from the Han court, killed Zhizhi at his capital of Shanyu City (in modern Taraz, Kazakhstan) in 36 BCE. The Reformist Han court, reluctant to reward independent missions, let alone foreign interventionism, gave Chen and Gan only modest rewards. Although the Han continued to show Huhanye favor, he was not given a Han princess; instead, he was given the Lady Wang Zhaojun, one of the Four Beauties of ancient China. This marked a departure from the earlier heqin agreement, under which a Chinese princess was handed over to the Chanyu as his bride.
Wang Mang's usurpation
Wang Mang seizes control
The long life of Empress Wang Zhengjun (71 BCE – 13 CE), wife of Emperor Yuan and mother to Emperor Cheng, ensured that her male relatives would be appointed one after another to the role of regent, officially known as Commander-in-Chief. Emperor Cheng, who was more interested in cockfighting and chasing after beautiful women than administering the empire, left much of the affairs of state to his relatives of the Wang clan. On 28 November 8 BCE Wang Mang (45 BCE – 23 CE), a nephew of Empress Dowager Wang, became the new Commander-in-Chief. However, when Emperor Ai of Han (r. 7–1 BCE, Liu Xin) took the throne, his grandmother Consort Fu (Emperor Yuan's concubine) became the leading figure in the palace and forced Wang Mang to resign on 27 August 7 BCE, followed by his forced departure from the capital to his marquessate in 5 BCE.
Due to pressure from Wang's supporters, Emperor Ai invited Wang Mang back to the capital in 2 BCE. A year later Emperor Ai died of illness without a son. Wang Mang was reinstated as regent over Emperor Ping of Han (r. 1 BCE – 6 CE, Liu Jizi), a first cousin of the former emperor. Although Wang had married his daughter to Emperor Ping, the latter was still a child when he died in 6 CE. In July of that year, Grand Empress Dowager Wang confirmed Wang Mang as acting emperor (jiahuangdi 假皇帝) and the child Liu Ying as his heir to succeed him, despite the fact that a Liu family marquess had revolted against Wang a month earlier, followed by others who were outraged that he was assuming greater power than the imperial Liu family. These rebellions were quelled and Wang Mang promised to hand over power to Liu Ying when he reached his majority. Despite promises to relinquish power, Wang initiated a propaganda campaign to show that Heaven was sending signals that it was time for Han's rule to end. On 10 January 9 CE he announced that Han had run its course and accepted the requests that he proclaim himself emperor of the Xin dynasty (9–23 CE).
Wang Mang had a grand vision to restore China to a fabled golden age achieved in the early Zhou dynasty, the era which Confucius had idealized. He attempted sweeping reforms, including the outlawing of slavery and institution of the King's Fields system in 9 CE, nationalizing land ownership and allotting a standard amount of land to each family. Slavery was reestablished and the land reform regime was cancelled in 12 CE due to widespread protest.
The historian Ban Gu (32–92 CE) wrote that Wang's reforms led to his downfall, yet the modern historian Hans Bielenstein points out that, aside from the slavery and land reforms, most of Wang's measures were in line with earlier Han policies. Although his new denominations of currency introduced in 7 CE, 9 CE, 10 CE, and 14 CE debased the value of coinage, earlier introductions of lighter-weight currencies had likewise caused economic damage. Wang renamed all the commanderies of the empire as well as bureaucratic titles, yet there were precedents for this too. The government monopolies were rescinded in 22 CE because they could no longer be enforced during a large-scale rebellion against him (spurred by massive flooding of the Yellow River).
Foreign relations under Wang
The half-Chinese, half-Xiongnu noble Yituzhiyashi (伊屠智牙師), son of Huhanye Chanyu and Wang Zhaojun, became a vocal partisan for Han China within the Xiongnu realm; Bielenstein claims that this led conservative Xiongnu nobles to anticipate a break in the alliance with Han. The moment came when Wang Mang assumed the throne and demoted the Chanyu to a lesser rank; this became a pretext for war. During the winter of 10–11 CE, Wang amassed 300,000 troops along the northern border of Han China, a show of force which led the Xiongnu to back down. Yet when raiding continued, Wang Mang had the princely Xiongnu hostage held by Han authorities executed. Diplomatic relations were repaired when Xian (咸) (r. 13–18 CE) became the chanyu, only to be soured again when Huduershi Chanyu (呼都而尸) (r. 18–46 CE) took the throne and raided Han's borders in 19 CE.
The Tarim Basin kingdom of Yanqi (Karasahr, located east of Kucha, west of Turpan) rebelled against Xin authority in 13 CE, killing Han's Protector General Dan Qin (但欽). Wang Mang sent a force to retaliate against Karasahr in 16 CE, quelling their resistance and ensuring that the region would remain under Chinese control until the widespread rebellion against Wang Mang toppled his rule in 23 CE. Wang also extended Chinese influence over Tibetan tribes in the Kokonor region and fended off an attack in 12 CE by Goguryeo (an early Korean state located around the Yalu River) in the Korean peninsula. However, as the widespread rebellion in China mounted from 20–23 CE, the Koreans raided Lelang Commandery and Han did not reassert itself in the region until 30 CE.
Restoration of the Han
Natural disaster and civil war
Before 3 CE, the course of the Yellow River had emptied into the Bohai Sea at Tianjin, but the gradual buildup of silt in its riverbed—which raised the water level each year—overpowered the dikes built to prevent flooding and the river split in two, with one arm flowing south of the Shandong Peninsula and into the East China Sea. A second flood in 11 CE changed the course of the northern branch of the river so that it emptied slightly north of the Shandong Peninsula, yet far south of Tianjin. With much of the southern North China Plain inundated following the creation of the Yellow River's southern branch, thousands of starving peasants who were displaced from their homes formed groups of bandits and rebels, most notably the Red Eyebrows. Wang Mang's armies tried to quell these rebellions in 18 and 22 CE but failed.
Liu Yan (d. 23 CE), a descendant of Emperor Jing, led a coalition of rebelling gentry groups from Nanyang who had Yan's third cousin Liu Xuan (劉玄) accept the title Gengshi Emperor (r. 23–25) on 11 March 23 CE. Liu Xiu, a brother of Liu Yan and the future Emperor Guangwu of Han (r. 25–57 CE), distinguished himself at the Battle of Kunyang on 7 July 23 CE when he relieved a city besieged by Wang Mang's forces and turned the tide of the war. Soon afterwards, Gengshi Emperor had Liu Yan executed on grounds of treason; Liu Xiu, fearing for his life, resigned from office as Minister of Ceremonies and avoided public mourning for his brother, and for this the emperor gave Liu Xiu a marquessate and a promotion to general.
Gengshi's forces then targeted Chang'an, but a local insurgency broke out in the capital, sacking the city on 4 October. From 4–6 October Wang Mang made a last stand at the Weiyang Palace, only to be killed and decapitated; his head was sent to Gengshi's headquarters at Wan (i.e. Nanyang) before Gengshi's armies even reached Chang'an on 9 October. Gengshi Emperor settled on Luoyang as his new capital, where he invited the Red Eyebrows leader Fan Chong (樊崇) to stay; yet Gengshi granted him only honorary titles, so Fan decided to flee once his men began to desert him. Gengshi moved the capital back to Chang'an in 24 CE, yet in the following year the Red Eyebrows defeated his forces, appointed their own puppet ruler Liu Penzi, entered Chang'an, and captured the fleeing Gengshi, whom they demoted to King of Changsha before killing him.
Reconsolidation under Guangwu
While acting as a commissioner under Gengshi Emperor, Liu Xiu gathered a significant following after putting down a local rebellion (in what is now Hebei province). He claimed the Han throne himself on 5 August 25 CE and occupied Luoyang as his capital on 27 November. Before he eventually unified the empire, 11 others also claimed the title of emperor. With the efforts of his officers Deng Yu and Feng Yi, Guangwu forced the wandering Red Eyebrows to surrender on 15 March 27 CE, resettling them at Luoyang, yet had their leader Fan Chong executed when a plot of rebellion was revealed.
From 26–30 CE, Guangwu defeated various warlords and conquered the Central Plain and Shandong Peninsula in the east. Allying with the warlord Dou Rong (竇融) of the distant Hexi Corridor in 29 CE, Guangwu nearly defeated the Gansu warlord Wei Xiao (隗囂/隗嚣) in 32 CE, seizing Wei's domain in 33 CE. The last adversary standing was Gongsun Shu, whose "Chengjia" regime was based at Chengdu in modern Sichuan. Although Guangwu's forces successfully burned down Gongsun's fortified pontoon bridge stretching across the Yangzi River, Guangwu's commanding general Cen Peng (岑彭) was killed in 35 CE by an assassin sent by Gongsun Shu. Nevertheless, Han General Wu Han (d. 44 CE) resumed Cen's campaign along the Yangzi and Min rivers and destroyed Gongsun's forces by December 36 CE.
Since Chang'an is located west of Luoyang, the names Western Han (202 BCE – 9 CE) and Eastern Han (25–220 CE) are accepted by historians. Luoyang's 10 m (32 ft) tall eastern, western, and northern walls still stand today, although the southern wall was destroyed when the Luo River changed its course. Within its walls it had two prominent palaces, both of which existed during Western Han, but were expanded by Guangwu and his successors. While Eastern Han Luoyang is estimated to have held roughly 500,000 inhabitants, the first known census data for the whole of China, dated 2 CE, recorded a population of nearly 58 million. Comparing this to the census of 140 CE (when the total population was registered at roughly 48 million), there was a significant migratory shift of up to 10 million people from northern to southern China during Eastern Han, largely because of natural disasters and wars with nomadic groups in the north. Population size fluctuated according to periodically updated Eastern-Han censuses, but historian Sadao Nishijima notes that this does not reflect a dramatic loss of life, but rather government inability at times to register the entire populace.
Policies under Guangwu, Ming, Zhang, and He
Scrapping Wang Mang's denominations of currency, Emperor Guangwu reintroduced Western Han's standard five shu coin in 40 CE. To make up for revenue lost after the salt and iron monopolies were canceled, the government heavily taxed private manufacturers and purchased its armies' swords and shields from private businesses. In 31 CE he allowed peasants to pay a military substitution tax to avoid conscription into the armed forces for a year of training and a year of service; instead he built a volunteer force which lasted throughout Eastern Han. He also allowed peasants to avoid the one-month corvée duty with a commutable tax as hired labor became more popular. Wang Mang had demoted all Han marquesses to commoner status, yet Guangwu made an effort from 27 CE onwards to find their relatives and restore abolished marquessates.
Emperor Ming of Han (r. 57–75 CE, Liu Yang) reestablished the Office for Price Adjustment and Stabilization and the price stabilization system, whereby the government bought grain when it was cheap and sold it to the public when private commercial prices were high due to limited stocks. However, he canceled the price stabilization scheme in 68 CE when he became convinced that government hoarding of grain only made wealthy merchants even richer. With the renewed economic prosperity brought about by his father's reign, Emperor Ming addressed the flooding of the Yellow River by repairing various dams and canals. On 8 April 70 CE, an edict boasted that the southern branch of the Yellow River emptying south of the Shandong Peninsula had finally been cut off by Han engineering. A patron of scholarship, Emperor Ming also established a school for young nobles separate from the Imperial University.
Emperor Zhang of Han (r. 75–88 CE, Liu Da) faced an agrarian crisis when a cattle epidemic broke out in 76 CE. In addition to providing disaster relief, Zhang also made reforms to legal procedures and lightened existing punishments involving the bastinado, since he believed that this would restore the seasonal balance of yin and yang and cure the epidemic. To further display his benevolence, in 78 CE he ceased the corvée work on canal works of the Hutuo River running through the Taihang Mountains, believing it was causing too much hardship for the people; in 85 CE he granted a three-year poll tax exemption for any woman who gave birth and exempted their husbands for a year. Unlike other Eastern Han rulers who sponsored the New Texts tradition of the Confucian Five Classics, Zhang was a patron of the Old Texts tradition and held scholarly debates on the validity of the schools. Rafe de Crespigny writes that the major reform of the Eastern Han period was Zhang's reintroduction in 85 CE of an amended Sifen calendar, replacing Emperor Wu's Taichu calendar of 104 BCE, which had become inaccurate over two centuries (the former measured the tropical year at 365.25 days like the Julian calendar, while the latter measured the tropical year at 365 385/1539 days and the lunar month at 29 43/81 days).
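As a brief arithmetic aside (an editorial illustration of the figures above, not a calculation drawn from de Crespigny), converting the traditional fractions to decimals shows how the two calendars compare, and why both drift against the modern tropical year of roughly 365.2422 days:

\[
365\tfrac{1}{4} = 365.25 \text{ days (Sifen year)}, \qquad
365\tfrac{385}{1539} \approx 365.2502 \text{ days (Taichu year)}, \qquad
29\tfrac{43}{81} \approx 29.5309 \text{ days (Taichu lunar month)}.
\]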
Emperor He of Han (r. 88–105 CE, Liu Zhao) was tolerant of both New Text and Old Text traditions, though orthodox studies were in decline and works skeptical of New Texts, such as Wang Chong's (27 – c. 100 CE) Lunheng, disillusioned the scholarly community with that tradition. He also showed an interest in history when he commissioned the Lady Ban Zhao (45–116 CE) to use the imperial archives in order to complete the Book of Han, the work of her deceased father and brother. This set an important precedent of imperial control over the recording of history and thus was unlike Sima Qian's far more independent work, the Records of the Grand Historian (109–91 BCE). When plagues of locusts, floods, and earthquakes disrupted the lives of commoners, Emperor He's relief policies were to cut taxes, open granaries, provide government loans, forgive private debts, and resettle people away from disaster areas. Believing that a severe drought in 94 CE was the cosmological result of injustice in the legal system, Emperor He personally inspected prisons. When he found that some had false charges levelled against them, he sent the Prefect of Luoyang to prison; rain allegedly came soon afterwards.
Foreign relations and split of the Xiongnu realm
The Vietnamese Trưng Sisters led an uprising in the Red River Delta of Jiaozhi Commandery in 40 CE. Guangwu sent the elderly general Ma Yuan (c. 14 BCE – 49 CE), who defeated them in 42–43 CE. The sisters' native Dong Son drums were melted down and recast into a large bronze horse statue presented to Guangwu at Luoyang.
Meanwhile, Huduershi Chanyu was succeeded by his son Punu (蒲奴) in 46 CE, thus breaking Huhanye's orders that only a Xiongnu ruler's brother was a valid successor; Huduershi's nephew Bi (比) was outraged and in 48 CE was proclaimed a rival Chanyu. This split created the Northern Xiongnu and Southern Xiongnu, and like Huhanye before him, Bi turned to the Han for aid in 50 CE. When Bi came to pay homage to the Han court, he was given 10,000 bales of silk fabrics, 2,500 kg (5,500 lb) of silk, 500,000 L (14,000 U.S. bu) of rice, and 36,000 head of cattle. Unlike in Huhanye's time, however, the Southern Xiongnu were overseen by a Han Prefect who not only acted as an arbiter in Xiongnu legal cases, but also monitored the movements of the Chanyu and his followers who were settled in Han's northern commanderies in Shanxi, Gansu, and Inner Mongolia. Northern Xiongnu attempts to enter Han's tributary system were rejected.
Following Xin's loss of the Western Regions, the Kingdom of Yarkand looked after the Chinese officials and families stranded in the Tarim Basin and fought the Xiongnu for control over it. Emperor Guangwu, preoccupied with civil wars in China, simply granted King Kang of Yarkand an official title in 29 CE and in 41 CE made his successor King Xian a Protector General (later reduced to the honorary title of "Great General of Han"). Yarkand overtaxed its subjects of Khotan, Turpan, Kucha, and Karasahr, all of which decided to ally with the Northern Xiongnu. By 61 CE Khotan had conquered Yarkand, yet this led to a war among the kingdoms to decide which would be the next hegemon. The Northern Xiongnu took advantage of the infighting, conquered the Tarim Basin, and used it as a base to stage raids into Han's Hexi Corridor by 63 CE. In that year, the Han court opened border markets for trade with the Northern Xiongnu in hopes of appeasing them.
Yet Han sought to reconquer the Tarim Basin. At the Battle of Yiwulu in 73 CE, Dou Gu (d. 88 CE) reached as far as Lake Barkol when he defeated a Northern Xiongnu chanyu and established an agricultural garrison at Hami. Although Dou Gu was able to evict the Xiongnu from Turpan in 74 CE, when the Han appointed Chen Mu (d. 75 CE) as the new Protector General of the Western Regions, the Northern Xiongnu invaded the Bogda Mountains while their allies Karasahr and Kucha killed Chen Mu and his troops. The Han garrison at Hami was forced to withdraw in 77 CE (and was not reestablished until 91 CE). The next Han expedition against the Northern Xiongnu was led in 89 CE by Dou Xian (d. 92 CE); at the Battle of Ikh Bayan, Dou's forces chased the Northern Chanyu into the Altai Mountains, allegedly killing 13,000 Xiongnu and accepting the surrender of 200,000 Xiongnu from 81 tribes.
After Dou sent 2,000 cavalry to attack the Northern Xiongnu base at Hami, the initiative passed to the general Ban Chao (d. 102 CE), who had earlier installed a new king of Kashgar as a Han ally. When this king turned against him and enlisted the aid of Sogdiana in 84 CE, Ban Chao arranged an alliance with the Kushan Empire (of modern North India, Pakistan, Afghanistan, and Tajikistan), which put political pressure on Sogdiana to back down; Ban later assassinated King Zhong of Kashgar. Since Kushan provided aid to Ban Chao in quelling Turpan and sent tribute and hostages to Han, its ruler Vima Kadphises (r. c. 90 – c. 100 CE) requested a Chinese princess bride; when this was rejected in 90 CE, Kushan marched 70,000 troops to Wakhan against Ban Chao. Ban used scorched-earth tactics against Kushan, forcing them to request food supplies from Kucha. When Kushan messengers were intercepted by Ban, Kushan was forced to withdraw. In 91 CE, Ban was appointed Protector General of the Western Regions, an office he held until 101 CE.
Tributary gifts and emissaries from the Arsacid Empire, then under Pacorus II of Parthia (r. 78–105 CE), came to the Han in 87 CE, 89 CE, and 101 CE, bringing exotic animals such as ostriches and lions. When Ban Chao dispatched his emissary Gan Ying in 97 CE to reach Daqin (the Roman Empire), he got no farther than a "large sea", perhaps the Persian Gulf. However, from oral accounts Gan was able to describe Rome as having hundreds of walled cities, a postal delivery network, the submission of dependent states, and a system of government where the Roman "king" (i.e. consul) is "not a permanent figure but is chosen as the man most worthy." Elephants and rhinoceroses were also presented as gifts to the Han court in 94 CE and 97 CE by a king in what is now Burma. The first known diplomatic mission from a ruler in Japan came in 57 CE (followed by another in 107 CE); a gold seal granted by Emperor Guangwu was even discovered in 1784 in Chikuzen Province. The first mention of Buddhism in China dates to 65 CE, when the Chinese clearly associated it with Huang-Lao Daoism. Emperor Ming had the first Buddhist temple of China—the White Horse Temple—built at Luoyang in honor of two foreign monks: Jiashemoteng (Kāśyapa Mātanga) and Zhu Falan (Dharmaratna the Indian). These monks allegedly translated the Sutra of Forty-two Chapters from Sanskrit into Chinese, although it is now generally held that this text was not translated into Chinese until the 2nd century CE.
Court, kinsmen, and consort clans
Apart from Guangwu's divorce of Guo Shengtong in 41 CE so that he could install his original wife Yin Lihua as empress, there was little drama with imperial kinsmen at his court: Empress Guo was made a queen dowager and her son, the former heir apparent, was demoted to the status of a king. However, trouble with imperial kinsmen turned violent during Ming's reign. In addition to exiling his half-brother Liu Ying (d. 71 CE by suicide) after Liu Ying allegedly used witchcraft to curse him, Emperor Ming also targeted hundreds of others with similar charges of using occult omens and witchcraft, resulting in exile, torture to extract confessions, and execution. This persecution did not end until Emperor Zhang, who was for the most part generous towards his brothers, took the throne and recalled to the capital many whom Ming had exiled.
Of greater consequence for the dynasty, however, was Emperor He's coup of 92 CE, in which eunuchs for the first time became significantly involved in the court politics of Eastern Han. Emperor Zhang had upheld a good relationship with his titular mother and Ming's widow, the humble Empress Dowager Ma (d. 79 CE), but Empress Dowager Dou (d. 97 CE), the widow of Emperor Zhang, was overbearing towards Emperor He (son of Emperor Zhang and Consort Liang) in his early reign and, concealing the identity of his natural mother from him, raised He as her own after purging the Liang family from power. In order to put He on the throne, Empress Dowager Dou had even demoted the crown prince Liu Qing (78–106 CE) to the status of a king and forced his mother, Consort Song (d. 82 CE), to commit suicide. Unwilling to yield his power to the Dou clan any longer, Emperor He enlisted the aid of palace eunuchs led by Zheng Zhong (d. 107 CE) to overthrow the Dou clan on charges of treason, stripping them of titles, exiling them, forcing many to commit suicide, and placing the Empress Dowager under house arrest.
Middle age of Eastern Han
Empress Deng Sui, consort families, and eunuchs
Empress Deng Sui (d. 121 CE), widow of Emperor He, became empress dowager in 105 CE and thus had the final say in appointing He's successor (since he had appointed none); she placed his infant son Liu Long on the throne, later known as Emperor Shang of Han (r. 105–106). When the latter died at the age of one, she placed his young nephew Liu Hu (Liu Qing's son) on the throne, known posthumously as Emperor An of Han (r. 106–125 CE), bypassing Emperor He's other son Liu Sheng (劉勝). With a young ruler on the throne, Empress Deng was the de facto ruler until her death, since her brother Deng Zhi's (鄧騭) brief tenure as General-in-Chief (大將軍) from 109 to 110 CE did not in fact make him the ruling regent. With her death on 17 April 121 CE, Emperor An accepted the charge of the eunuchs Li Run (李閏) and Jiang Jing (江京) that she had plotted to overthrow him; on 3 June he charged the Deng clan with treason and had its members dismissed from office, stripped of titles, reduced to commoner status, and exiled to remote areas, driving many to commit suicide.
The Yan clan of Empress Yan Ji (d. 126 CE), wife of Emperor An, and the eunuchs Jiang Jing and Fan Feng (樊豐) pressured Emperor An to demote his nine-year-old heir apparent Liu Bao to the status of a king on 5 October 124 CE on charges of conspiracy, despite protests from senior government officials. When Emperor An died on 30 April 125 CE, the Empress Dowager Yan was free to choose his successor, Liu Yi (grandson of Emperor Zhang), who is known as Emperor Shao of Han. After the child died suddenly in 125 CE, the eunuch Sun Cheng (d. 132 CE) staged a palace coup, slaughtered the opposing eunuchs, and placed Liu Bao on the throne, later to be known as Emperor Shun of Han (r. 125–144 CE); Sun then put Empress Dowager Yan under house arrest, had her brothers killed, and had the rest of her family exiled to Vietnam.
Emperor Shun had no sons with Empress Liang Na (d. 150 CE), yet when his son Liu Bing briefly took the throne in 145 CE, the boy's mother, Consort Yu, was in no position to challenge Empress Dowager Liang. After the child Emperor Zhi of Han (r. 145–146 CE) briefly sat on the throne, Empress Dowager Liang and her brother Liang Ji (d. 159 CE), now regent General-in-Chief, decided that Liu Zhi, known posthumously as Emperor Huan of Han (r. 146–168 CE), should take the throne, as he was betrothed to their sister Liang Nüying. When the younger Empress Liang died in 159 CE, Liang Ji attempted to control Emperor Huan's new favorite, Consort Deng Mengnü (later empress; d. 165 CE). When she resisted, Liang Ji had her brother-in-law killed, prompting Emperor Huan to use eunuchs to oust Liang Ji from power; the latter committed suicide when his residence was surrounded by imperial guards. Emperor Huan died with no official heir, so his third wife, Empress Dou Miao (d. 172 CE), now the empress dowager, had Liu Hong, known posthumously as Emperor Ling of Han (r. 168–189 CE), take the throne.
Reforms and policies of middle Eastern Han
To mitigate the damage caused by a series of natural disasters, Empress Dowager Deng's government attempted various relief measures, including tax remissions, donations to the poor, and the immediate shipping of government grain to the hardest-hit areas. Although some water control works were repaired in 115 CE and 116 CE, many government projects became underfunded due to these relief efforts and the armed response to the large-scale Qiang people's rebellion of 107–118 CE. Aware of her financial constraints, the Empress Dowager limited the expenses at banquets, the fodder for imperial horses that were not pulling carriages, and the amount of luxury goods manufactured by the imperial workshops. She approved the sale of some civil offices and even secondary marquess ranks to collect more revenue; the sale of offices was continued by Emperor Huan and became extremely prevalent during Emperor Ling's reign.
Emperor An continued disaster relief programs similar to those Empress Dowager Deng had implemented, though he reversed some of her decisions, such as a 116 CE edict requiring officials to leave office for three years of mourning after the death of a parent (a Confucian ideal). Since this reversal seemed to contradict Confucian morals, Emperor An sponsored renowned scholars in an effort to shore up his popularity among Confucians. Xu Shen (58–147 CE), although an Old Text scholar and thus not aligned with the New Text tradition sponsored by Emperor An, enhanced the emperor's Confucian credentials when he presented his groundbreaking dictionary, the Shuowen Jiezi, to the court.
Financial troubles only worsened in Emperor Shun's reign, as many public works projects had to be handled at the local level without the central government's assistance. Yet his court still managed to supervise major efforts of disaster relief, aided in part by the seismometer invented in 132 CE by the court astronomer Zhang Heng (78–139 CE), which used a vibration-sensitive swinging pendulum, mechanical gears, and falling metal balls to indicate the direction of earthquakes hundreds of kilometers away. Shun's greatest act of scholarly patronage was the repair in 131 CE of the now dilapidated Imperial University, which still served as a pathway into the civil service for young men of the gentry. Officials protested against the enfeoffment of the eunuch Sun Cheng and his associates as marquesses, with further protest in 135 CE when Shun allowed the sons of eunuchs to inherit their fiefs, yet the larger concern was the rising power of the Liang faction.
To abate the unseemly image of placing child emperors on the throne, Liang Ji attempted to paint himself as a populist by granting general amnesties, awarding people noble ranks, reducing the severity of penalties (the bastinado was no longer used), allowing exiled families to return home, and allowing convicts to settle on new land on the frontier. Under his stewardship, the Imperial University was given a formal examination system whereby candidates would take exams on different classics over a period of years in order to gain entrance into public office. Despite these positive reforms, Liang Ji was widely accused of corruption and greed. Yet when Emperor Huan overthrew Liang by using eunuch allies, students of the Imperial University took to the streets in the thousands, chanting the names of the eunuchs they opposed in one of the earliest student protests in history.
After Liang Ji was overthrown, Huan distanced himself from the Confucian establishment and instead sought legitimacy through a revived imperial patronage of Huang-Lao Daoism; this renewed patronage was not continued after his reign. As the economy worsened, Huan built new hunting parks, imperial gardens, and palace buildings, and expanded his harem to house thousands of concubines. The gentry class was alienated by Huan's corrupt, eunuch-dominated government, and many refused nominations to serve in office, since current Confucian beliefs dictated that morality and personal relationships superseded public service. Emperor Ling kept far fewer concubines than Huan, yet he left much of the business of state to his eunuchs. Instead, Ling busied himself play-acting as a traveling salesman with concubines dressed as market vendors, or dressing in military costume as the 'General Supreme' for his parading Army of the Western Garden.
Foreign relations and war of middle Eastern Han
The Eastern Han court colonized and periodically reasserted the Chinese military presence in the Western Regions only as a means to combat the Northern Xiongnu. Han forces were expelled from the Western Regions first by the Xiongnu between 77 and 90 CE and then by the Qiang between 107 and 122 CE. In both of these periods, the financial burdens of reestablishing and expanding the western colonies, as well as the liability of sending financial aid requested by Tarim Basin tributary states, were viewed by the court as reasons to forestall the reopening of foreign relations in the region.
At the beginning of Empress Dowager Deng's regency, the Protector General of the Western Regions, Ren Shang (d. 118 CE), was besieged at Kashgar. Although he was able to break the siege, he was recalled and replaced before the Empress Dowager began to withdraw forces from the Western Regions in 107 CE. A transitional force was still needed, however, and the Qiang people, who had been settled by the Han government in southeastern Gansu since Emperor Jing's reign, were conscripted to assist in the withdrawal. Throughout Eastern Han the Qiang often revolted against Han authority after Han border officials robbed them of goods and even of women and children, and the group of Qiang conscripted to reinforce the Protector General during his withdrawal chose instead to mutiny against him. Their revolt in the northwestern province of Liang (涼州) was put down in 108 CE, but it spurred a greater Qiang rebellion that lasted until 118 CE, cutting off Han's access to Central Asia. The Qiang problem was exacerbated in 109 CE by a combined Southern Xiongnu, Xianbei, and Wuhuan rebellion in the northeast. The total monetary cost of putting down the Qiang rebellion in Liang province was 24 million cash (out of an average of 220 million cash minted annually), while the people of three entire commanderies within eastern Liang province and one commandery within Bing province were temporarily resettled in 110 CE.
Following the general Ban Yong's reopening of relations with the Western Regions in 123 CE, two of the Liang province commanderies were reestablished in 129 CE, only to be withdrawn again a decade later. Even after eastern Liang province (comprising modern southeastern Gansu and Ningxia) was resettled, there was another massive rebellion there in 184 CE, instigated by Han Chinese, Qiang, Xiongnu, and Yuezhi rebels. Yet the Tarim Basin states continued to offer tribute and hostages to China into the final decade of Han, while the agricultural garrison at Hami was only gradually abandoned after 153 CE.
Of perhaps greater consequence for the Han dynasty and future dynasties was the ascendance of the Xianbei people. They filled the vacuum of power on the vast northern steppe after the Northern Xiongnu were defeated by Han and fled to the Ili River valley (in modern Kazakhstan) in 91 CE. The Xianbei quickly occupied the deserted territories and incorporated some 100,000 remnant Xiongnu families into their new federation, which by the mid 2nd century CE stretched from the western borders of the Buyeo Kingdom in Jilin, to the Dingling in southern Siberia, and all the way west to the Ili River valley of the Wusun people. Although they raided Han in 110 CE to force a negotiation of better trade agreements, the later leader Tanshihuai (檀石槐) (d. 180 CE) refused kingly titles and tributary arrangements offered by Emperor Huan and defeated Chinese armies under Emperor Ling. When Tanshihuai died in 180 CE, the Xianbei Federation largely fell apart, yet it grew powerful once more during the 3rd century CE.
After being introduced in the 1st century CE, Buddhism became more popular in China during the 2nd century CE. The Parthian monk An Shigao traveled from Parthia to China in 148 CE and made translations of Buddhist works on the Hinayana and on yoga practices, which the Chinese associated with Daoist exercises. The Kushan monk Lokaksema from Gandhara was active in China from 178–198 CE; he translated the Perfection of Wisdom, the Shurangama Sutra, and the Pratyutpanna Sutra, and introduced to China the concepts of Akshobhya Buddha, Amitābha Buddha (of Pure Land Buddhism), and teachings about Manjusri. In 166 CE, Emperor Huan made sacrifices to Laozi and the Buddha. In that same year, the Book of Later Han records that Romans reached China from the maritime south and presented gifts to Huan's court, claiming to represent the Roman emperor Marcus Aurelius Antoninus (Andun 安敦) (r. 161–180 CE). De Crespigny speculates that they were Roman merchants, not diplomats. Archaeological findings at Óc Eo (near Ho Chi Minh City) in the Mekong Delta, which was once part of the Kingdom of Funan bordering the Chinese province of Jiaozhi (in northern Vietnam), have revealed Mediterranean goods such as Roman gold medallions made during the reigns of Antoninus Pius and Marcus Aurelius. Óc Eo may have been the same Southeast Asian seaport city mentioned in the 2nd-century Geography by the Greco-Roman writer Ptolemy (as well as in the work of Marinus of Tyre), a city called Cattigara, which a Greek sailor named Alexandros allegedly visited by sailing northeast of the Golden Peninsula (i.e. the Malay Peninsula) into the Magnus Sinus (i.e. the Gulf of Thailand and the South China Sea).
Decline of Eastern Han
In 166 CE, the official Li Ying (李膺) was accused by palace eunuchs of plotting treason with students at the Imperial University and with associates in the provinces who opposed the eunuchs. Emperor Huan was furious, arresting Li and his followers, who were only released from prison the following year due to pleas from the General-in-Chief Dou Wu (d. 168 CE), Emperor Huan's father-in-law. However, Li Ying and hundreds of his followers were proscribed from holding any offices and were branded as partisans (黨人).
After Emperor Huan's death, at the urging of the Grand Tutor (太傅) Chen Fan (陳蕃) (d. 168 CE), Dou Wu presented a memorial to the court in June 168 CE denouncing the leading eunuchs as corrupt and calling for their execution, but Empress Dowager Dou refused the proposal. This was followed by a memorial presented by Chen Fan calling for the heads of Hou Lan (d. 172 CE) and Cao Jie (d. 181 CE), and when this too was refused Dou Wu took formal legal action which could not be ignored by the court. When Shan Bing, a eunuch associate of Chen and Dou's, gained a forced confession from another eunuch that Cao Jie and Wang Fu (王甫) plotted treason, he prepared another damning written memorial on the night of 24–25 October which the opposing eunuchs secretly opened and read. Cao Jie armed Emperor Ling with a sword and hid him with his wet nurse, while Wang Fu had Shan Bing killed and Empress Dowager Dou incarcerated so that the eunuchs could use the authority of her seal.
Chen Fan entered the palace with eighty followers and engaged in a shouting match with Wang Fu, yet Chen was gradually surrounded, detained, and later trampled to death in prison that day (his followers were unharmed). At dawn, the general Zhang Huan (張奐), misled by the eunuchs into believing that Dou Wu was committing treason, engaged in a shouting match with Dou Wu at the palace gates, but as Dou's followers slowly deserted him and trickled over to Zhang's side, Dou was forced to commit suicide. In neither of these confrontations did any actual physical fighting break out.
With Dou Wu eliminated and the Empress Dowager under house arrest, the eunuchs renewed the proscriptions against Li Ying and his followers; in 169 CE they had hundreds more officials and students prohibited from holding office, sent their families into exile, and had Li Ying executed. The eunuchs barred potential enemies from court, sold and bartered offices, and infiltrated the military command. Emperor Ling even referred to the eunuchs Zhao Zhong and Zhang Rang as his "mother" and "father"; the latter two had so much influence over the emperor that they convinced him not to ascend to the top floors of tall towers in the capital, an effort to conceal from him the enormous mansions that the eunuchs had built for themselves. Although the partisan prohibitions were extended to hundreds more in 176 CE (including the distant relatives of those earlier proscribed), they were abolished in 184 CE with the outbreak of the Yellow Turban Rebellion, largely because the court feared that the gentry, bitter from their banishment from office, would join the rebel cause.
Yellow Turban Rebellion
In the Han dynasty's later decades, a growing number of heterodox sects appeared across the empire. These sects generally challenged the state ideology of Confucianism, and although most were peaceful, some eventually began to stage rebellions against the Han dynasty. One of the most influential sects, the Five Pecks of Rice religious society in Sichuan, was founded by Zhang Daoling in 142 CE. After claiming to have seen the deified Laozi as a holy prophet who appointed him as his earthly representative, known as the Celestial Master, Zhang created a highly organized, hierarchical Daoist movement which accepted only pecks of rice and no money from its lay followers. In 184 CE, the Five Pecks of Rice under Zhang Lu staged a rebellion in Sichuan and set up a theocratic Daoist state that endured until 215 CE.
Other religious movements included the sect of Xu Chang, which waged a rebellion from 172 to 174 CE in eastern China. The most successful movement belonged to the Yellow Turban Daoists of the Yellow and Huai River regions. They built a hierarchical church and believed that illness was the result of personal sins requiring confession. The Yellow Turbans became a militant organization that challenged Han authority by claiming they would bring about a utopian era of peace. In 184 CE Zhang Jue, a renowned faith healer and the leader of the Yellow Turbans, led hundreds of thousands of followers, identified by the yellow cloth they wrapped around their foreheads, in a rebellion across eight provinces. They had early successes against imperial troops, but by the end of 184 CE the Yellow Turban leadership, including Zhang, had been killed. Smaller groups of Yellow Turbans continued to revolt in the following years (until the last large group was incorporated into the forces of Chancellor Cao Cao in 192 CE), yet de Crespigny asserts that the rebellion's impact on the fall of Han was less consequential than the events which transpired in the capital following the death of Emperor Ling on 13 May 189 CE. However, Patricia Ebrey points out that many of the generals who raised armies to quell the rebellion never disbanded their forces and used them to amass their own power outside of imperial authority.
Downfall of the eunuchs
He Jin (d. 189 CE), half-brother to Empress He (d. 189 CE), was given authority over the standing army and palace guards when appointed General-in-Chief during the Yellow Turban Rebellion. Shortly after Empress He's son Liu Bian, known later as Emperor Shao of Han, was put on the throne, the eunuch Jian Shi plotted against He Jin, was discovered, and was executed on 27 May 189 CE; He Jin thus took over Jian's Army of the Western Garden. Yuan Shao (d. 202 CE), then an officer in the Army of the Western Garden, plotted with He Jin to overthrow the eunuchs by secretly ordering several generals to march towards the capital and forcefully persuade the Empress Dowager He to hand over the eunuchs. Yuan had these generals send in petition after petition to the Empress Dowager calling for the eunuchs' dismissal; Mansvelt Beck states that this "psychological war" finally broke the Empress Dowager's will and she consented. However, the eunuchs discovered this and used Empress Dowager He's mother Lady Wuyang and her brother He Miao (何苗), both of whom were sympathetic to the eunuchs, to have the order rescinded. On 22 September, the eunuchs learned that He Jin had had a private conversation with the Empress Dowager about executing them. They sent a message to He Jin that the Empress Dowager had more words to share with him; once he sat down in the hall to meet her, eunuchs rushed out of hiding and beheaded him. When the eunuchs ordered the imperial secretaries to draft an edict dismissing Yuan Shao, the secretaries asked for He Jin's approval, so the eunuchs showed them He Jin's severed head.
However, the eunuchs became besieged when Yuan Shao attacked the Northern Palace and his brother Yuan Shu (d. 199 CE) attacked the Southern Palace, breaching the gate and forcing the eunuchs to flee to the Northern Palace by the covered passageway connecting both. Zhao Zhong was killed on the first day and the fighting lasted until 25 September when Yuan Shao finally broke into the Northern Palace and purportedly slaughtered two thousand eunuchs. However, Zhang Rang managed to flee with Emperor Shao and his brother Liu Xie to the Yellow River, where he was chased down by the Yuan family troops and committed suicide by jumping into the river and drowning.
Coalition against Dong Zhuo
Dong Zhuo (d. 192 CE), the General of the Vanguard (serving under Huangfu Song) who had marched on Luoyang at Yuan Shao's request, saw the capital in flames from a distance and heard that Emperor Shao was wandering in the nearby hills. When Dong approached Emperor Shao, the latter became frightened and unresponsive, yet his brother Liu Xie explained to Dong what had happened. The ambitious Dong took over effective control of Luoyang and forced Yuan Shao to flee the capital on 26 September. Dong was made Excellency of Works (司空), one of the Three Excellencies. Despite protests, Dong had Emperor Shao demoted to Prince of Hongnong on 28 September while elevating his brother Liu Xie as emperor, later known as Emperor Xian of Han (r. 189–220 CE). Empress Dowager He was poisoned to death by Dong Zhuo on 30 September, followed by Liu Bian (the deposed Emperor Shao) on 3 March 190 CE.
Yuan Shao, once he left the capital, led a coalition of commanders, former officials, and soldiers of fortune to challenge Dong Zhuo. No longer viewing Luoyang as a safe haven, Dong burned the city to the ground and forced the imperial court to resettle at Chang'an in May 191 CE. In a conspiracy headed by the Minister over the Masses, Wang Yun (d. 192 CE), Dong was killed by his adopted son Lü Bu (d. 198 CE). Dong's subordinates then killed Wang and forced Lü to flee, throwing Chang'an into chaos.
Emperor Xian fled Chang'an in 195 CE and returned to Luoyang by August 196 CE. Meanwhile, the empire was being carved into eight spheres of influence, each ruled by powerful commanders or officials: in the northeast there was Yuan Shao and Cao Cao (155–220 CE); south of them was Yuan Shu, located just southeast of the capital; south of this was Liu Biao (d. 208 CE) in Jing; Sun Ce (d. 200 CE) controlled the southeast; in the southwest there was Liu Zhang (d. 219 CE) and Zhang Lu (d. 216 CE) located just north of him in Hanzhong; the southern Liang Province was inhabited by the Qiang people and various rebel groups. Although prognostication fueled speculation over the dynasty's fate, these warlords still claimed loyalty to Han, since the emperor was still at the pinnacle of a cosmic-religious system which ensured his political survival.
Rise of Cao Cao
Cao Cao, a Commandant of Cavalry during the Yellow Turban Rebellion and then a Colonel in the Army of the Western Garden by 188 CE, was Governor of Yan Province (modern western Shandong and eastern Henan) in 196 CE when he took the emperor from Luoyang to his headquarters at Xuchang. Yuan Shu declared his own Zhong Dynasty (仲朝) in 197 CE, yet this bold move cost him the loyalty of many of his followers, and he died penniless in 199 CE after attempting to offer his title to Yuan Shao. Gaining more power after defeating Gongsun Zan (d. 199 CE), Yuan Shao regretted not seizing the emperor when he had the chance and decided to act against Cao. The confrontation culminated in Cao Cao's victory at the Battle of Guandu in 200 CE, forcing Yuan to retreat to his territory. After Yuan Shao died in 202 CE, his sons fought over his inheritance, allowing Cao Cao to eliminate Yuan Tan (173–205 CE) and drive his brothers Yuan Shang and Yuan Xi to seek refuge with the Wuhuan people. Cao Cao asserted his dominance over the northeast when he defeated the Wuhuan led by Tadun at the Battle of White Wolf Mountain in 207 CE; the Yuan brothers fled to Gongsun Kang (d. 221 CE) in Liaodong, but the latter killed them and sent their heads to Cao Cao in submission.
When there was speculation that Liu Bei (161–223 CE), a scion of the imperial family who was formerly in the service of Cao Cao, was planning to take over the territory of the now ill Liu Biao in 208 CE, Cao Cao forced Liu Biao's son to surrender his father's land. Expecting Cao Cao to turn on him next, Sun Quan (182–252 CE), who inherited the territory of his brother Sun Ce in 200 CE, allied with Liu Bei and faced Cao Cao's naval force in 208 CE at the Battle of Chibi. This was a significant defeat for Cao Cao which ensured the continued disunity of China during the Three Kingdoms (220–265 CE).
Fall of the Han
When Cao Cao moved Emperor Xian to Xuchang in 196 CE, he took the title of Excellency of Works as Dong Zhuo had before him. In 208 CE, Cao abolished the three most senior offices, the Three Excellencies, and instead recreated two offices, the Imperial Counselor and Chancellor; he occupied the latter post. Cao was enfeoffed as the Duke of Wei in 213 CE, had Emperor Xian divorce Empress Fu Shou in 214 CE, and then had him marry his daughter as Empress Cao Jie in 215 CE. Finally, Cao took the title King of Wei in 216 CE, violating the rule that only Liu family members could become kings, yet he never deposed Emperor Xian. After Cao Cao died in 220 CE, his son Cao Pi (186–226 CE) inherited the title King of Wei and gained the uneasy allegiance of Sun Quan (while Liu Bei at this point had taken over Liu Zhang's territory of Yi Province). With debates over prognostication and signs from heaven showing the Han had lost the Mandate of Heaven, Emperor Xian agreed that the Han dynasty had reached its end and abdicated to Cao Pi on 11 December 220 CE, thus creating the state of Cao Wei, soon to oppose Shu Han in 221 CE and Eastern Wu in 229 CE.
- Endymion Wilkinson. Chinese History (1998), pp. 106–107.
- Ebrey (1999), 60.
- Ebrey (1999), 61.
- Cullen (2006), 1–2.
- Ebrey (1999), 63.
- Loewe (1986), 112–113.
- Loewe (1986), 112–113; Zizhi Tongjian, vol. 8.
- Loewe (1986), 113.
- Loewe (1986), 114.
- Zizhi Tongjian, vol. 8.
- Loewe (1986), 114–115; Loewe (2000), 254.
- Loewe (1986), 115.
- Loewe (2000), 255.
- Loewe (1986), 115; Davis (2001), 44.
- Loewe (1986), 116.
- Loewe (2000), 255; Loewe (1986), 117; Zizhi Tongjian, vol. 9.
- Davis (2001), 44; Loewe (1986), 116.
- Davis (2001), 44–45.
- Davis (2001), 44–45; Zizhi Tongjian, vol. 9.
- Davis (2001), 45; Zizhi Tongjian, vol. 9.
- Zizhi Tongjian, vol. 9.
- Davis (2001), 45.
- Davis (2001), 45–46.
- Davis (2001), 46.
- Loewe (1986), 122.
- Loewe (1986), 120.
- Hulsewé (1986), 526; Csikszentmihalyi (2006), 23–24; Hansen (2000), 110–112.
- Tom (1989), 112–113.
- Shi (2003), 63–65.
- Loewe (1986), 122–128.
- Hinsch (2002), 20.
- Loewe (1986), 126.
- Loewe (1986), 122–128; Zizhi Tongjian, vol. 15; Book of Han, vol. 13.
- Loewe (1986), 127–128.
- Di Cosmo (2002), 174–176; Torday (1997), 71–73.
- Di Cosmo (2001), 175–189.
- Torday (1997), 75–77.
- Di Cosmo (2002), 190–192; Torday (1997), 75–76.
- Di Cosmo (2002), 192; Torday (1997), 75–76
- Di Cosmo (2002), 192–193; Yü (1967), 9–10; Morton & Lewis (2005), 52
- Di Cosmo (2002), 193; Morton & Lewis (2005), 52.
- Yü (1986), 397; Book of Han, vol. 94a.
- Di Cosmo (2002), 193–195.
- Zizhi Tongjian, vol. 12.
- Di Cosmo (2002), 195–196; Torday (1997), 77; Yü (1967), 10–11.
- Loewe (1986), 130.
- Loewe (1986), 130–131; Wang (1982), 2.
- Loewe (1986), 130–131.
- Loewe (1986), 135.
- Loewe (1986), 135; Hansen (2000), 115–116.
- Zizhi Tongjian, vol. 13.
- Loewe (1986), 135–136; Hinsch (2002), 21.
- Loewe (1986), 136.
- Loewe (1986), 152.
- Torday (1997), 78.
- Loewe (1986), 136; Zizhi Tongjian, vol. 13.
- Loewe (1986), 136; Torday (1997), 78; Morton & Lewis (2005), 51–52; Zizhi Tongjian, vol. 13.
- Loewe (1986), 136–137.
- Hansen (2000), 117–119.
- Loewe (1986), 137–138.
- Loewe (1986), 149–150.
- Loewe (1986), 137–138; Loewe (1994), 128–129.
- Loewe (1994), 128–129.
- Csikszentmihalyi (2006), 25–27.
- Hansen (2000), 124–126; Loewe (1994), 128–129
- Loewe (1986), 139.
- Loewe (1986), 140–144.
- Loewe (1986), 141.
- Zizhi Tongjian, vol. 16.
- Loewe (1986), 141; Zizhi Tongjian, vol. 16.
- Loewe (1986), 141–142.
- Loewe (1986), 144.
- Ebrey (1999), 64.
- Torday (1997), 80–81.
- Torday (1997), 80–81; Yü (1986), 387–388; Di Cosmo (2002), 196–198.
- Di Cosmo (2002), 201–203.
- Torday (1997), 82–83; Yü (1986), 388–389.
- Di Cosmo (2002), 199–201 & 204–205; Torday (1997), 83–84.
- Yü (1986), 388–389.
- Yü (1986), 388–389; Di Cosmo (2002), 199–200.
- Kramers (1986), 752–753.
- Kramers (1986), 754–755.
- Kramers (1986), 753–754.
- Kramers (1986), 754.
- Kramers (1986), 754–756.
- Kramers (1986), 754–756; Morton & Lewis (2005), 53.
- Ebrey (1999), 77.
- Ebrey (1999), 77–78.
- Tom (1989), 99.
- Ebrey (1999), 80.
- Torday (1997), 91.
- Torday (1997), 83–84; Yü (1986), 389–390.
- Di Cosmo (2002), 211–214; Yü (1986) 389–390.
- Yü (1986) 389–390; Di Cosmo (2002), 214; Torday (1997), 91–92.
- Yü (1986), 390; Di Cosmo (2002), 237–239.
- Yü (1986), 390; Di Cosmo (2002), 240.
- Di Cosmo (2002), 232.
- Yü (1986), 391; Di Cosmo (2002), 241–242; Chang (2007), 5–6.
- Yü (1986), 391; Chang (2007), 8.
- Chang (2007), 23–33.
- Chang (2007), 53–56.
- Chang (2007), 6.
- Chang (2007), 173.
- Di Cosmo (2002), 241–244, 249–250.
- Morton & Lewis (2005), 56.
- An (2002), 83.
- Di Cosmo (2002), 247–249; Yü (1986), 407; Torday (1997), 104; Morton & Lewis (2005), 54–55.
- Torday (1997), 105–106.
- Torday (1997), 108–112.
- Torday (1997), 114–117.
- Ebrey (1999), 69.
- Torday (1997), 112–113.
- Ebrey (1999), 70.
- Di Cosmo (2002), 250–251.
- Yü (1986), 390–391.
- Chang (2007), 174; Yü (1986), 409–411.
- Yü (1986), 409–411.
- Torday (1997), 119–120.
- Yü (1986), 452.
- Yü (1986) 451–453.
- Ebrey (1999), 83.
- Yü (1986), 448.
- Yü (1986), 448–449.
- Pai (1992), 310–315.
- Hinsch (2002), 21–22; Wagner (2001), 1–2.
- Wagner (2001), 13–14.
- Wagner (2001), 13.
- Ebrey (1999), 75; Morton & Lewis (2005), 57.
- Wagner (2001), 13–17; Nishijima (1986), 576.
- Loewe (1986), 160–161.
- Loewe (1986), 160–161; Nishijima (1986), 581–582.
- Nishijima (1986), 586–588.
- Nishijima (1986), 588.
- Ebrey (1999), 66.
- Wang (1982), 100.
- Loewe (1986), 173–174.
- Loewe (1986), 175–177; Loewe (2000), 275.
- Zizhi Tongjian, vol. 22; Loewe (2000), 275; Loewe (1986), 178.
- Loewe (1986), 178.
- Huang (1988), 44; Loewe (1986), 180–182; Zizhi tongjian, vol. 23.
- Huang (1988), 45.
- Huang (1988), 44; Loewe (1986), 183–184.
- Loewe (1986), 183–184.
- Loewe (1986), 184.
- Huang (1988), 46; Loewe (1986), 185.
- Huang (1988), 46.
- Loewe (1986), 185–187.
- Loewe (1986), 187–197; Chang (2007), 175–176.
- Loewe (1986), 187–197.
- Loewe (1986), 187–206.
- Wagner (2001), 16–19.
- Loewe (1986), 196.
- Loewe (1986), 201.
- Loewe (1986), 201–202.
- Loewe (1986), 208.
- Loewe (1986), 208; Csikszentmihalyi (2006), xxv–xxvi
- Loewe (1986), 196–198; Yü (1986), 392–394.
- Yü (1986), 409.
- Yü (1986), 410–411.
- Loewe (1986), 197.
- Yü (1986), 410–411; Loewe (1986), 198.
- Yü (1986), 394; Morton & Lewis (2005), 55.
- Yü (1986), 395.
- Yü (1986), 395–396; Loewe (1986), 196–197.
- Yü (1986), 396–397.
- Yü (1986), 396–398; Loewe (1986), 211–213; Zizhi Tongjian, vol. 29.
- Yü (1986), 396–398; Loewe (1986), 211–213.
- Yü (1986), 398.
- Bielenstein (1986), 225–226; Huang (1988), 46–48.
- Bielenstein (1986), 225–226; Loewe (1986), 213.
- Bielenstein (1986), 225–226.
- Bielenstein (1986), 227; Zizhi Tongjian, vol. 33; Zizhi Tongjian, vol. 34.
- Bielenstein (1986), 227–228.
- Bielenstein (1986), 228–229.
- Bielenstein (1986), 229–230.
- Bielenstein (1986), 230–231; Hinsch (2002), 23–24.
- Bielenstein (1986), 230–231; Hinsch (2002), 23–24; Ebrey (1999), 66.
- Hansen (2000), 134; Lewis (2007), 23.
- Hansen (2000), 134; Bielenstein (1986), 232; Lewis (2007), 23.
- Lewis (2007), 23; Bielenstein (1986), 234; Morton & Lewis (2005), 58.
- Bielenstein (1986), 232–233.
- Bielenstein (1986), 232–233; Morton & Lewis (2005), 57.
- Bielenstein (1986), 233.
- Bielenstein (1986), 234; Hinsch (2002), 24.
- Bielenstein (1986), 236.
- Bielenstein (1986), 237.
- Bielenstein (1986), 238.
- Bielenstein (1986), 238–239; Yü (1986), 450.
- Yü (1986), 450.
- Hansen (2000), 135; Bielenstein (1986), 241–242; de Crespigny (2007), 196.
- Hansen (2000), 135; Bielenstein (1986), 241–242.
- Hansen (2000), 135; de Crespigny (2007), 196; Bielenstein (1986), 243–244.
- de Crespigny (2007), 196; Bielenstein (1986), 243–244
- Bielenstein (1986), 246; de Crespigny (2007), 558; Zizhi Tongjian, vol. 38.
- de Crespigny (2007), 558–559; Bielenstein (1986), 247.
- de Crespigny (2007), 558–559.
- Bielenstein (1986), 248; de Crespigny (2007), 568.
- Robert Hymes (2000). John Stewart Bowman (ed.). Columbia Chronologies of Asian History and Culture. Columbia University Press. p. 13. ISBN 978-0-231-11004-4.
- Bielenstein (1986), 248–249; de Crespigny (2007), 197.
- de Crespigny (2007), 197, 560, & 569; Bielenstein (1986), 249–250.
- de Crespigny (2007), 559–560.
- de Crespigny (2007), 560; Bielenstein (1986), 251.
- de Crespigny (2007), 197–198 & 560; Bielenstein (1986), 251–254.
- de Crespigny (2007), 560–561; Bielenstein (1986), 254.
- Bielenstein (1986), 254; de Crespigny (2007), 561.
- Bielenstein (1986), 254; de Crespigny (2007), 269 & 561.
- Bielenstein (1986), 255.
- de Crespigny (2007), 54–55.
- Bielenstein (1986), 255; de Crespigny (2007), 270.
- Hinsch (2002), 24–25; Cullen (2006), 1.
- Wang (1982), 29–30; Bielenstein (1986), 262.
- Wang (1982), 30–33.
- Hansen (2000), 135–136.
- Ebrey (1999), 73.
- Nishijima (1986), 595–596.
- Ebrey (1999), 82.
- Wang (1982), 55–56.
- Ebrey (1986), 609.
- de Crespigny (2007), 564–565.
- Ebrey (1986), 613.
- Bielenstein (1986), 256.
- de Crespigny (2007), 605.
- de Crespigny (2007), 606.
- Bielenstein (1986), 243.
- de Crespigny (2007), 608–609.
- de Crespigny (2007), 496.
- de Crespigny (2007), 498.
- de Crespigny (2007), 498; Deng (2005), 67.
- de Crespigny (2007), 591.
- de Crespigny (2007), 591; Hansen (2000), 137–138.
- Hansen (2000), 137–138.
- de Crespigny (2007), 592.
- de Crespigny (2007), 562 & 660; Yü (1986), 454.
- Yü (1986), 399–400.
- Yü (1986), 401.
- Yü (1986), 403.
- Torday (1997), 390–391.
- Yü (1986), 413–414.
- Yü (1986), 404.
- Yü (1986), 414–415.
- de Crespigny (2007), 73.
- Yü (1986), 415 & 420.
- Yü (1986), 415; de Crespigny (2007), 171.
- Yü (1986), 415.
- de Crespigny (2007), 5.
- de Crespigny (2007), 6; Torday (1997), 393.
- Yü (1986), 415–416.
- de Crespigny (2007), 497 & 590.
- Yü (1986), 460–461; de Crespigny (2007), 239–240.
- Wood (2002), 46–47; Morton & Lewis (2005), 59.
- Yü (1986), 450–451.
- Demiéville (1986), 821–822.
- Demiéville (1986), 823.
- Demiéville (1986), 823; Akira (1998), 247–248.
- Bielenstein (1986), 278; Zizhi Tongjian, vol. 40; Zizhi Tongjian, vol. 43.
- Bielenstein (1986), 257–258; de Crespigny (2007), 607–608.
- de Crespigny (2007), 499.
- Hansen (2000), 136.
- de Crespigny (2007), 499 & 588–589.
- Bielenstein (1986), 280–281.
- de Crespigny (2007), 589; Bielenstein (1986), 282–283.
- de Crespigny (2007), 531; Bielenstein (1986), 283.
- Bielenstein (1986), 283; de Crespigny (2007), 122–123; Zizhi Tongjian, vol. 49.
- de Crespigny (2007), 122–123; Bielenstein (1986), 283–284.
- Bielenstein (1986), 284; de Crespigny (2007), 128 & 580.
- Bielenstein (1986), 284–285; de Crespigny (2007), 582–583.
- Bielenstein (1986), 284–285; de Crespigny (2007), 473–474.
- Bielenstein (1986), 285; de Crespigny (2007), 477–478, 595–596.
- Bielenstein (1986) 285; de Crespigny (2007), 477–478, 595–596; Zizhi Tongjian, vol. 53.
- Bielenstein (1986), 285–286; de Crespigny (2007), 597–598.
- de Crespigny (2007), 510; Beck (1986), 317–318.
- Loewe (1994), 38–52.
- de Crespigny (2007), 126.
- de Crespigny (2007), 126–127.
- de Crespigny (2007), 581–582.
- de Crespigny (2007), 475.
- de Crespigny (2007), 474–475 & 1049–1051; Minford & Lau (2002), 307; Needham (1965), 30, 484, 632, 627–630.
- de Crespigny (2007), 477.
- de Crespigny (2007), 475; Bielenstein (1986), 287–288.
- de Crespigny (2007), 596–597.
- de Crespigny (2007), 596.
- de Crespigny (2007), 597.
- Hansen (2000), 141.
- de Crespigny (2007), 597, 601–602.
- de Crespigny (2007), 599.
- de Crespigny (2007), 601–602; Hansen (2000), 141–142.
- de Crespigny (2007), 513–514.
- Yü (1986), 421; Chang (2007), 22.
- Yü (1986), 421.
- de Crespigny (2007), 123.
- Yü (1986), 422 & 425–426.
- Zizhi Tongjian, vol. 49; Book of Later Han, vol. 47.
- Yü (1986), 425–426.
- de Crespigny (2007), 123–124; Zizhi Tongjian, vol. 49; Book of Later Han, vol. 47, vol. 87; see also Yü (1986), 429–430.
- de Crespigny (2007), 123–124.
- de Crespigny (2007), 123–124; Yü (1986), 430–432.
- Yü (1986), 432.
- Yü (1986), 433–435.
- Yü (1986), 416–417 & 420.
- Yü (1986), 405 & 443–444.
- Yü (1986), 443–444.
- Yü (1986), 444–445.
- Yü (1986), 445–446.
- Demiéville (1986), 823; Akira (1998), 248; Zhang (2002), 75.
- Akira (1998), 248 & 251.
- Demieville (1986), 825–826.
- de Crespigny (2007), 600; Yü (1986), 460–461.
- de Crespigny (2007), 600.
- Young (2001), p. 29.
- Mawer (2013), p. 38.
- O'Reilly (2007), p. 97.
- Suárez (1999), p. 92.
- Ball (2016), p. 153.
- de Crespigny (2007), 513; Barbieri-Low (2007), 207; Huang (1988), 57.
- de Crespigny (2007), 602.
- Beck (1986), 319–320.
- Beck (1986), 320–321.
- Beck (1986), 321–322.
- Beck (1986), 322.
- Beck (1986), 322; Zizhi Tongjian, vol. 56.
- de Crespigny (2007), 511.
- Beck (1986), 323; Hinsch (2002), 25–26.
- de Crespigny (2016), pp. 402–407.
- Hansen (2000), 144–145.
- Hendrischke (2000), 140–141.
- de Crespigny (2016), pp. 402–403.
- Hansen (2000), 145–146.
- Hansen (2000), 145–146; de Crespigny (2007), 514–515; Beck (1986), 339–340.
- de Crespigny (2007), 515.
- Ebrey (1999), 84.
- Beck (1986), 339; Huang (1988), 59–60.
- Beck (1986), 341–342.
- Beck (1986), 343.
- Beck (1986), 344.
- Beck (1986), 344; Zizhi Tongjian, vol. 59.
- Beck (1986), 345.
- Beck (1986), 345; Hansen (2000), 147; Morton & Lewis (2005), 62.
- Beck (1986), 345–346.
- Beck (1986), 346.
- Beck (1986), 346–347.
- Beck (1986), 347.
- Beck (1986), 347–349.
- de Crespigny (2007), 158.
- Zizhi Tongjian, vol. 60.
- Beck (1986), 349.
- Beck (1986), 350–351.
- de Crespigny (2007), 35–36.
- de Crespigny (2007), 36.
- Beck (1986), 351.
- Zizhi Tongjian, vol. 63.
- de Crespigny (2007), 37.
- de Crespigny (2007), 37; Beck (1986), 352.
- Beck (1986), 352.
- Beck (1986), 353–354.
- Beck (1986), 352–353.
- Beck (1986), 354–355.
- Beck (1986), 355–366.
- Beck (1986), 356–357; Hinsch (2002), 26.
- Akira, Hirakawa. (1998). A History of Indian Buddhism: From Sakyamani to Early Mahayana. Translated by Paul Groner. New Delhi: Jainendra Prakash Jain At Shri Jainendra Press. ISBN 978-81-208-0955-0.
- An, Jiayao. (2002). "When Glass Was Treasured in China," in Silk Road Studies VII: Nomads, Traders, and Holy Men Along China's Silk Road, 79–94. Edited by Annette L. Juliano and Judith A. Lerner. Turnhout: Brepols Publishers. ISBN 978-2-503-52178-7.
- Ball, Warwick (2016). Rome in the East: Transformation of an Empire, 2nd edition. London & New York: Routledge, ISBN 978-0-415-72078-6.
- Beck, Mansvelt. (1986). "The Fall of Han," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 317–376. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Barbieri-Low, Anthony J. (2007). Artisans in Early Imperial China. Seattle & London: University of Washington Press. ISBN 978-0-295-98713-2.
- Bielenstein, Hans. (1986). "Wang Mang, the Restoration of the Han Dynasty, and Later Han," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 223–290. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Chang, Chun-shu. (2007). The Rise of the Chinese Empire: Volume II; Frontier, Immigration, & Empire in Han China, 130 B.C. – A.D. 157. Ann Arbor: University of Michigan Press. ISBN 978-0-472-11534-1.
- Csikszentmihalyi, Mark. (2006). Readings in Han Chinese Thought. Indianapolis and Cambridge: Hackett Publishing Company, Inc. ISBN 978-0-87220-710-3.
- Cullen, Christoper. (2006). Astronomy and Mathematics in Ancient China: The Zhou Bi Suan Jing. Cambridge: Cambridge University Press. ISBN 978-0-521-03537-8.
- Davis, Paul K. (2001). 100 Decisive Battles: From Ancient Times to the Present. New York: Oxford University Press. ISBN 978-0-19-514366-9.
- de Crespigny, Rafe. (2007). A Biographical Dictionary of Later Han to the Three Kingdoms (23–220 AD). Leiden: Koninklijke Brill. ISBN 978-90-04-15605-0.
- de Crespigny, Rafe (2016). Fire over Luoyang: A History of the Later Han Dynasty 23–220 AD. Leiden, Boston: Brill. ISBN 9789004324916.
- Demiéville, Paul. (1986). "Philosophy and religion from Han to Sui," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 808–872. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Deng, Yingke. (2005). Ancient Chinese Inventions. Translated by Wang Pingxing. Beijing: China Intercontinental Press (五洲传播出版社). ISBN 978-7-5085-0837-5.
- Di Cosmo, Nicola. (2002). Ancient China and Its Enemies: The Rise of Nomadic Power in East Asian History. Cambridge: Cambridge University Press. ISBN 978-0-521-77064-4.
- Ebrey, Patricia. (1986). "The Economic and Social History of Later Han," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 608–648. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Ebrey, Patricia (1999). The Cambridge Illustrated History of China. Cambridge: Cambridge University Press. ISBN 978-0-521-66991-7.
- Hansen, Valerie. (2000). The Open Empire: A History of China to 1600. New York & London: W.W. Norton & Company. ISBN 978-0-393-97374-7.
- Hendrischke, Barbara. (2000). "Early Daoist Movements" in Daoism Handbook, ed. Livia Kohn, 134–164. Leiden: Brill. ISBN 978-90-04-11208-7.
- Hinsch, Bret. (2002). Women in Imperial China. Lanham: Rowman & Littlefield Publishers, Inc. ISBN 978-0-7425-1872-8.
- Huang, Ray. (1988). China: A Macro History. Armonk & London: M.E. Sharpe Inc., an East Gate Book. ISBN 978-0-87332-452-6.
- Hulsewé, A.F.P. (1986). "Ch'in and Han law," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 520–544. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Kramers, Robert P. (1986). "The Development of the Confucian Schools," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 747–756. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Lewis, Mark Edward. (2007). The Early Chinese Empires: Qin and Han. Cambridge: Harvard University Press. ISBN 978-0-674-02477-9.
- Loewe, Michael. (1986). "The Former Han Dynasty," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 103–222. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Loewe, Michael. (1994). Divination, Mythology and Monarchy in Han China. Cambridge, New York, and Melbourne: Cambridge University Press. ISBN 978-0-521-45466-7.
- Loewe, Michael. (2000). A Biographical Dictionary of the Qin, Former Han, and Xin Periods (221 BC — AD 24). Leiden, Boston, Koln: Koninklijke Brill NV. ISBN 978-90-04-10364-1.
- Mawer, Granville Allen (2013). "The Riddle of Cattigara" in Robert Nichols and Martin Woods (eds), Mapping Our World: Terra Incognita to Australia, 38–39, Canberra: National Library of Australia. ISBN 978-0-642-27809-8.
- Minford, John and Joseph S.M. Lau. (2002). Classical Chinese literature: an anthology of translations. New York: Columbia University Press. ISBN 978-0-231-09676-8.
- Morton, William Scott and Charlton M. Lewis. (2005). China: Its History and Culture: Fourth Edition. New York City: McGraw-Hill. ISBN 978-0-07-141279-7.
- Needham, Joseph (1965). Science and Civilization in China: Volume 4, Physics and Physical Technology, Part II: Mechanical Engineering. Cambridge: Cambridge University Press. Reprint from Taipei: Caves Books, 1986. ISBN 978-0-521-05803-2.
- Nishijima, Sadao. (1986). "The Economic and Social History of Former Han," in Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 545–607. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- O'Reilly, Dougald J.W. (2007). Early Civilizations of Southeast Asia. Lanham, New York, Toronto, Plymouth: AltaMira Press, Division of Rowman and Littlefield Publishers. ISBN 0-7591-0279-1.
- Pai, Hyung Il. "Culture Contact and Culture Change: The Korean Peninsula and Its Relations with the Han Dynasty Commandery of Lelang," World Archaeology, Vol. 23, No. 3, Archaeology of Empires (Feb., 1992): 306–319.
- Shi, Rongzhuan. "The Unearthed Burial Jade in the Tombs of Han Dynasty's King and Marquis and the Study of Jade Burial System", Cultural Relics of Central China, No. 5 (2003): 62–72. ISSN 1003-1731.
- Suárez, Thomas (1999). Early Mapping of Southeast Asia. Singapore: Periplus Editions. ISBN 962-593-470-7.
- Tom, K.S. (1989). Echoes from Old China: Life, Legends, and Lore of the Middle Kingdom. Honolulu: The Hawaii Chinese History Center of the University of Hawaii Press. ISBN 978-0-8248-1285-0.
- Torday, Laszlo. (1997). Mounted Archers: The Beginnings of Central Asian History. Durham: The Durham Academic Press. ISBN 978-1-900838-03-0.
- Wagner, Donald B. (2001). The State and the Iron Industry in Han China. Copenhagen: Nordic Institute of Asian Studies Publishing. ISBN 978-87-87062-83-1.
- Wang, Zhongshu. (1982). Han Civilization. Translated by K.C. Chang and Collaborators. New Haven and London: Yale University Press. ISBN 978-0-300-02723-5.
- Wilkinson, Endymion. (1998). Chinese History: A Manual. Cambridge and London: Harvard University Asia Center of the Harvard University Press. ISBN 978-0-674-12377-9.
- Wood, Frances. (2002). The Silk Road: Two Thousand Years in the Heart of Asia. Berkeley and Los Angeles: University of California Press. ISBN 978-0-520-24340-8.
- Young, Gary K. (2001), Rome's Eastern Trade: International Commerce and Imperial Policy, 31 BC – AD 305, London & New York: Routledge, ISBN 0-415-24219-3.
- Yü, Ying-shih. (1967). Trade and Expansion in Han China: A Study in the Structure of Sino-Barbarian Economic Relations. Berkeley: University of California Press.
- Yü, Ying-shih. (1986). "Han Foreign Relations," in The Cambridge History of China: Volume I: the Ch'in and Han Empires, 221 B.C. – A.D. 220, 377–462. Edited by Denis Twitchett and Michael Loewe. Cambridge: Cambridge University Press. ISBN 978-0-521-24327-8.
- Zhang, Guanuda. (2002). "The Role of the Sogdians as Translators of Buddhist Texts," in Silk Road Studies VII: Nomads, Traders, and Holy Men Along China's Silk Road, 75–78. Edited by Annette L. Juliano and Judith A. Lerner. Turnhout: Brepols Publishers. ISBN 978-2-503-52178-7.
- Dubs, Homer H. (trans.) The History of the Former Han Dynasty. 3 vols. Baltimore: Waverly Press, 1938–
- Hill, John E. (2009) Through the Jade Gate to Rome: A Study of the Silk Routes during the Later Han Dynasty, 1st to 2nd Centuries CE. John E. Hill. BookSurge, Charleston, South Carolina. ISBN 978-1-4392-2134-1.
- Media related to Han Dynasty at Wikimedia Commons | https://en.m.wikipedia.org/wiki/Fall_of_Qin | 21 |
37 | Students often consider politicians as somehow irrelevant to the political system itself. In fact, of course, those who serve in office shape the system as much as written laws or constitutions do. Students should know that government in the United States was not always the province of politicians. It was once the avocation of “gentlemen” and became the business of professionals at a certain time only because of particular circumstances.
There were no professional politicians in the 1700s. People like Madison, Jefferson, Hamilton, and John Adams could be political, but they were not politicians in our sense of the term. They did not derive an appreciable part of their income from public office, nor did they spend much time campaigning for votes. By contrast, the leading public figures of the early nineteenth century, Martin Van Buren, Henry Clay, Daniel Webster, and John C. Calhoun, were hardly ever out of office and spent most of their time devising ways of advancing themselves politically. Unlike Jefferson or Washington, who suffered financially from serving in government, successful public officials in the later period tended to leave office richer than when they had entered.
The growing federal and state bureaucracies made it possible for ambitious young men to make politics a career. By the 1830s, the Democrats and Whigs rewarded their workers with civil service jobs. In return, these bureaucrats "kicked back" a part of their income to the party, which used the funds to finance other campaigns. At the center of each political party, there was a corps of professionals, usually living off the public payroll, whose careers were inextricably tied to the success of the party. As one New York politician confessed, he would vote for a dog if his party nominated it.
Coincident with this development was the disappearance of all real issues from American politics. In the 1790s, politics was intensely ideological, partly because of the influence of the French Revolution and partly because party leaders were intellectuals. The second-party system emerged in a nation where it seemed the white, Protestant, small farmer and his family made up the soul of society and that only their interests should be protected and advanced. There were differences of opinion about how this was to be done, but these were disputes about means rather than ends.
Because politicians must campaign on something that resembles an issue in order to distinguish themselves from their opponents, they created issues. The ideal issue was one that everyone agreed on, so that endorsing it would not lose votes. Unfortunately, it was hard to get votes by being for motherhood and apple pie, because any opponent would be just as enthusiastic about them. Nevertheless, then as now, politicians would suddenly proclaim undying devotion to common verities, which always seemed to be in danger of extinction whenever an election took place. The second best issue was one that was too complicated for the average person to understand. The tariff met this qualification. In his autobiography, Van Buren recorded an instance of how artfully he used the complexity of the tariff question to befuddle an audience. After his speech on the subject, he mingled with the audience and overheard the following conversation:
“Mr. Knower! that was a very able speech!”
“Yes, very able,” was the reply.
“Mr. Knower! on which side of the Tariff question was it?”
Van Buren was infamous for evasion and was accused by his contemporaries of having raised the art of double-talk to a true philosophy, called “noncommitalism,” but even the plain-speaking Andrew Jackson found the tariff an excellent opportunity for his own species of political hedging. Jackson never budged from his support of a “judicious” tariff, nor did he ever explain what that meant.
To say that there were no real political issues does not mean that there were no real issues. Slavery clearly violated the fundamental ideals on which the nation had been founded, and slavery was an issue that would not go away. Because divisive, controversial issues were avoided at all costs by professional politicians, the second-party system closed the political forum to the question of slavery. Emancipation, when it came, had to come from outside the normal political process.
The second-party system extended the reality of democracy in America. Parties eagerly enlisted young men of talent and financed their political careers, enabling sons of average families to seek high public office. The parties made politics what it remains today, an exciting spectator sport full of sound and fury, even if it often signifies nothing.
RELIVING THE PAST
Andrew Jackson dominated the political arena in the 1830s. His forcefulness was illustrated at the annual Jefferson Day dinner on April 15, 1830, in the midst of the nullification controversy. When the time for giving toasts arrived, Jackson stared at the South Carolinians present and offered, “Our Union, it must be preserved!” John C. Calhoun replied, “The Union, next to our liberty most dear! May we all remember that it can only be preserved by respecting the rights of the states and distributing equally the benefit and burden of the Union!” Contrast the two toasts and you begin to realize that Jackson’s pithiness, in an oratorical age, confirmed his reputation as a man of action. Martin Van Buren reported the above incident in his autobiography, an immensely valuable source that was edited and first published by John C. Fitzpatrick in 1920 and more recently reprinted by the Da Capo Press in 1973.
Van Buren’s great rival in New York and national politics was Thurlow Weed. He too wrote an autobiography, now out of print and not easy to find. It is a good supplement to Van Buren’s because it gives us the Whig version of events. One of the characteristics that made Weed a superb politician was his ability to face reality. When asked by a political ally to agree that the
Democrats could never answer Daniel Webster’s attack on Jackson’s veto of the Bank bill, Weed correctly predicted that, “two sentences in the veto message would carry ten electors against the bank for every one that Mr. Webster’s arguments and eloquence secured in favor of it.” Weed’s autobiography was edited by his daughter, Harriet Weed, and was published by Houghton Mifflin and Company in 1883.
DEMOCRATIC SPACE: THE NEW HOTELS
The author uses the “hotel culture” of the early nineteenth century to exemplify the democratic culture of the new republic. The hotel welcomed all white males who could pay their way in, but excluded the poor, women alone, and blacks.
DEMOCRACY IN THEORY AND PRACTICE
Americans in the 1820s and 1830s no longer feared that democracy would lead to anarchy. Each individual was to be given an equal start in life, but equality of opportunity did not mean equality of result. The American people were happy to accept a society of winners and losers.
A. Democracy and Society
Despite persistent and growing economic inequality, Americans generally believed they had created an egalitarian society, and in many ways they had. Political equality for all white males was a radical achievement, and Americans came to prefer the “self-made” man to one who had inherited wealth and refinement. The egalitarian spirit carried over into an attack on the licensed professions, and it was believed that any white male should have a chance to practice law or medicine, whether or not he was trained.
B. Democratic Culture
The democratic ethos also affected the arts in this period. Artists no longer worked for an aristocratic elite, but for a mass audience. Many writers and painters pleased the public by turning out Gothic horror stories, romantic women’s fiction, melodramas, or genre paintings that lovingly depicted the American way of life. More serious artists sought to inspire the masses with neoclassical sculpture, or landscapes of untamed nature. Only a few individuals, like Edgar Allan Poe, were truly avant-garde, romantic artists.
C. Democratic Political Institutions
Democratic ideals had a real impact on the American political system. Nearly all adult white males gained the right to vote whether or not they had property. Offices that had been appointive, such as judgeships or the Electoral College, were made elective. The greatest change took place in the style of politics. Professional politicians emerged, actively seeking votes and acting as servants of the people.
Men such as Martin Van Buren in New York extolled the public benefits of a two-party system, and political machines began to develop on the state level. National parties eventually developed, the Democrats and the Whigs. Although political parties often served special economic interests, it should be remembered that American politics always retained a strong republican ideology and that all parties sought to preserve equality of opportunity. The Whigs and Democrats differed on whether this could be done best with or without active intervention by the national government, but neither party gave much thought to extending rights to anyone other than adult white males. It was left to other, more radical, parties to argue the cause of African Americans, women, and working people.
D. Economic Issues
The Panic of 1819 made economic issues a matter of great concern, but there was no consensus on what should be done. Some wanted to retreat to simpler times to avoid the boom and bust associated with a growing market economy, while others wanted the government to subsidize the growth of that sort of economy. These demands for what seemed like favors aroused fears that a “money power” had become a threat to liberty.
The growth of economic inequality prompted the formation of working men's parties, which agitated for a ten-hour working day, among other things. The same dismay at the rise of great wealth led abolitionists and advocates of women's rights to organize in order to preserve liberty and democracy. All of these movements, however, were fatally flawed because they shared the pervasive racism that would deny to blacks the rights being demanded for whites.
JACKSON AND THE POLITICS OF DEMOCRACY
The period from the 1820s to the 1840s is with some justice called “the Age of Jackson.” This section explains why.
A. The Election of 1824 and J. Q. Adams’s Administration
The election of 1824 furthered Jackson’s political career even though he lost the election. The election began as a scramble between five men, John Quincy Adams, William Crawford, Henry Clay, John C. Calhoun, and Andrew Jackson. Because no one received a majority of the electoral votes, the House of Representatives had to decide the election, and its choice came down to Adams or Jackson. When Clay gave his support to Adams, the House elected him president. Adams began his administration under a cloud of suspicion because it was widely believed that he had “bought” the presidency. By 1826, it was apparent that Adams had failed as a president. The Jackson forces took control of Congress by simply giving every special interest whatever it wanted.
B. Jackson Comes to Power
The Jackson people, who became the Democratic party, were well organized for the election of 1828. The Democrats appealed to sectional self-interest and pioneered the art of making politics exciting to the average man, but the greatest asset the Democrats had was Jackson himself. Rigid and forceful, Jackson was accepted as a true man of the people, and he defeated Adams easily, especially in the slaveholding states. Jackson’s triumph was a personal one; he stood on no political platform. As President, he democratized the office by firing at will whatever officeholders he did not like, defending the practice by asserting the right of all men to a government post.
C. Indian Removal
Jackson inherited the Indian removal policy from previous administrations but carried it to its harshest conclusion. He agreed with the southern states that the federal government had not pushed the Indians hard enough. He urged Congress to speed up the relocation of the Indians living east of the Mississippi, and when the Cherokees resisted, the army was eventually sent to evict them from their homes and herd them over the Mississippi. Some 4,000 Cherokees died along that "Trail of Tears."
D. The Nullification Crisis
The South had reason to fear a strong national government that might some day decide to do something about slavery. Led by John C. Calhoun, southern intellectuals began working out a defense of state sovereignty. The first major controversy between federal authority and states' rights came when South Carolina objected to the high tariff of 1828. The South, however, trusted Jackson to be sympathetic, and South Carolina took no action on the 1828 tariff. By 1832, the Carolinians had come to distrust Jackson, partly as a result of a personal feud between Jackson and Calhoun, but mainly because South Carolina feared a forceful president and Jackson rejected the idea of state sovereignty.
When in 1832 a new tariff was passed, South Carolina, still unhappy with the rates, nullified it. Jackson responded by threatening to send the army into South Carolina. Both sides eventually retreated; South Carolina got a lower tariff, but Jackson had demonstrated the will of the federal government to rule the states, by force if necessary.
One of the most important actions taken by Jackson was his destruction of the Bank of the United States. “The Bank War” was a symbolic defense of democratic values and led to two important results, economic disruption and a two-party system.
A. Mr. Biddle’s Bank
Although the Bank of the United States contributed to the economic growth and stability of the United States, it had never been very popular. In a democratic era, it was open to charges of giving special privileges to a few. Its manager, Nicholas Biddle, was a competent man who looked and behaved like an aristocrat. Also, in an era of rising democracy, the Bank possessed great power and privilege without accountability to the public.
B. The Bank Veto and the Election of 1832
Jackson came into office suspecting the Bank of the United States and made vague threats against it. Biddle overreacted and asked Congress to recharter the Bank in 1832, four years before the old charter was due to expire. Henry Clay took up the Bank’s cause, hoping that congressional approval of the Bank would embarrass Jackson.
When Congress passed the new charter, Jackson vetoed it on the grounds that the Bank was unconstitutional, despite a Supreme Court decision to the contrary. Jackson claimed he vetoed the Bank charter because it violated equality of opportunity and Congress upheld the veto. Clay and Jackson took their argument to the public in the election of 1832 where Jackson’s victory spelled doom for the Bank.
C. Killing the Bank
Jackson showed his opponents no mercy and proceeded to destroy the Bank by withdrawing the government’s money and depositing it into selected state banks (the “pet banks”). Biddle then used his powers as a central banker to bring on a nationwide recession, which he hoped would be blamed on Jackson. That ploy failed, but Jackson’s destruction of the Bank cost him support in Congress, especially in the Senate, where fears of a dictatorship began to emerge.
D. The Emergence of the Whigs
Opposition to Jackson formed the Whig party. Along the way, the Whigs absorbed the Anti-Masonic party, which had suddenly flourished after 1826 when it attacked the Masons as a secret, privileged elite. The Anti-Masons brought with them to the Whig party a disgust with "loose" living and a willingness to use government powers to enforce "decency." The Democratic party was also weakened by the defection of working-class spokesmen who criticized Jackson for not destroying all banks. Furthermore, Jackson's financial policies led to runaway inflation, followed by an abrupt depression.
E. The Rise and Fall of Van Buren
Jackson chose his friend and advisor, Martin Van Buren, as his successor. The Whigs, still unorganized, presented Van Buren with little opposition in the election of 1836, but Van Buren's inauguration coincided with the arrival of the Panic of 1837, for which the Democrats were blamed.
Van Buren felt no responsibility to save individuals and businesses that were going bankrupt, but he did want to save the government funds in the state banks by placing them in “independent subtreasuries.” It was a sign of the growing strength of the Whigs that they could frustrate Van Buren in this aim for three years. Economic historians today conclude that the Panic of 1837 was international in scope, reflecting complex changes in the world economy beyond the control of American policy makers, but the Whigs blamed Van Buren for the mess.
In 1840 the Whigs were fully organized and had learned the art of successful politicking. They nominated William Henry Harrison, a non-controversial war hero, and built his image as a common man who had been born in a log cabin. As his running mate, the Whigs picked John Tyler, a former Jacksonian, because he would attract some votes from states'-rights Democrats. Harrison and Tyler beat Van Buren, although the popular vote was close.
HEYDAY OF THE SECOND-PARTY SYSTEM
The election of 1840 signaled the emergence of a permanent two-party system in the United States. For the next decade, Whigs and Democrats evenly divided the electorate. Although there was much overlapping, both parties attracted distinct constituencies and offered voters a clear choice of programs. The Whigs stood for a “positive liberal state,” which meant active government involvement in society. The Democrats stood for a “negative liberal state,” which meant that the government should intervene only to destroy special privileges. Both parties shared a broad democratic ideology, but the Democrats were the party of the individual, while the Whigs were the party of the community.
CONCLUSION: TOCQUEVILLE’S WISDOM
Alexis de Tocqueville, the French visitor who made so many astute observations about life in Jacksonian America, praised most aspects of American democracy, but warned of disaster in the future if white males refused to extend the liberties they enjoyed to women, African Americans, and Indians. | https://essaydocs.org/toward-discussion-the-sport-of-politics.html | 21
83 | What Is A Central Bank?
A central bank manages a nation's currency, money supply and interest rates and acts as a lender of last resort to the country's banks. Many are also responsible for regulating and supervising their country's banks. Many are set up to be independent from their government, although their directors are usually appointed by that country's chief executive or leader of government.
Most countries have their own central bank, but the following nine have influence beyond their borders:
- U.S. Federal Reserve
- European Central Bank
- Bank of England
- Bank of Japan
- Bank of China
- Swiss National Bank
- Bank of Canada
- Reserve Bank of Australia
- Reserve Bank of New Zealand
The Swedish Central Bank, Sveriges Riksbank, says it is the oldest central bank in the world, with operations beginning in 1668.
The Bank of England was founded shortly after that, in 1694, as a private bank to act as banker to the government. It was nationalised in 1946 and then was granted independence from the government in 1997. It is responsible for setting monetary policy and interest rates, maintaining financial stability, and regulating the country's banks and insurance companies.
The goal of central banks is to ensure economic and financial stability, which usually means low inflation, low to moderate interest rates, high levels of employment and sustainable economic growth. They pursue these goals through monetary policy, implemented through public pronouncements of their intentions and through open market operations that manage the money supply and the level of interest rates.
The main mechanism available to central banks for implementing monetary policy is to influence the level of interest rates and the amount of money banks have available to lend.
An accommodative monetary policy means the central bank wants low interest rates and more money available in the economy to encourage economic activity. However, if the central bank determines that the economy is growing too fast and may ignite inflation, it tightens monetary policy by raising interest rates and restricting the amount of money available in the financial system.
In the U.S., for example, the Fed's monetary policymaking unit is the Federal Open Market Committee (FOMC). It sets the federal funds rate, the interest rate banks charge one another for overnight loans of the balances they hold on deposit at the Fed. This rate strongly influences the general level of interest rates throughout the economy, both short- and long-term.
Open Market Operations
In addition to setting key interest rates, central banks have other tools they use to keep interest rates within the desired range.
Through open market operations, they buy and sell securities with banks and other financial institutions. By selling securities, they drain money from the financial system, reducing the amount available for lending and therefore putting upward pressure on interest rates. Buying securities has the opposite effect, making more money available for banks to lend and therefore lowering rates.
Central banks also dictate the amount of money commercial banks must keep on deposit as cash reserves. Lowering these requirements leaves banks with more money to make loans, while raising them has the contrary effect.
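To illustrate the scale of this lever, the sketch below applies the simple textbook deposit-multiplier model, in which the maximum deposit expansion supported by a given reserve base is the reserves divided by the reserve ratio; the figures and the `simple_money_multiplier` helper are illustrative assumptions, not any central bank's actual framework.

```python
# Simple textbook deposit-multiplier sketch (illustrative figures only).
def simple_money_multiplier(reserve_ratio: float) -> float:
    """Maximum deposit expansion per unit of reserves under a fractional-reserve model."""
    if not 0 < reserve_ratio <= 1:
        raise ValueError("reserve ratio must be between 0 and 1")
    return 1 / reserve_ratio

base_reserves = 100.0  # hypothetical reserves, in billions

for ratio in (0.10, 0.05):  # a 10% requirement versus a lower 5% requirement
    potential_deposits = base_reserves * simple_money_multiplier(ratio)
    print(f"reserve ratio {ratio:.0%}: up to {potential_deposits:,.0f}bn of deposits")

# Lowering the requirement from 10% to 5% doubles the potential deposit
# expansion (1,000bn -> 2,000bn), matching the intuition that lower reserve
# requirements leave banks more money to lend.
```

In practice, lending also depends on loan demand and capital rules, so the multiplier should be read as an upper bound rather than a forecast.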
The Money Supply
Central banks also have some level of control over the money supply, which is the amount of money circulating in the economy. This can also affect interest rates. A plentiful supply of money generally means low rates, while less money available makes borrowing more expensive and therefore discourages business activity. The central bank can increase or reduce the size of the money supply by buying and selling securities.
Since the late 1980s, however, many central banks in the developed world have moved away from trying to manipulate the size of the money supply and instead focus on trying to target an optimum level of inflation, usually about 2% a year. According to the International Monetary Fund, which provides policy advice and technical assistance to countries, many low-income nations are also transitioning away from trying to target the size of the money supply in favour of targeting inflation.
In the wake of the global financial crisis in 2008, many central banks eased monetary policy by cutting short-term interest rates to zero. In some cases, they took the unprecedented step of lowering rates below zero, as in the case of the European Central Bank, the Swiss National Bank and others.
When that failed to achieve the desired result of raising inflation and boosting economic growth, many of them undertook "unconventional monetary policies" including quantitative easing. This policy consists of buying long-term bonds, both those issued by the government and those issued by private corporations, in order to try to lower long-term rates and to encourage investors to take on more risk by buying stocks instead of bonds.
A central bank manages a nation's currency, money supply and interest rates and serves as a lender of last resort to the country's banks. Many of them also regulate the banks in the country. Central banks try to achieve their goals, which usually consist of promoting economic growth, job creation and low inflation and interest rates, through their monetary policy. They conduct the policy through a combination of raising and lowering interest rates, and open market operations, buying and selling securities to increase or decrease the amount of money available for lending.
Senior Market Specialist
Russell Shor (MSTA, CFTe, MFTA) is a Senior Market Specialist at FXCM. He joined the firm in October 2017 and has an Honours Degree in Economics from the University of South Africa and holds the coveted Certified Financial Technician and Master of Financial Technical Analysis qualifications from the International Federation… | https://www.fxcm.com/markets/insights/central-banks/ | 21 |
99 | Climate change includes both global warming driven by human-induced emissions of greenhouse gases and the resulting large-scale shifts in weather patterns. Though there have been previous periods of climatic change, humans have since the mid-20th century had an unprecedented impact on Earth's climate system and have caused change on a global scale.
The largest driver of warming is the emission of gases that create a greenhouse effect, of which more than 90% are carbon dioxide (CO2) and methane. Fossil fuel burning (coal, oil, and natural gas) for energy consumption is the main source of these emissions, with additional contributions from agriculture, deforestation, and manufacturing. The human cause of climate change is not disputed by any scientific body of national or international standing. Temperature rise is accelerated or tempered by climate feedbacks, such as loss of sunlight-reflecting snow and ice cover, increased water vapour (a greenhouse gas itself), and changes to land and ocean carbon sinks.
Temperature rise on land is about twice the global average increase, leading to desert expansion and more common heat waves and wildfires. Temperature rise is also amplified in the Arctic, where it has contributed to melting permafrost, glacial retreat and sea ice loss. Warmer temperatures are increasing rates of evaporation, causing more intense storms and weather extremes. Impacts on ecosystems include the relocation or extinction of many species as their environment changes, most immediately in coral reefs, mountains, and the Arctic. Climate change threatens people with food insecurity, water scarcity, flooding, infectious diseases, extreme heat, economic losses, and displacement. These impacts have led the World Health Organization to call climate change the greatest threat to global health in the 21st century. Even if efforts to minimise future warming are successful, some effects will continue for centuries, including rising sea levels, rising ocean temperatures, and ocean acidification.
Many of these impacts are already felt at the current level of warming, which is about 1.2 °C (2.2 °F). The Intergovernmental Panel on Climate Change (IPCC) has issued a series of reports that project significant increases in these impacts as warming continues to 1.5 °C (2.7 °F) and beyond. Additional warming also increases the risk of triggering critical thresholds called tipping points. Responding to climate change involves mitigation and adaptation. Mitigation – limiting climate change – consists of reducing greenhouse gas emissions and removing them from the atmosphere; methods include the development and deployment of low-carbon energy sources such as wind and solar, a phase-out of coal, enhanced energy efficiency, reforestation, and forest preservation. Adaptation consists of adjusting to actual or expected climate, such as through improved coastline protection, better disaster management, assisted colonisation, and the development of more resistant crops. Adaptation alone cannot avert the risk of "severe, widespread and irreversible" impacts.
Under the 2015 Paris Agreement, nations collectively agreed to keep warming "well under 2.0 °C (3.6 °F)" through mitigation efforts. However, with pledges made under the Agreement, global warming would still reach about 2.8 °C (5.0 °F) by the end of the century. Limiting warming to 1.5 °C (2.7 °F) would require halving emissions by 2030 and achieving near-zero emissions by 2050.
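As a rough arithmetic sketch of what halving emissions by 2030 implies, the snippet below computes the constant year-on-year cut needed to halve emissions over a decade; the 2020 baseline and the assumption of a constant percentage decline are illustrative, not part of the Agreement.

```python
# Constant annual reduction rate needed to halve emissions over a decade
# (assumed 2020 baseline; purely illustrative arithmetic).
years = 10
target_fraction = 0.5  # emissions in 2030 as a fraction of the 2020 level

annual_rate = 1 - target_fraction ** (1 / years)
print(f"required cut: about {annual_rate:.1%} per year")  # ~6.7% per year
```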
Before the 1980s, when it was unclear whether warming by greenhouse gases would dominate aerosol-induced cooling, scientists often used the term inadvertent climate modification to refer to humankind's impact on the climate. In the 1980s, the terms global warming and climate change were popularised, the former referring only to increased surface warming, while the latter describes the full effect of greenhouse gases on the climate. Global warming became the most popular term after NASA climate scientist James Hansen used it in his 1988 testimony in the U.S. Senate. In the 2000s, the term climate change increased in popularity. Global warming usually refers to human-induced warming of the Earth system, whereas climate change can refer to natural as well as anthropogenic change. The two terms are often used interchangeably.
Various scientists, politicians and media figures have adopted the terms climate crisis or climate emergency to talk about climate change, while using global heating instead of global warming. The policy editor-in-chief of The Guardian explained that they included this language in their editorial guidelines "to ensure that we are being scientifically precise, while also communicating clearly with readers on this very important issue". Oxford Dictionary chose climate emergency as its word of the year in 2019 and defines the term as "a situation in which urgent action is required to reduce or halt climate change and avoid potentially irreversible environmental damage resulting from it".
Observed temperature rise
Multiple independently produced instrumental datasets show that the climate system is warming, with the 2009–2018 decade being 0.93 ± 0.07 °C (1.67 ± 0.13 °F) warmer than the pre-industrial baseline (1850–1900). Currently, surface temperatures are rising by about 0.2 °C (0.36 °F) per decade, with 2020 reaching a temperature of 1.2 °C (2.2 °F) above pre-industrial. Since 1950, the number of cold days and nights has decreased, and the number of warm days and nights has increased.
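A per-decade warming rate like the 0.2 °C figure is the kind of number obtained by fitting a linear trend to a series of annual temperature anomalies. The sketch below shows that calculation on an invented anomaly series; the values are not real observations.

```python
# Estimating a warming trend (°C per decade) from annual anomalies via a
# least-squares fit. The anomaly values below are synthetic, built around a
# 0.02 °C/yr trend plus noise, purely for illustration.
import numpy as np

years = np.arange(2001, 2021)
anomalies = 0.60 + 0.02 * (years - 2001) + np.random.default_rng(0).normal(0, 0.05, years.size)

slope_per_year = np.polyfit(years, anomalies, 1)[0]
print(f"trend: {slope_per_year * 10:.2f} °C per decade")  # close to the 0.2 used to build the series
```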
There was little net warming between the 18th century and the mid-19th century. Climate proxies, sources of climate information from natural archives such as trees and ice cores, show that natural variations offset the early effects of the Industrial Revolution. Thermometer records began to provide global coverage around 1850. Historical patterns of warming and cooling, like the Medieval Climate Anomaly and the Little Ice Age, did not occur at the same time across different regions, but temperatures may have reached as high as those of the late-20th century in a limited set of regions. There have been prehistorical episodes of global warming, such as the Paleocene–Eocene Thermal Maximum. However, the modern observed rise in temperature and CO2 concentrations has been so rapid that even abrupt geophysical events that took place in Earth's history do not approach current rates.
Evidence of warming from air temperature measurements are reinforced with a wide range of other observations. There has been an increase in the frequency and intensity of heavy precipitation, melting of snow and land ice, and increased atmospheric humidity. Flora and fauna are also behaving in a manner consistent with warming; for instance, plants are flowering earlier in spring. Another key indicator is the cooling of the upper atmosphere, which demonstrates that greenhouse gases are trapping heat near the Earth's surface and preventing it from radiating into space.
While locations of warming vary, the patterns are independent of where greenhouse gases are emitted, because the gases persist long enough to diffuse across the planet. Since the pre-industrial period, global average land temperatures have increased almost twice as fast as global average surface temperatures. This is because of the larger heat capacity of oceans, and because oceans lose more heat by evaporation. Over 90% of the additional energy in the climate system over the last 50 years has been stored in the ocean, with the remainder warming the atmosphere, melting ice, and warming the continents.
The Northern Hemisphere and the North Pole have warmed much faster than the South Pole and Southern Hemisphere. The Northern Hemisphere not only has much more land, but also more seasonal snow cover and sea ice, because of how the land masses are arranged around the Arctic Ocean. As these surfaces flip from reflecting a lot of light to being dark after the ice has melted, they start absorbing more heat. Localised black carbon deposits on snow and ice also contribute to Arctic warming. Arctic temperatures have increased and are predicted to continue to increase during this century at over twice the rate of the rest of the world. Melting of glaciers and ice sheets in the Arctic disrupts ocean circulation, including a weakened Gulf Stream, further changing the climate.
Drivers of recent temperature rise
The climate system experiences various cycles on its own which can last for years (such as the El Niño–Southern Oscillation), decades or even centuries. Other changes are caused by an imbalance of energy that is "external" to the climate system, but not always external to the Earth. Examples of external forcings include changes in the composition of the atmosphere (e.g. increased concentrations of greenhouse gases), solar luminosity, volcanic eruptions, and variations in the Earth's orbit around the Sun.
To determine the human contribution to climate change, known internal climate variability and natural external forcings need to be ruled out. A key approach is to determine unique "fingerprints" for all potential causes, then compare these fingerprints with observed patterns of climate change. For example, solar forcing can be ruled out as a major cause because its fingerprint is warming in the entire atmosphere, and only the lower atmosphere has warmed, as expected from greenhouse gases (which trap heat energy radiating from the surface). Attribution of recent climate change shows that the primary driver is elevated greenhouse gases, but that aerosols also have a strong effect.
The Earth absorbs sunlight, then radiates it as heat. Greenhouse gases in the atmosphere absorb and re-emit infrared radiation, slowing the rate at which it can pass through the atmosphere and escape into space. Before the Industrial Revolution, naturally occurring amounts of greenhouse gases caused the air near the surface to be about 33 °C (59 °F) warmer than it would have been in their absence. While water vapour (~50%) and clouds (~25%) are the biggest contributors to the greenhouse effect, they increase as a function of temperature and are therefore considered feedbacks. On the other hand, concentrations of gases such as CO2 (~20%), tropospheric ozone, CFCs and nitrous oxide are not temperature-dependent and are therefore considered external forcings.
Human activity since the Industrial Revolution, mainly extracting and burning fossil fuels (coal, oil, and natural gas), has increased the amount of greenhouse gases in the atmosphere, resulting in a radiative imbalance. In 2018, the concentrations of CO2 and methane had increased by about 45% and 160%, respectively, since 1750. These CO2 levels are much higher than they have been at any time during the last 800,000 years, the period for which reliable data have been collected from air trapped in ice cores. Less direct geological evidence indicates that CO2 values have not been this high for millions of years.
Global greenhouse gas emissions in 2018, excluding those from land use change, were equivalent to 52 billion tonnes of CO2. Of these emissions, 72% was actual CO2, 19% was methane, 6% was nitrous oxide, and 3% was fluorinated gases. CO2 emissions primarily come from burning fossil fuels to provide energy for transport, manufacturing, heating, and electricity. Additional CO2 emissions come from deforestation and industrial processes, which include the CO2 released by the chemical reactions for making cement, steel, aluminum, and fertiliser. Methane emissions come from livestock, manure, rice cultivation, landfills, wastewater, coal mining, as well as oil and gas extraction. Nitrous oxide emissions largely come from the microbial decomposition of inorganic and organic fertiliser. From a production standpoint, the primary sources of global greenhouse gas emissions are estimated as: electricity and heat (25%), agriculture and forestry (24%), industry and manufacturing (21%), transport (14%), and buildings (6%).
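Because the gas shares are quoted as percentages of a 52-billion-tonne CO2-equivalent total, they convert to absolute amounts with simple arithmetic, as the short sketch below illustrates.

```python
# Splitting the 2018 total of ~52 billion tonnes CO2-equivalent by gas,
# using the shares quoted in the text (illustrative bookkeeping only).
total_gtco2e = 52.0
shares = {"CO2": 0.72, "methane": 0.19, "nitrous oxide": 0.06, "fluorinated gases": 0.03}

for gas, share in shares.items():
    print(f"{gas}: {total_gtco2e * share:.1f} Gt CO2e")
# CO2 alone accounts for roughly 37 Gt CO2e of the 52 Gt total.
```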
Despite the contribution of deforestation to greenhouse gas emissions, the Earth's land surface, particularly its forests, remains a significant carbon sink for CO2. Natural processes, such as carbon fixation in the soil and photosynthesis, more than offset the greenhouse gas contributions from deforestation. The land-surface sink is estimated to remove about 29% of annual global CO2 emissions. The ocean also serves as a significant carbon sink via a two-step process. First, CO2 dissolves in the surface water. Afterwards, the ocean's overturning circulation distributes it deep into the ocean's interior, where it accumulates over time as part of the carbon cycle. Over the last two decades, the world's oceans have absorbed 20 to 30% of emitted CO2.
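Taken together, these sink estimates imply that only roughly half of emitted CO2 remains in the atmosphere each year. The sketch below does that bookkeeping, assuming a mid-range ocean uptake within the 20–30% band quoted.

```python
# Rough airborne-fraction bookkeeping from the sink shares quoted above.
land_sink = 0.29   # fraction of annual emissions absorbed on land
ocean_sink = 0.25  # assumed mid-point of the 20-30% range for the ocean

airborne_fraction = 1 - land_sink - ocean_sink
print(f"fraction of emissions remaining in the atmosphere: {airborne_fraction:.0%}")  # ~46%
```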
Aerosols and clouds
Air pollution, in the form of aerosols, not only puts a large burden on human health, but also affects the climate on a large scale. From 1961 to 1990, a gradual reduction in the amount of sunlight reaching the Earth's surface was observed, a phenomenon popularly known as global dimming, typically attributed to aerosols from biofuel and fossil fuel burning. Aerosol removal by precipitation gives tropospheric aerosols an atmospheric lifetime of only about a week, while stratospheric aerosols can remain in the atmosphere for a few years. Globally, aerosols have been declining since 1990, meaning that they no longer mask greenhouse gas warming as much.
In addition to their direct effects (scattering and absorbing solar radiation), aerosols have indirect effects on the Earth's radiation budget. Sulfate aerosols act as cloud condensation nuclei and thus lead to clouds that have more and smaller cloud droplets. These clouds reflect solar radiation more efficiently than clouds with fewer and larger droplets. This effect also causes droplets to be more uniform in size, which reduces the growth of raindrops and makes clouds more reflective to incoming sunlight. Indirect effects of aerosols are the largest uncertainty in radiative forcing.
While aerosols typically limit global warming by reflecting sunlight, black carbon in soot that falls on snow or ice can contribute to global warming. Not only does this increase the absorption of sunlight, it also increases melting and sea-level rise. Limiting new black carbon deposits in the Arctic could reduce global warming by 0.2 °C (0.36 °F) by 2050.
Changes of the land surface
Humans change the Earth's surface mainly to create more agricultural land. Today, agriculture takes up 34% of Earth's land area, while 26% is forests, and 30% is uninhabitable (glaciers, deserts, etc.). The amount of forested land continues to decrease, largely due to conversion to cropland in the tropics. This deforestation is the most significant aspect of land surface change affecting global warming. The main causes of deforestation are: permanent land-use change from forest to agricultural land producing products such as beef and palm oil (27%), logging to produce forestry/forest products (26%), short term shifting cultivation (24%), and wildfires (23%).
In addition to affecting greenhouse gas concentrations, land-use changes affect global warming through a variety of other chemical and physical mechanisms. Changing the type of vegetation in a region affects the local temperature, by changing how much of the sunlight gets reflected back into space (albedo), and how much heat is lost by evaporation. For instance, the change from a dark forest to grassland makes the surface lighter, causing it to reflect more sunlight. Deforestation can also contribute to changing temperatures by affecting the release of aerosols and other chemical compounds that influence clouds, and by changing wind patterns. In tropical and temperate areas the net effect is to produce significant warming, while at latitudes closer to the poles a gain of albedo (as forest is replaced by snow cover) leads to an overall cooling effect. Globally, these effects are estimated to have led to a slight cooling, dominated by an increase in surface albedo.
Solar and volcanic activity
Physical climate models are unable to reproduce the rapid warming observed in recent decades when taking into account only variations in solar output and volcanic activity. As the Sun is the Earth's primary energy source, changes in incoming sunlight directly affect the climate system. Solar irradiance has been measured directly by satellites, and indirect measurements are available from the early 1600s. There has been no upward trend in the amount of the Sun's energy reaching the Earth. Further evidence for greenhouse gases being the cause of recent climate change come from measurements showing the warming of the lower atmosphere (the troposphere), coupled with the cooling of the upper atmosphere (the stratosphere). If solar variations were responsible for the observed warming, warming of both the troposphere and the stratosphere would be expected, but that has not been the case.
Explosive volcanic eruptions represent the largest natural forcing over the industrial era. When the eruption is sufficiently strong (with sulfur dioxide reaching the stratosphere) sunlight can be partially blocked for a couple of years, with a temperature signal lasting about twice as long. In the industrial era, volcanic activity has had negligible impacts on global temperature trends. Present-day volcanic CO2 emissions are equivalent to less than 1% of current anthropogenic CO2 emissions.
Climate change feedback
The response of the climate system to an initial forcing is modified by feedbacks: increased by self-reinforcing feedbacks and reduced by balancing feedbacks. The main reinforcing feedbacks are the water-vapour feedback, the ice–albedo feedback, and probably the net effect of clouds. The primary balancing feedback to global temperature change is radiative cooling to space as infrared radiation in response to rising surface temperature. In addition to temperature feedbacks, there are feedbacks in the carbon cycle, such as the fertilizing effect of CO2 on plant growth. Uncertainty over feedbacks is the major reason why different climate models project different magnitudes of warming for a given amount of emissions.
As air gets warmer, it can hold more moisture. After initial warming due to emissions of greenhouse gases, the atmosphere will hold more water. As water vapour is a potent greenhouse gas, this further heats the atmosphere.
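This relationship can be made quantitative with a Magnus-type approximation to the saturation vapour pressure of water: each degree Celsius of warming raises the moisture-holding capacity of near-surface air by roughly 7%. The coefficients below are one commonly published fit, used here purely as an illustration.

```python
# Magnus-type approximation for saturation vapour pressure over water (hPa),
# used to illustrate how moisture capacity grows with temperature.
import math

def saturation_vapour_pressure(t_celsius: float) -> float:
    return 6.1094 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

for t in (15.0, 16.0):
    print(f"{t:.0f} °C: {saturation_vapour_pressure(t):.2f} hPa")

increase = saturation_vapour_pressure(16.0) / saturation_vapour_pressure(15.0) - 1
print(f"one degree of warming raises capacity by about {increase:.1%}")  # roughly 6-7%
```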
If cloud cover increases, more sunlight will be reflected back into space, cooling the planet. If clouds become higher and thinner, they act as an insulator, reflecting heat from below back downwards and warming the planet.
Overall, the net cloud feedback over the industrial era has probably exacerbated temperature rise.
The reduction of snow cover and sea ice in the Arctic reduces the albedo of the Earth's surface. More of the Sun's energy is now absorbed in these regions, contributing to amplification of Arctic temperature changes. Arctic amplification is also melting permafrost, which releases methane and CO2 into the atmosphere.
Around half of human-caused CO2 emissions have been absorbed by land plants and by the oceans. On land, elevated CO2 and an extended growing season have stimulated plant growth. Climate change increases droughts and heat waves that inhibit plant growth, which makes it uncertain whether this carbon sink will continue to grow in the future. Soils contain large quantities of carbon and may release some when they heat up. As more CO2 and heat are absorbed by the ocean, it acidifies, its circulation changes and phytoplankton takes up less carbon, decreasing the rate at which the ocean absorbs atmospheric carbon. Climate change can increase methane emissions from wetlands, marine and freshwater systems, and permafrost.
Future warming and the carbon budget
Future warming depends on the strengths of climate feedbacks and on emissions of greenhouse gases. The former are often estimated using various climate models, developed by multiple scientific institutions. A climate model is a representation of the physical, chemical, and biological processes that affect the climate system. Models include changes in the Earth's orbit, historical changes in the Sun's activity, and volcanic forcing. Computer models attempt to reproduce and predict the circulation of the oceans, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere. Models project different future temperature rises for given emissions of greenhouse gases; they also do not fully agree on the strength of different feedbacks on climate sensitivity and magnitude of inertia of the climate system.
The physical realism of models is tested by examining their ability to simulate contemporary or past climates. Past models have underestimated the rate of Arctic shrinkage and underestimated the rate of precipitation increase. Sea level rise since 1990 was underestimated in older models, but more recent models agree well with observations. The 2017 United States-published National Climate Assessment notes that "climate models may still be underestimating or missing relevant feedback processes".
Various Representative Concentration Pathways (RCPs) can be used as input for climate models: "a stringent mitigation scenario (RCP2.6), two intermediate scenarios (RCP4.5 and RCP6.0) and one scenario with very high [greenhouse gas] emissions (RCP8.5)". RCPs only look at concentrations of greenhouse gases, and so do not include the response of the carbon cycle. Climate model projections summarised in the IPCC Fifth Assessment Report indicate that, during the 21st century, the global surface temperature is likely to rise a further 0.3 to 1.7 °C (0.5 to 3.1 °F) in a moderate scenario, or as much as 2.6 to 4.8 °C (4.7 to 8.6 °F) in an extreme scenario, depending on the rate of future greenhouse gas emissions and on climate feedback effects.
A subset of climate models add societal factors to a simple physical climate model. These models simulate how population, economic growth, and energy use affect – and interact with – the physical climate. With this information, these models can produce scenarios of how greenhouse gas emissions may vary in the future. This output is then used as input for physical climate models to generate climate change projections. In some scenarios emissions continue to rise over the century, while others have reduced emissions. Fossil fuel resources are too abundant for shortages to be relied on to limit carbon emissions in the 21st century. Emissions scenarios can be combined with modelling of the carbon cycle to predict how atmospheric concentrations of greenhouse gases might change in the future. According to these combined models, by 2100 the atmospheric concentration of CO2 could be as low as 380 or as high as 1400 ppm, depending on the socioeconomic scenario and the mitigation scenario.
The remaining carbon emissions budget is determined by modelling the carbon cycle and the climate sensitivity to greenhouse gases. According to the IPCC, global warming can be kept below 1.5 °C (2.7 °F) with a two-thirds chance if emissions after 2018 do not exceed 420 or 570 gigatonnes of CO2, depending on exactly how the global temperature is defined. This amount corresponds to 10 to 13 years of current emissions. There are high uncertainties about the budget; for instance, it may be 100 gigatonnes of CO2 smaller due to methane release from permafrost and wetlands.
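The "years of current emissions" figure follows from dividing the remaining budget by the annual emission rate. The sketch below assumes a round figure of about 42 gigatonnes of CO2 per year, which is consistent with, but not identical to, the assumptions behind the quoted 10-to-13-year range.

```python
# Converting a remaining carbon budget into years of current emissions.
# The ~42 Gt CO2/yr emission rate is an assumed round figure, not an official value.
annual_emissions_gt = 42.0
for budget_gt in (420.0, 570.0):
    years_left = budget_gt / annual_emissions_gt
    print(f"{budget_gt:.0f} Gt budget ~ {years_left:.1f} years at current rates")
# Roughly 10 and 13-14 years; the 10-13-year range quoted in the text reflects
# slightly different assumptions about the annual emission rate.
```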
The environmental effects of climate change are broad and far-reaching, affecting oceans, ice, and weather. Changes may occur gradually or rapidly. Evidence for these effects comes from studying climate change in the past, from modelling, and from modern observations. Since the 1950s, droughts and heat waves have appeared simultaneously with increasing frequency. Extremely wet or dry events within the monsoon period have increased in India and East Asia. The maximum rainfall and wind speed from hurricanes and typhoons is likely increasing. Frequency of tropical cyclones has not increased as a result of climate change. While tornado and severe thunderstorm frequency has not increased as a result of climate change, the areas affected by such phenomena may be changing.
Global sea level is rising as a consequence of glacial melt, melt of the ice sheets in Greenland and Antarctica, and thermal expansion. Between 1993 and 2017, the rise increased over time, averaging 3.1 ± 0.3 mm per year. Over the 21st century, the IPCC projects that in a very high emissions scenario the sea level could rise by 61–110 cm. Increased ocean warmth is undermining and threatening to unplug Antarctic glacier outlets, risking a large melt of the ice sheet and the possibility of a 2-meter sea level rise by 2100 under high emissions.
Climate change has led to decades of shrinking and thinning of the Arctic sea ice, making it vulnerable to atmospheric anomalies. While ice-free summers are expected to be rare at 1.5 °C (2.7 °F) of warming, they are set to occur once every three to ten years at a warming level of 2.0 °C (3.6 °F).
Higher atmospheric CO2 concentrations have led to changes in ocean chemistry. An increase in dissolved CO2 is causing oceans to acidify. In addition, oxygen levels are decreasing as oxygen is less soluble in warmer water, with hypoxic dead zones expanding as a result of algal blooms stimulated by higher temperatures, higher CO2 levels, ocean deoxygenation, and eutrophication.
Tipping points and long-term impacts
The greater the amount of global warming, the greater the risk of passing through 'tipping points', thresholds beyond which certain impacts can no longer be avoided even if temperatures are reduced. An example is the collapse of West Antarctic and Greenland ice sheets, where a temperature rise of 1.5 to 2.0 °C (2.7 to 3.6 °F) may commit the ice sheets to melt, although the time scale of melt is uncertain and depends on future warming. Some large-scale changes could occur over a short time period, such as a collapse of the Atlantic Meridional Overturning Circulation, which would trigger major climate changes in the North Atlantic, Europe, and North America.
The long-term effects of climate change include further ice melt, ocean warming, sea level rise, and ocean acidification. On the timescale of centuries to millennia, the magnitude of climate change will be determined primarily by anthropogenic CO2 emissions. This is due to CO2's long atmospheric lifetime. Oceanic CO2 uptake is slow enough that ocean acidification will continue for hundreds to thousands of years. These emissions are estimated to have prolonged the current interglacial period by at least 100,000 years. Sea level rise will continue over many centuries, with an estimated rise of 2.3 metres per degree Celsius (4.2 ft/°F) after 2000 years.
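The metric and imperial figures quoted above (2.3 metres per °C and 4.2 ft per °F) are the same rate expressed in different units, as the short conversion check below shows.

```python
# Unit-conversion check: 2.3 metres per degree Celsius expressed in feet per degree Fahrenheit.
metres_per_degC = 2.3
feet_per_metre = 3.28084
degC_per_degF = 5 / 9  # a 1 °F change is 5/9 of a °C change

feet_per_degF = metres_per_degC * feet_per_metre * degC_per_degF
print(f"{feet_per_degF:.1f} ft per °F")  # ~4.2
```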
Nature and wildlife
Recent warming has driven many terrestrial and freshwater species poleward and towards higher altitudes. Higher atmospheric CO2 levels and an extended growing season have resulted in global greening, whereas heatwaves and drought have reduced ecosystem productivity in some regions. The future balance of these opposing effects is unclear. Climate change has contributed to the expansion of drier climate zones, such as the expansion of deserts in the subtropics. The size and speed of global warming is making abrupt changes in ecosystems more likely. Overall, it is expected that climate change will result in the extinction of many species.
The oceans have heated more slowly than the land, but plants and animals in the ocean have migrated towards the colder poles faster than species on land. Just as on land, heat waves in the ocean occur more frequently due to climate change, with harmful effects found on a wide range of organisms such as corals, kelp, and seabirds. Ocean acidification is impacting organisms that produce shells and skeletons, such as mussels and barnacles, as well as coral reefs; coral reefs have seen extensive bleaching after heat waves. Harmful algal blooms enhanced by climate change and eutrophication cause anoxia, disruption of food webs, and massive large-scale mortality of marine life. Coastal ecosystems are under particular stress, with almost half of wetlands having disappeared as a consequence of climate change and other human impacts.
The effects of climate change on humans, mostly due to warming and shifts in precipitation, have been detected worldwide. Regional impacts of climate change are now observable on all continents and across ocean regions, with low-latitude, less developed areas facing the greatest risk. Continued emission of greenhouse gases will lead to further warming and long-lasting changes in the climate system, with potentially “severe, pervasive and irreversible impacts” for both people and ecosystems. Climate change risks are unevenly distributed, but are generally greater for disadvantaged people in developing and developed countries.
Food and health
Health impacts include both the direct effects of extreme weather, leading to injury and loss of life, as well as indirect effects, such as undernutrition brought on by crop failures. Various infectious diseases are more easily transmitted in a warmer climate, such as dengue fever, which affects children most severely, and malaria. Young children are the most vulnerable to food shortages, and together with older people, to extreme heat. The World Health Organization (WHO) has estimated that between 2030 and 2050, climate change is expected to cause approximately 250,000 additional deaths per year from heat exposure in elderly people, increases in diarrheal disease, malaria, dengue, coastal flooding, and childhood undernutrition. Over 500,000 additional adult deaths are projected yearly by 2050 due to reductions in food availability and quality. Other major health risks associated with climate change include air and water quality. The WHO has classified human impacts from climate change as the greatest threat to global health in the 21st century.
Climate change is affecting food security and has caused reduction in global mean yields of maize, wheat, and soybeans between 1981 and 2010. Future warming could further reduce global yields of major crops. Crop production will probably be negatively affected in low-latitude countries, while effects at northern latitudes may be positive or negative. Up to an additional 183 million people worldwide, particularly those with lower incomes, are at risk of hunger as a consequence of these impacts. The effects of warming on the oceans impact fish stocks, with a global decline in the maximum catch potential. Only polar stocks are showing an increased potential. Regions dependent on glacier water, regions that are already dry, and small islands are at increased risk of water stress due to climate change.
Economic damages due to climate change have been underestimated, and may be severe, with the probability of disastrous tail-risk events being nontrivial. Climate change has likely already increased global economic inequality, and is projected to continue doing so. Most of the severe impacts are expected in sub-Saharan Africa and South-East Asia, where existing poverty is already exacerbated. The World Bank estimates that climate change could drive over 120 million people into poverty by 2030. Current inequalities between men and women, between rich and poor, and between different ethnicities have been observed to worsen as a consequence of climate variability and climate change. An expert elicitation concluded that the role of climate change in armed conflict has been small compared to factors such as socio-economic inequality and state capabilities, but that future warming will bring increasing risks.
Low-lying islands and coastal communities are threatened through hazards posed by sea level rise, such as flooding and permanent submergence. This could lead to statelessness for populations in island nations, such as the Maldives and Tuvalu. In some regions, rise in temperature and humidity may be too severe for humans to adapt to. With worst-case climate change, models project that almost one-third of humanity might live in extremely hot and uninhabitable climates, similar to the current climate found mainly in the Sahara. These factors, plus weather extremes, can drive environmental migration, both within and between countries. Displacement of people is expected to increase as a consequence of more frequent extreme weather, sea level rise, and conflict arising from increased competition over natural resources. Climate change may also increase vulnerabilities, leading to "trapped populations" in some areas who are not able to move due to a lack of resources.
Responses: mitigation and adaptation
Climate change impacts can be mitigated by reducing greenhouse gas emissions and by enhancing sinks that absorb greenhouse gases from the atmosphere. In order to limit global warming to less than 1.5 °C with a high likelihood of success, global greenhouse gas emissions need to be net-zero by 2050, or by 2070 with a 2 °C target. This requires far-reaching, systemic changes on an unprecedented scale in energy, land, cities, transport, buildings, and industry. Scenarios that limit global warming to 1.5 °C often describe reaching net negative emissions at some point. To make progress towards a goal of limiting warming to 2 °C, the United Nations Environment Programme estimates that, within the next decade, countries need to triple the amount of reductions they have committed to in their current Paris Agreement pledges; an even greater level of reduction is required to meet the 1.5 °C goal.
Although there is no single pathway to limit global warming to 1.5 or 2.0 °C (2.7 or 3.6 °F), most scenarios and strategies see a major increase in the use of renewable energy in combination with increased energy efficiency measures to generate the needed greenhouse gas reductions. To reduce pressures on ecosystems and enhance their carbon sequestration capabilities, changes would also be necessary in sectors such as forestry and agriculture.
Other approaches to mitigating climate change entail a higher level of risk. Scenarios that limit global warming to 1.5 °C typically project the large-scale use of carbon dioxide removal methods over the 21st century. There are concerns, though, about over-reliance on these technologies, as well as possible environmental impacts. Solar radiation management (SRM) methods have also been explored as a possible supplement to deep reductions in emissions. However, SRM would raise significant ethical and legal issues, and the risks are poorly understood.
Long-term decarbonisation scenarios point to rapid and significant investment in renewable energy, which includes solar and wind power, bioenergy, geothermal energy, and hydropower. Fossil fuels accounted for 80% of the world's energy in 2018, while the remaining share was split between nuclear power and renewables; that mix is projected to change significantly over the next 30 years. Solar and wind have seen substantial growth and progress over the last few years; photovoltaic solar and onshore wind are the cheapest forms of adding new power generation capacity in most countries. Renewables represented 75% of all new electricity generation installed in 2019, with solar and wind constituting nearly all of that amount. Meanwhile, nuclear power costs are increasing amidst stagnant power share, so that nuclear power generation is now several times more expensive per megawatt-hour than wind and solar.
To achieve carbon neutrality by 2050, renewable energy would become the dominant form of electricity generation, rising to 85% or more by 2050 in some scenarios. The use of electricity for other needs, such as heating, would rise to the point where electricity becomes the largest form of overall energy supply. Investment in coal would be eliminated and coal use nearly phased out by 2050.
In transport, scenarios envision sharp increases in the market share of electric vehicles, and low carbon fuel substitution for other transportation modes like shipping. Building heating would be increasingly decarbonized with the use of technologies like heat pumps.
There are obstacles to the continued rapid development of renewables. For solar and wind power, a key challenge is their intermittency and seasonal variability. Traditionally, hydro dams with reservoirs and conventional power plants have been used when variable energy production is low. Intermittency can further be countered by demand flexibility, and by expanding battery storage and long-distance transmission to smooth variability of renewable output across wider geographic areas. Some environmental and land use concerns have been associated with large solar and wind projects, while bioenergy is often not carbon neutral and may have negative consequences for food security. Hydropower growth has been slowing and is set to decline further due to concerns about social and environmental impacts.
Clean energy improves human health by minimizing climate change and has the near-term benefit of reducing air pollution deaths, which were estimated at 7 million annually in 2016. Meeting the Paris Agreement goals that limit warming to a 2 °C increase could save about a million of those lives per year by 2050, whereas limiting global warming to 1.5 °C could save millions and simultaneously increase energy security and reduce poverty.
Reducing energy demand is another major feature of decarbonisation scenarios and plans. In addition to directly reducing emissions, energy demand reduction measures provide more flexibility for low carbon energy development, aid in the management of the electricity grid, and minimise carbon-intensive infrastructure development. Over the next few decades, major increases in energy efficiency investment will be required to achieve these reductions, comparable to the expected level of investment in renewable energy. However, several COVID-19 related changes in energy use patterns, energy efficiency investments, and funding have made forecasts for this decade more difficult and uncertain.
Efficiency strategies to reduce energy demand vary by sector. In transport, gains can be made by switching passengers and freight to more efficient travel modes, such as buses and trains, and increasing the use of electric vehicles. Industrial strategies to reduce energy demand include increasing the energy efficiency of heating systems and motors, designing less energy-intensive products, and increasing product lifetimes. In the building sector the focus is on better design of new buildings, and incorporating higher levels of energy efficiency in retrofitting techniques for existing structures. In addition to decarbonizing energy use, the use of technologies like heat pumps can also increase building energy efficiency.
Agriculture and industry
Agriculture and forestry face a triple challenge of limiting greenhouse gas emissions, preventing the further conversion of forests to agricultural land, and meeting increases in world food demand. A suite of actions could reduce agriculture/forestry-based greenhouse gas emissions by 66% from 2010 levels by reducing growth in demand for food and other agricultural products, increasing land productivity, protecting and restoring forests, and reducing greenhouse gas emissions from agricultural production.
In addition to the industrial demand reduction measures mentioned earlier, steel and cement production, which together are responsible for about 13% of industrial CO2 emissions, present particular challenges. In these industries, carbon-intensive materials such as coke and lime play an integral role in the production process. Reducing CO2 emissions here requires research-driven efforts aimed at decarbonizing the chemistry of these processes.
Natural carbon sinks can be enhanced to sequester significantly larger amounts of CO2 beyond naturally occurring levels. Reforestation and tree planting on non-forest lands are among the most mature sequestration techniques, although they raise food security concerns. Soil carbon sequestration and coastal carbon sequestration are less understood options. The feasibility of land-based negative emissions methods for mitigation is uncertain in models; the IPCC has described mitigation strategies based on them as risky.
Where energy production or CO2-intensive heavy industries continue to produce waste CO2, the gas can be captured and stored instead of being released to the atmosphere. Although its current use is limited in scale and expensive, carbon capture and storage (CCS) may be able to play a significant role in limiting CO2 emissions by mid-century. This technique, in combination with bio-energy production (BECCS), can result in net-negative emissions, where the amount of greenhouse gases released into the atmosphere is less than the amount sequestered, or stored, in the bio-energy fuel being grown. It remains highly uncertain whether carbon dioxide removal techniques such as BECCS will be able to play a large role in limiting warming to 1.5 °C, and policy decisions that rely on carbon dioxide removal increase the risk of global warming rising beyond international goals.
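The net-negative accounting behind BECCS can be illustrated with a toy balance. The numbers below are placeholders chosen only to show the sign convention; they are not estimates from this article and ignore supply-chain emissions.

```python
# Toy carbon balance for BECCS (placeholder numbers, not from this article).
# Net change to the atmosphere = CO2 released unabated - CO2 absorbed by the
# biomass while it grew; captured CO2 goes to geological storage instead.

co2_absorbed_by_biomass = 1.0   # t CO2 taken up while the fuel crop grew
co2_released_unabated = 0.2     # t CO2 escaping capture during combustion
co2_captured_and_stored = 0.8   # t CO2 sent to geological storage

net_to_atmosphere = co2_released_unabated - co2_absorbed_by_biomass
print(f"Net atmospheric change: {net_to_atmosphere:+.1f} t CO2 "
      "(negative means net removal)")
```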
Adaptation is "the process of adjustment to current or expected changes in climate and its effects". Without additional mitigation, adaptation cannot avert the risk of "severe, widespread and irreversible" impacts. More severe climate change requires more transformative adaptation, which can be prohibitively expensive. The capacity and potential for humans to adapt, called adaptive capacity, is unevenly distributed across different regions and populations, and developing countries generally have less. The first two decades of the 21st century saw an increase in adaptive capacity in most low- and middle-income countries with improved access to basic sanitation and electricity, but progress is slow. Many countries have implemented adaptation policies. However, there is a considerable gap between necessary and available finance.
Adaptation to sea level rise consists of avoiding at-risk areas, learning to live with increased flooding, protection and, if needed, the more transformative option of managed retreat. There are economic barriers for moderation of dangerous heat impact: avoiding strenuous work or employing private air conditioning is not possible for everybody. In agriculture, adaptation options include a switch to more sustainable diets, diversification, erosion control and genetic improvements for increased tolerance to a changing climate. Insurance allows for risk-sharing, but is often difficult to obtain for people on lower incomes. Education, migration and early warning systems can reduce climate vulnerability.
Ecosystems adapt to climate change, a process that can be supported by human intervention. Possible responses include increasing connectivity between ecosystems, allowing species to migrate to more favourable climate conditions and species relocation. Protection and restoration of natural and semi-natural areas helps build resilience, making it easier for ecosystems to adapt. Many of the actions that promote adaptation in ecosystems also help humans adapt via ecosystem-based adaptation. For instance, restoration of natural fire regimes makes catastrophic fires less likely, and reduces human exposure. Giving rivers more space allows for more water storage in the natural system, reducing flood risk. Restored forests act as a carbon sink, but planting trees in unsuitable regions can exacerbate climate impacts.
There are some synergies and trade-offs between adaptation and mitigation. Adaptation measures often offer short-term benefits, whereas mitigation has longer-term benefits. Increased use of air conditioning allows people to better cope with heat, but increases energy demand. Compact urban development may lead to reduced emissions from transport and construction. Simultaneously, it may increase the urban heat island effect, leading to higher temperatures and increased exposure. Increased food productivity has large benefits for both adaptation and mitigation.
Policies and politics
Countries that are most vulnerable to climate change have typically been responsible for a small share of global emissions, which raises questions about justice and fairness. Climate change is strongly linked to sustainable development. Limiting global warming makes it easier to achieve sustainable development goals, such as eradicating poverty and reducing inequalities. The connection between the two is recognised in the Sustainable Development Goal 13 which is to "Take urgent action to combat climate change and its impacts". The goals on food, clean water and ecosystem protections have synergies with climate mitigation.
The geopolitics of climate change is complex and has often been framed as a free-rider problem, in which all countries benefit from mitigation done by other countries, but individual countries would lose from investing in a transition to a low-carbon economy themselves. This framing has been challenged. For instance, the benefits in terms of public health and local environmental improvements of coal phase-out exceed the costs in almost all regions. Another argument against this framing is that net importers of fossil fuels win economically from transitioning, causing net exporters to face stranded assets: fossil fuels they cannot sell.
A wide range of policies, regulations and laws are being used to reduce greenhouse gases. Carbon pricing mechanisms include carbon taxes and emissions trading systems. As of 2019, carbon pricing covers about 20% of global greenhouse gas emissions. Direct global fossil fuel subsidies reached $319 billion in 2017, and $5.2 trillion when indirect costs such as air pollution are priced in. Ending these can cause a 28% reduction in global carbon emissions and a 46% reduction in air pollution deaths. Subsidies could also be redirected to support the transition to clean energy. More prescriptive methods that can reduce greenhouse gases include vehicle efficiency standards, renewable fuel standards, and air pollution regulations on heavy industry. Renewable portfolio standards have been enacted in several countries requiring utilities to increase the percentage of electricity they generate from renewable sources.
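As a minimal illustration of how a carbon price feeds through to fuel costs: the tax rate below is an arbitrary example and the petrol emission factor of roughly 2.3 kg CO2 per litre is an approximate figure, neither taken from this article.

```python
# Illustrative carbon-tax pass-through (assumed example values, not from this article).

carbon_tax_per_tonne_co2 = 50.0    # assumed tax level, $ per tonne of CO2
petrol_kg_co2_per_litre = 2.3      # approximate combustion emission factor (assumption)

added_cost_per_litre = carbon_tax_per_tonne_co2 * petrol_kg_co2_per_litre / 1000
print(f"A ${carbon_tax_per_tonne_co2:.0f}/tCO2 price adds about "
      f"${added_cost_per_litre:.2f} per litre of petrol")
```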
As the use of fossil fuels is reduced, there are Just Transition considerations involving the social and economic challenges that arise. An example is the employment of workers in the affected industries, along with the well-being of the broader communities involved. Climate justice considerations, such as those facing indigenous populations in the Arctic, are another important aspect of mitigation policies.
International climate agreements
Nearly all countries in the world are parties to the 1994 United Nations Framework Convention on Climate Change (UNFCCC). The objective of the UNFCCC is to prevent dangerous human interference with the climate system. As stated in the convention, this requires that greenhouse gas concentrations are stabilised in the atmosphere at a level where ecosystems can adapt naturally to climate change, food production is not threatened, and economic development can be sustained. Global emissions have risen since the signing of the UNFCCC, which does not actually restrict emissions but rather provides a framework for protocols that do. Its yearly conferences are the stage of global negotiations.
The 1997 Kyoto Protocol extended the UNFCCC and included legally binding commitments for most developed countries to limit their emissions. During the Kyoto Protocol negotiations, the G77 (representing developing countries) pushed for a mandate requiring developed countries to "[take] the lead" in reducing their emissions, since developed countries had contributed most to the accumulation of greenhouse gases in the atmosphere, per-capita emissions were still relatively low in developing countries, and the emissions of developing countries would grow to meet their development needs.
The 2009 Copenhagen Accord has been widely portrayed as disappointing because of its low goals, and was rejected by poorer nations including the G77. Associated parties aimed to limit the increase in global mean temperature to below 2.0 °C (3.6 °F). The Accord set the goal of sending $100 billion per year to developing countries in assistance for mitigation and adaptation by 2020, and proposed the founding of the Green Climate Fund. As of 2020, the fund has failed to reach its expected target, and risks a shrinkage in its funding.
In 2015 all UN countries negotiated the Paris Agreement, which aims to keep global warming well below 2.0 °C (3.6 °F) and contains an aspirational goal of keeping warming under 1.5 °C. The agreement replaced the Kyoto Protocol. Unlike Kyoto, no binding emission targets were set in the Paris Agreement. Instead, the procedure of regularly setting ever more ambitious goals and reevaluating these goals every five years has been made binding. The Paris Agreement reiterated that developing countries must be financially supported. As of February 2021, 194 states and the European Union have signed the treaty and 188 states and the EU have ratified or acceded to the agreement.
The 1987 Montreal Protocol, an international agreement to stop emitting ozone-depleting gases, may have been more effective at curbing greenhouse gas emissions than the Kyoto Protocol, which was specifically designed to do so. The 2016 Kigali Amendment to the Montreal Protocol aims to reduce the emissions of hydrofluorocarbons, a group of powerful greenhouse gases which served as a replacement for banned ozone-depleting gases. This made the Montreal Protocol a stronger agreement against climate change.
In 2019, the United Kingdom's parliament became the first national legislature in the world to officially declare a climate emergency. Other countries and jurisdictions followed suit. In November 2019 the European Parliament declared a "climate and environmental emergency", and the European Commission presented its European Green Deal with the goal of making the EU carbon-neutral by 2050. Major countries in Asia have made similar pledges: South Korea and Japan have committed to become carbon neutral by 2050, and China by 2060.
As of 2021, based on information from 48 NDCs representing 40% of the parties to the Paris Agreement, estimated total greenhouse gas emissions will be only 0.5% lower than 2010 levels, far short of the 45% or 25% reductions needed to limit global warming to 1.5 °C or 2 °C, respectively.
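The gap described above can be checked with simple arithmetic using only the percentages quoted in the text; the 2010 baseline is left as a unitless 1.0 rather than an absolute emissions figure.

```python
# Arithmetic check of the NDC gap using only the percentages quoted above.
# All values are fractions of 2010 emissions (baseline = 1.0), not absolute figures.

baseline_2010 = 1.0
pledged_level = baseline_2010 * (1 - 0.005)   # 0.5% below 2010 levels
needed_1p5c = baseline_2010 * (1 - 0.45)      # 45% cut consistent with 1.5 C
needed_2c = baseline_2010 * (1 - 0.25)        # 25% cut consistent with 2 C

print(f"Pledged level: {pledged_level:.3f} x 2010 emissions")
print(f"Needed for 1.5 C: {needed_1p5c:.2f} x; for 2 C: {needed_2c:.2f} x")
print(f"Shortfall vs the 1.5 C pathway: {pledged_level - needed_1p5c:.3f} x 2010 emissions")
```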
Scientific consensus and society
There is an overwhelming scientific consensus that global surface temperatures have increased in recent decades and that the trend is caused mainly by human-induced emissions of greenhouse gases, with 90–100% (depending on the exact question, timing and sampling methodology) of publishing climate scientists agreeing. As of 2019, agreement among publishing research scientists on anthropogenic global warming had reached 100%. No scientific body of national or international standing disagrees with this view. Consensus has further developed that some form of action should be taken to protect people against the impacts of climate change, and national science academies have called on world leaders to cut global emissions.
Scientific discussion takes place in peer-reviewed journal articles, which scientists assess every few years in the Intergovernmental Panel on Climate Change reports. In 2013, the IPCC Fifth Assessment Report stated that "it is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century". Their 2018 report expressed the scientific consensus as: "human influence on climate has been the dominant cause of observed warming since the mid-20th century". Scientists have issued two warnings to humanity, in 2017 and 2019, expressing concern about the current trajectory of potentially catastrophic climate change, and about untold human suffering as a consequence.
Climate change came to international public attention in the late 1980s. Due to confusing media coverage in the early 1990s, understanding was often confounded by conflation with other environmental issues like ozone depletion. In popular culture, the first movie to reach a mass public on the topic was The Day After Tomorrow in 2004, followed a few years later by the Al Gore documentary An Inconvenient Truth. Books, stories and films about climate change fall under the genre of climate fiction.
Significant regional differences exist in both public concern for and public understanding of climate change. In 2015, a median of 54% of respondents considered it "a very serious problem", but Americans and Chinese (whose economies are responsible for the greatest annual CO2 emissions) were among the least concerned. A 2018 survey found increased concern globally on the issue compared to 2013 in most countries. More highly educated people, and in some countries, women and younger people were more likely to see climate change as a serious threat. In the United States, there was a large partisan gap in opinion.
Denial and misinformation
Public debate about climate change has been strongly affected by climate change denial and misinformation, which originated in the United States and has since spread to other countries, particularly Canada and Australia. The actors behind climate change denial form a well-funded and relatively coordinated coalition of fossil fuel companies, industry groups, conservative think tanks, and contrarian scientists. As the tobacco industry did before them, the main strategy of these groups has been to manufacture doubt about scientific data and results. Many who deny, dismiss, or hold unwarranted doubt about the scientific consensus on anthropogenic climate change are labelled as "climate change skeptics", which several scientists have noted is a misnomer.
There are different variants of climate denial: some deny that warming takes place at all, some acknowledge warming but attribute it to natural influences, and some minimise the negative impacts of climate change. Manufacturing uncertainty about the science later developed into a manufactured controversy: creating the belief that there is significant uncertainty about climate change within the scientific community in order to delay policy changes. Strategies to promote these ideas include criticism of scientific institutions, and questioning the motives of individual scientists. An echo chamber of climate-denying blogs and media has further fomented misunderstanding of climate change.
Protest and litigation
Climate protests have risen in popularity in the 2010s in such forms as public demonstrations, fossil fuel divestment, and lawsuits. Prominent recent demonstrations include the school strike for climate, and civil disobedience. In the school strike, youth across the globe have protested by skipping school, inspired by Swedish teenager Greta Thunberg. Mass civil disobedience actions by groups like Extinction Rebellion have protested by causing disruption. Litigation is increasingly used as a tool to strengthen climate action, with many lawsuits targeting governments to demand that they take ambitious action or enforce existing laws regarding climate change. Lawsuits against fossil-fuel companies, from activists, shareholders and investors, generally seek compensation for loss and damage.
To explain why Earth's temperature was higher than expected considering only incoming solar radiation, Joseph Fourier proposed the existence of a greenhouse effect. Solar energy reaches the surface as the atmosphere is transparent to solar radiation. The warmed surface emits infrared radiation, but the atmosphere is relatively opaque to infrared and slows the emission of energy, warming the planet. Starting in 1859, John Tyndall established that nitrogen and oxygen (99% of dry air) are transparent to infrared, but water vapour and traces of some gases (significantly methane and carbon dioxide) both absorb infrared and, when warmed, emit infrared radiation. Changing concentrations of these gases could have caused "all the mutations of climate which the researches of geologists reveal" including ice ages.
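The gap Fourier identified can be illustrated with a simple zero-dimensional energy balance. The sketch below is not from this article; the solar constant, planetary albedo, and observed mean surface temperature are rounded textbook values used only for illustration.

```python
# Illustrative zero-dimensional energy balance (not from this article).
# Without an infrared-absorbing atmosphere, Earth's surface would sit near the
# effective emission temperature; the ~33 K difference is the greenhouse effect.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361  # incoming solar radiation, W m^-2 (rounded)
ALBEDO = 0.30          # fraction of sunlight reflected (rounded)

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4      # averaged over the whole sphere
t_effective = (absorbed / SIGMA) ** 0.25          # about 255 K, roughly -18 C
t_observed = 288                                  # observed global mean, about 15 C

print(f"Absorbed solar flux: {absorbed:.0f} W/m^2")
print(f"Effective emission temperature: {t_effective:.1f} K "
      f"({t_effective - 273.15:.1f} C)")
print(f"Greenhouse warming of the surface: {t_observed - t_effective:.1f} K")
```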
Svante Arrhenius noted that water vapour in air continuously varied, but carbon dioxide (CO2) was determined by long-term geological processes. At the end of an ice age, warming from increased CO2 would increase the amount of water vapour, amplifying its effect in a feedback process. In 1896, he published the first climate model of its kind, showing that halving of CO2 could have produced the drop in temperature initiating the ice age. Arrhenius calculated the temperature increase expected from doubling CO2 to be around 5–6 °C (9.0–10.8 °F). Other scientists were initially sceptical and believed the greenhouse effect to be saturated so that adding more CO2 would make no difference. They thought climate would be self-regulating. From 1938 Guy Stewart Callendar published evidence that climate was warming and CO2 levels increasing, but his calculations met the same objections.
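Arrhenius's logarithmic relationship between CO2 and temperature is often approximated today with a simplified forcing expression. The sketch below is an illustration under assumed values, not the article's method: the forcing coefficient (about 5.35 W/m² per natural logarithm of the concentration ratio) and the climate sensitivity parameter (about 0.8 K per W/m²) are commonly cited approximations introduced here as assumptions.

```python
import math

# Rough sketch of the logarithmic CO2-forcing relationship (an assumption,
# not taken from this article): dF = 5.35 * ln(C / C0) in W/m^2.
def co2_forcing(c_ppm, c0_ppm=280):
    return 5.35 * math.log(c_ppm / c0_ppm)

# Equilibrium warming for an assumed sensitivity of ~0.8 K per (W/m^2).
def warming(c_ppm, sensitivity=0.8):
    return sensitivity * co2_forcing(c_ppm)

for c in (140, 280, 560):   # halving, pre-industrial, doubling
    print(f"{c} ppm: forcing {co2_forcing(c):+.1f} W/m^2, "
          f"warming {warming(c):+.1f} K")
```

With these assumed values a doubling of CO2 yields roughly 3 K of warming, lower than Arrhenius's original 5–6 °C estimate.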
In the 1950s, Gilbert Plass created a detailed computer model that included different atmospheric layers and the infrared spectrum and found that increasing CO2 levels would cause warming. In the same decade Hans Suess found evidence CO2 levels had been rising, Roger Revelle showed the oceans would not absorb the increase, and together they helped Charles Keeling to begin a record of continued increase, the Keeling Curve. Scientists alerted the public, and the dangers were highlighted at James Hansen's 1988 Congressional testimony. The Intergovernmental Panel on Climate Change, set up in 1988 to provide formal advice to the world's governments, spurred interdisciplinary research.
- 2020s in environmental history
- Anthropocene – proposed new geological time interval in which humans are having significant geological impact
- Global cooling – minority view held by scientists in the 1970s that imminent cooling of the Earth would take place
- USGCRP Chapter 3 2017 Figure 3.1 panel 2, Figure 3.3 panel 5.
- IPCC AR5 WG1 Summary for Policymakers 2013, p. 4: Warming of the climate system is unequivocal, and since the 1950s many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased; IPCC SR15 Ch1 2018, p. 54: Abundant empirical evidence of the unprecedented rate and global scale of impact of human influence on the Earth System (Steffen et al., 2016; Waters et al., 2016) has led many scientists to call for an acknowledgment that the Earth has entered a new geological epoch: the Anthropocene.
- Olivier & Peters 2019, pp. 14, 16–17, 23.
- EPA 2020: Carbon dioxide enters the atmosphere through burning fossil fuels (coal, natural gas, and oil), solid waste, trees and other biological materials, and also as a result of certain chemical reactions (e.g., manufacture of cement). Fossil fuel use is the primary source of CO2. CO2 can also be emitted from direct human-induced impacts on forestry and other land use, such as through deforestation, land clearing for agriculture, and degradation of soils. Methane is emitted during the production and transport of coal, natural gas, and oil. Methane emissions also result from livestock and other agricultural practices and by the decay of organic waste in municipal solid waste landfills.
- "Scientific Consensus: Earth's Climate is Warming". Climate Change: Vital Signs of the Planet. NASA JPL. Archived from the original on 28 March 2020. Retrieved 29 March 2020.; Gleick, 7 January 2017.
- IPCC SRCCL 2019, p. 7: Since the pre-industrial period, the land surface air temperature has risen nearly twice as much as the global average temperature (high confidence). Climate change... contributed to desertification and land degradation in many regions (high confidence).; IPCC SRCCL 2019, p. 45: Climate change is playing an increasing role in determining wildfire regimes alongside human activity (medium confidence), with future climate variability expected to enhance the risk and severity of wildfires in many biomes such as tropical rainforests (high confidence).
- IPCC SROCC 2019, p. 16: Over the last decades, global warming has led to widespread shrinking of the cryosphere, with mass loss from ice sheets and glaciers (very high confidence), reductions in snow cover (high confidence) and Arctic sea ice extent and thickness (very high confidence), and increased permafrost temperature (very high confidence).
- USGCRP Chapter 9 2017, p. 260.
- EPA (19 January 2017).
"Climate Impacts on Ecosystems".
Archived from the original on 27 January 2018. Retrieved 5 February 2019.
Mountain and arctic ecosystems and species are particularly sensitive to climate change... As ocean temperatures warm and the acidity of the ocean increases, bleaching and coral die-offs are likely to become more frequent.
- IPCC AR5 SYR 2014, pp. 13–16; WHO, Nov 2015: "Climate change is the greatest threat to global health in the 21st century. Health professionals have a duty of care to current and future generations. You are on the front line in protecting people from climate impacts - from more heat-waves and other extreme weather events; from outbreaks of infectious diseases such as malaria, dengue and cholera; from the effects of malnutrition; as well as treating people that are affected by cancer, respiratory, cardiovascular and other non-communicable diseases caused by environmental pollution."
- IPCC SR15 Ch1 2018, p. 64: Sustained net zero anthropogenic emissions of CO2 and declining net anthropogenic non-CO2 radiative forcing over a multi-decade period would halt anthropogenic global warming over that period, although it would not halt sea level rise or many other aspects of climate system adjustment.
- Trenberth & Fasullo 2016
- "The State of the Global Climate 2020". World Meteorological Organization. 14 January 2021. Retrieved 3 March 2021.
- IPCC SR15 Summary for Policymakers 2018, p. 7
- IPCC AR5 SYR 2014, p. 77, 3.2
- NASA, Mitigation and Adaptation 2020
- IPCC AR5 SYR 2014, p. 17, SPM 3.2
- Climate Action Tracker 2019, p. 1: Under current pledges, the world will warm by 2.8°C by the end of the century, close to twice the limit they agreed in Paris. Governments are even further from the Paris temperature limit in terms of their real-world action, which would see the temperature rise by 3°C.; United Nations Environment Programme 2019, p. 27.
- IPCC SR15 Ch2 2018, pp. 95–96: In model pathways with no or limited overshoot of 1.5°C, global net anthropogenic CO2 emissions decline by about 45% from 2010 levels by 2030 (40–60% interquartile range), reaching net zero around 2050 (2045–2055 interquartile range); IPCC SR15 2018, p. 17, SPM C.3: All pathways that limit global warming to 1.5°C with limited or no overshoot project the use of carbon dioxide removal (CDR) on the order of 100–1000 GtCO2 over the 21st century. CDR would be used to compensate for residual emissions and, in most cases, achieve net negative emissions to return global warming to 1.5°C following a peak (high confidence). CDR deployment of several hundreds of GtCO2 is subject to multiple feasibility and sustainability constraints (high confidence); Rogelj et al. 2015; Hilaire et al. 2019
- NASA, 5 December 2008.
- Weart "The Public and Climate Change: The Summer of 1988", "News reporters gave only a little attention ...".
- Joo et al. 2015.
- NOAA, 17 June 2015: "when scientists or public leaders talk about global warming these days, they almost always mean human-caused warming"; IPCC AR5 SYR Glossary 2014, p. 120: "Climate change refers to a change in the state of the climate that can be identified (e.g., by using statistical tests) by changes in the mean and/or the variability of its properties and that persists for an extended period, typically decades or longer. Climate change may be due to natural internal processes or external forcings such as modulations of the solar cycles, volcanic eruptions and persistent anthropogenic changes in the composition of the atmosphere or in land use."
- NASA, 7 July 2020; Shaftel 2016: " 'Climate change' and 'global warming' are often used interchangeably but have distinct meanings. ... Global warming refers to the upward temperature trend across the entire Earth since the early 20th century ... Climate change refers to a broad range of global phenomena ...[which] include the increased temperature trends described by global warming."; Associated Press, 22 September 2015: "The terms global warming and climate change can be used interchangeably. Climate change is more accurate scientifically to describe the various effects of greenhouse gases on the world because it includes extreme weather, storms and changes in rainfall patterns, ocean acidification and sea level.".
- Hodder & Martin 2009; BBC Science Focus Magazine, 3 February 2020.
- The Guardian, 17 May 2019; BBC Science Focus Magazine, 3 February 2020.
- USA Today, 21 November 2019.
- Neukom et al. 2019.
- "Global Annual Mean Surface Air Temperature Change". NASA. Retrieved 23 February 2020.
- EPA 2016: The U.S. Global Change Research Program, the National Academy of Sciences, and the Intergovernmental Panel on Climate Change (IPCC) have each independently concluded that warming of the climate system in recent decades is "unequivocal". This conclusion is not drawn from any one source of data but is based on multiple lines of evidence, including three worldwide temperature datasets showing nearly identical warming trends as well as numerous other independent indicators of global warming (e.g. rising sea levels, shrinking Arctic sea ice).
- IPCC SR15 Summary for Policymakers 2018, p. 4; WMO 2019, p. 6.
- IPCC SR15 Ch1 2018, p. 81.
- IPCC AR5 WG1 Ch2 2013, p. 162.
- IPCC SR15 Ch1 2018, p. 57: This report adopts the 51-year reference period, 1850–1900 inclusive, assessed as an approximation of pre-industrial levels in AR5 ... Temperatures rose by 0.0 °C–0.2 °C from 1720–1800 to 1850–1900; Hawkins et al. 2017, p. 1844.
- IPCC AR5 WG1 Summary for Policymakers 2013, pp. 4–5: "Global-scale observations from the instrumental era began in the mid-19th century for temperature and other variables ... the period 1880 to 2012 ... multiple independently produced datasets exist."
- IPCC AR5 WG1 Ch5 2013, p. 386; Neukom et al. 2019.
- IPCC AR5 WG1 Ch5 2013, pp. 389, 399–400: "The PETM [around 55.5–55.3 million years ago] was marked by ... global warming of 4 °C to 7 °C ... Deglacial global warming occurred in two main steps from 17.5 to 14.5 ka [thousand years ago] and 13.0 to 10.0 ka."
- IPCC SR15 Ch1 2018, p. 54.
- Kennedy et al. 2010, p. S26. Figure 2.5.
- Kennedy et al. 2010, pp. S26, S59–S60; USGCRP Chapter 1 2017, p. 35.
- IPCC AR4 WG2 Ch1 2007, Sec. 220.127.116.11, p. 99.
- NASA JPL. Retrieved 11 September 2020. Satellite measurements show warming in the troposphere but cooling in the stratosphere. This vertical pattern is consistent with global warming due to increasing greenhouse gases but inconsistent with warming from natural causes.
- IPCC SRCCL Summary for Policymakers 2019, p. 7.
- Sutton, Dong & Gregory 2007.
- "Climate Change: Ocean Heat Content". NOAA. 2018. Archived from the original on 12 February 2019. Retrieved 20 February 2019.
- IPCC AR5 WG1 Ch3 2013, p. 257: " Ocean warming dominates the global energy change inventory. Warming of the ocean accounts for about 93% of the increase in the Earth's energy inventory between 1971 and 2010 (high confidence), with warming of the upper (0 to 700 m) ocean accounting for about 64% of the total.
- NOAA, 10 July 2011.
- United States Environmental Protection Agency 2016, p. 5: "Black carbon that is deposited on snow and ice darkens those surfaces and decreases their reflectivity (albedo). This is known as the snow/ice albedo effect. This effect results in the increased absorption of radiation that accelerates melting."
- IPCC AR5 WG1 Ch12 2013, p. 1062; IPCC SROCC Ch3 2019, p. 212.
- NASA, 12 September 2018.
- Delworth & Zeng 2012, p. 5; Franzke et al. 2020.
- National Research Council 2012, p. 9.
- IPCC AR5 WG1 Ch10 2013, p. 916.
- Knutson 2017, p. 443; IPCC AR5 WG1 Ch10 2013, pp. 875–876.
- USGCRP 2009, p. 20.
- IPCC AR5 WG1 Summary for Policymakers 2013, pp. 13–14.
- NASA. "The Causes of Climate Change". Climate Change: Vital Signs of the Planet. Archived from the original on 8 May 2019. Retrieved 8 May 2019.
- IPCC AR4 WG1 Ch1 2007, FAQ1.1: "To emit 240 W m−2, a surface would have to have a temperature of around −19 °C (−2 °F). This is much colder than the conditions that actually exist at the Earth's surface (the global mean surface temperature is about 14 °C).
- ACS. "What Is the Greenhouse Effect?". Archived from the original on 26 May 2019. Retrieved 26 May 2019.
- Ozone acts as a greenhouse gas in the lowest layer of the atmosphere, the troposphere (as opposed to the stratospheric ozone layer). Wang, Shugart & Lerdau 2017
- Schmidt et al. 2010; USGCRP Climate Science Supplement 2014, p. 742.
- The Guardian, 19 February 2020.
- WMO 2020, p. 5.
- Siegenthaler et al. 2005; Lüthi et al. 2008.
- BBC, 10 May 2013.
- Our World in Data, 18 September 2020.
- Olivier & Peters 2019, p. 17; Our World in Data, 18 September 2020; EPA 2020: Greenhouse gas emissions from industry primarily come from burning fossil fuels for energy, as well as greenhouse gas emissions from certain chemical reactions necessary to produce goods from raw materials; "Redox, extraction of iron and transition metals": Hot air (oxygen) reacts with the coke (carbon) to produce carbon dioxide and heat energy to heat up the furnace. Removing impurities: The calcium carbonate in the limestone thermally decomposes to form calcium oxide. calcium carbonate → calcium oxide + carbon dioxide; Kvande 2014: Carbon dioxide gas is formed at the anode, as the carbon anode is consumed upon reaction of carbon with the oxygen ions from the alumina (Al2O3). Formation of carbon dioxide is unavoidable as long as carbon anodes are used, and it is of great concern because CO2 is a greenhouse gas
- EPA 2020; Global Methane Initiative 2020: Estimated Global Anthropogenic Methane Emissions by Source, 2020: Enteric fermentation (27%), Manure Management (3%), Coal Mining (9%), Municipal Solid Waste (11%), Oil & Gas (24%), Wastewater (7%), Rice Cultivation (7%).
- Michigan State University 2014: Nitrous oxide is produced by microbes in almost all soils. In agriculture, N2O is emitted mainly from fertilized soils and animal wastes – wherever nitrogen (N) is readily available.; EPA 2019: Agricultural activities, such as fertilizer use, are the primary source of N2O emissions; Davidson 2009: 2.0% of manure nitrogen and 2.5% of fertilizer nitrogen was converted to nitrous oxide between 1860 and 2005; these percentage contributions explain the entire pattern of increasing nitrous oxide concentrations over this period.
- EPA 2019.
- IPCC SRCCL Summary for Policymakers 2019, p. 10.
- IPCC SROCC Ch5 2019, p. 450.
- Haywood 2016, p. 456; McNeill 2017; Samset et al. 2018.
- IPCC AR5 WG1 Ch2 2013, p. 183.
- He et al. 2018; Storelvmo et al. 2016.
- Ramanathan & Carmichael 2008.
- Wild et al. 2005; Storelvmo et al. 2016; Samset et al. 2018.
- Twomey 1977.
- Albrecht 1989.
- USGCRP Chapter 2 2017, p. 78.
- Ramanathan & Carmichael 2008; RIVM 2016.
- Sand et al. 2015.
- World Resources Institute, 31 March 2021
- Ritchie & Roser 2018
- The Sustainability Consortium, 13 September 2018; UN FAO 2016, p. 18.
- Curtis et al. 2018.
- World Resources Institute, 8 December 2019.
- IPCC SRCCL Ch2 2019, p. 172: "The global biophysical cooling alone has been estimated by a larger range of climate models and is −0.10 ± 0.14°C; it ranges from −0.57°C to +0.06°C ... This cooling is essentially dominated by increases in surface albedo: historical land cover changes have generally led to a dominant brightening of land".
- Schmidt, Shindell & Tsigaridis 2014; Fyfe et al. 2016.
- USGCRP Chapter 2 2017, p. 78.
- National Research Council 2008, p. 6.
- "Is the Sun causing global warming?". Climate Change: Vital Signs of the Planet. Archived from the original on 5 May 2019. Retrieved 10 May 2019.
- IPCC AR4 WG1 Ch9 2007, pp. 702–703; Randel et al. 2009.
- USGCRP Chapter 2 2017, p. 79
- Fischer & Aiuppa 2020.
- "Thermodynamics: Albedo". NSIDC. Archived from the original on 11 October 2017. Retrieved 10 October 2017.
- "The study of Earth as an integrated system". Vitals Signs of the Planet. Earth Science Communications Team at NASA's Jet Propulsion Laboratory / California Institute of Technology. 2013. Archived from the original on 26 February 2019..
- USGCRP Chapter 2 2017, pp. 89–91.
- USGCRP Chapter 2 2017, pp. 89–90.
- Wolff et al. 2015: "the nature and magnitude of these feedbacks are the principal cause of uncertainty in the response of Earth's climate (over multi-decadal and longer periods) to a particular emissions scenario or greenhouse gas concentration pathway."
- Williams, Ceppi & Katavouta 2020.
- USGCRP Chapter 2 2017, p. 90.
- NASA, 28 May 2013.
- Cohen et al. 2014.
- Turetsky et al. 2019.
- NASA, 16 June 2011: "So far, land plants and the ocean have taken up about 55 percent of the extra carbon people have put into the atmosphere while about 45 percent has stayed in the atmosphere. Eventually, the land and oceans will take up most of the extra carbon dioxide, but as much as 20 percent may remain in the atmosphere for many thousands of years."
- IPCC SRCCL Ch2 2019, pp. 133, 144.
- Melillo et al. 2017: Our first-order estimate of a warming-induced loss of 190 Pg of soil carbon over the 21st century is equivalent to the past two decades of carbon emissions from fossil fuel burning.
- USGCRP Chapter 2 2017, pp. 93–95.
- Dean et al. 2018.
- Wolff et al. 2015
- Carbon Brief, 15 January 2018, "Who does climate modelling around the world?".
- IPCC AR5 SYR Glossary 2014, p. 120.
- Carbon Brief, 15 January 2018, "What are the different types of climate models?".
- Carbon Brief, 15 January 2018, "What is a climate model?".
- Stott & Kettleborough 2002.
- IPCC AR4 WG1 Ch8 2007, FAQ 8.1.
- Stroeve et al. 2007; National Geographic, 13 August 2019.
- Liepert & Previdi 2009.
- Rahmstorf et al. 2007; Mitchum et al. 2018.
- USGCRP Chapter 15 2017.
- IPCC AR5 SYR Summary for Policymakers 2014, Sec. 2.1.
- IPCC AR5 WG1 Technical Summary 2013, pp. 79–80.
- IPCC AR5 WG1 Technical Summary 2013, p. 57.
- Carbon Brief, 15 January 2018, "What are the inputs and outputs for a climate model?".
- Riahi et al. 2017; Carbon Brief, 19 April 2018.
- IPCC AR5 WG3 Ch5 2014, pp. 379–380.
- Matthews et al. 2009.
- Carbon Brief, 19 April 2018; Meinshausen 2019, p. 462.
- Rogelj et al. 2019.
- IPCC SR15 Summary for Policymakers 2018, p. 12.
- NOAA 2017.
- Hansen et al. 2016; Smithsonian, 26 June 2016.
- USGCRP Chapter 15 2017, p. 415.
- Scientific American, 29 April 2014; Burke & Stott 2017.
- "Hurricanes and Climate Change". Center for Climate and Energy Solutions.
- "Tornado Ally may be Shifting East". USA Today.
- WCRP Global Sea Level Budget Group 2018.
- IPCC SROCC Ch4 2019, p. 324: GMSL (global mean sea level, red) will rise between 0.43 m (0.29–0.59 m, likely range) (RCP2.6) and 0.84 m (0.61–1.10 m, likely range) (RCP8.5) by 2100 (medium confidence) relative to 1986–2005.
- DeConto & Pollard 2016.
- Bamber et al. 2019.
- Zhang et al. 2008.
- IPCC SROCC Summary for Policymakers 2019, p. 18.
- Doney et al. 2009.
- Deutsch et al. 2011
- IPCC SROCC Ch5 2019, p. 510; "Climate Change and Harmful Algal Blooms". EPA. Retrieved 11 September 2020.
- IPCC SR15 Ch3 2018, p. 283.
- "Tipping points in Antarctic and Greenland ice sheets". NESSC. 12 November 2018. Retrieved 25 February 2019.
- Clark et al. 2008.
- Liu et al. 2017.
- National Research Council 2011, p. 14; IPCC AR5 WG1 Ch12 2013, pp. 88–89, FAQ 12.3.
- IPCC AR5 WG1 Ch12 2013, p. 1112.
- Crucifix 2016
- Smith et al. 2009; Levermann et al. 2013.
- IPCC SR15 Ch3 2018, p. 218.
- IPCC SRCCL Ch2 2019, p. 133.
- IPCC SRCCL Summary for Policymakers 2019, p. 7; Zeng & Yoon 2009.
- Turner et al. 2020, p. 1.
- Urban 2015.
- Poloczanska et al. 2013; Lenoir et al. 2020.
- Smale et al. 2019.
- IPCC SROCC Summary for Policymakers 2019, p. 13.
- IPCC SROCC Ch5 2019, p. 510
- IPCC SROCC Ch5 2019, p. 451.
"Coral Reef Risk Outlook".
National Oceanic and Atmospheric Administration. Retrieved 4 April 2020.
At present, local human activities, coupled with past thermal stress, threaten an estimated 75 percent of the world's reefs. By 2030, estimates predict more than 90% of the world's reefs will be threatened by local human activities, warming, and acidification, with nearly 60% facing high, very high, or critical threat levels.
- Carbon Brief, 7 January 2020.
- IPCC AR5 WG2 Ch28 2014, p. 1596: "Within 50 to 70 years, loss of hunting habitats may lead to elimination of polar bears from seasonally ice-covered areas, where two-thirds of their world population currently live."
- "What a changing climate means for Rocky Mountain National Park". National Park Service. Retrieved 9 April 2020.
- IPCC AR5 WG2 Ch18 2014, pp. 983, 1008.
- IPCC AR5 WG2 Ch19 2014, p. 1077.
- IPCC AR5 SYR Summary for Policymakers 2014, p. 8, SPM 2
- IPCC AR5 SYR Summary for Policymakers 2014, p. 13, SPM 2.3
- IPCC AR5 WG2 Ch11 2014, pp. 720–723.
- Costello et al. 2009; Watts et al. 2015; IPCC AR5 WG2 Ch11 2014, p. 713
- Watts et al. 2019, pp. 1836, 1848.
- Watts et al. 2019, pp. 1841, 1847.
- WHO 2014
- Springmann et al. 2016, p. 2; Haines & Ebi 2019
- Haines & Ebi 2019, Figure 3; IPCC AR5 SYR 2014, p. 15, SPM 2.3
- WHO, Nov 2015
- IPCC SRCCL Ch5 2019, p. 451.
- Zhao et al. 2017; IPCC SRCCL Ch5 2019, p. 439
- IPCC AR5 WG2 Ch7 2014, p. 488
- IPCC SRCCL Ch5 2019, p. 462
- IPCC SROCC Ch5 2019, p. 503.
- Holding et al. 2016; IPCC AR5 WG2 Ch3 2014, pp. 232–233.
- DeFries et al. 2019, p. 3; Krogstrup & Oman 2019, p. 10.
- Diffenbaugh & Burke 2019; The Guardian, 26 January 2015; Burke, Davis & Diffenbaugh 2018.
- IPCC AR5 WG2 Ch13 2014, pp. 796–797.
- Hallegatte et al. 2016, p. 12.
- IPCC AR5 WG2 Ch13 2014, p. 796.
- Mach et al. 2019.
- IPCC SROCC Ch4 2019, p. 328.
- UNHCR 2011, p. 3.
- Matthews 2018, p. 399.
- Balsari, Dresser & Leaning 2020
- Cattaneo et al. 2019; UN Environment, 25 October 2018.
- Flavell 2014, p. 38; Kaczan & Orgill-Meyer 2020
- Serdeczny et al. 2016.
- IPCC SRCCL Ch5 2019, pp. 439, 464.
- National Oceanic and Atmospheric Administration. "What is nuisance flooding?". Retrieved 8 April 2020.
- Kabir et al. 2016.
- Van Oldenborgh et al. 2019.
- IPCC AR5 SYR Glossary 2014, p. 125.
- IPCC SR15 Summary for Policymakers 2018, p. 12.
- IPCC SR15 Summary for Policymakers 2018, p. 15.
- IPCC SR15 2018, p. 17, C.3
- United Nations Environment Programme 2019, p. XX.
- IPCC SR15 Ch2 2018, p. 109.
- Teske, ed. 2019, p. xxiii.
- World Resources Institute, 8 August 2019.
- Bui et al. 2018, p. 1068; IPCC SR15 Summary for Policymakers 2018, p. 17.
- IPCC SR15 2018, p. 34; IPCC SR15 Summary for Policymakers 2018, p. 17
- IPCC SR15 Ch4 2018, pp. 347–352
- Friedlingstein et al. 2019.
- United Nations Environment Programme 2019, p. 46.; Vox, 20 September 2019.; "The Role of Firm Low-Carbon Electricity Resources in Deep Decarbonization of Power Generation".
- Teske et al. 2019, p. 163, Table 7.1.
- REN21 2020, p. 32, Fig.1.
- IEA 2020a, p. 12; Ritchie 2019
- The Guardian, 6 April 2020.
- Dunai, Marton; De Clercq, Geert (23 September 2019). "Nuclear energy too slow, too expensive to save climate: report". Reuters. The cost of generating solar power ranges from $36 to $44 per megawatt hour (MWh), the WNISR said, while onshore wind power comes in at $29–56 per MWh. Nuclear energy costs between $112 and $189. Over the past decade, (costs) for utility-scale solar have dropped by 88% and for wind by 69%. For nuclear, they have increased by 23%.
- United Nations Environment Programme 2019, p. XXIII, Table ES.3; Teske, ed. 2019, p. xxvii, Fig.5.
- IPCC SR15 Ch2 2018, p. 131, Figure 2.15; Teske 2019, pp. 409–410.
- IPCC SR15 Ch2 2018, pp. 142–144; United Nations Environment Programme 2019, Table ES.3 & p.49.
- IPCC AR5 WG3 Ch9 2014, p. 697; NREL 2017, pp. vi, 12
- Berrill et al. 2016.
- IPCC SR15 Ch4 2018, pp. 324–325.
"Hydropower". iea.org. International Energy Agency. Retrieved 12 October 2020.
Hydropower generation is estimated to have increased by over 2% in 2019 owing to continued recovery from drought in Latin America as well as strong capacity expansion and good water availability in China (...) capacity expansion has been losing speed. This downward trend is expected to continue, due mainly to less large-project development in China and Brazil, where concerns over social and environmental impacts have restricted projects.
- Watts et al. 2019, pp. 1854; WHO 2018, p. 27
- Watts et al. 2019, pp. 1837; WHO 2016
- WHO 2018, p. 27; Vandyck et al. 2018; IPCC SR15 2018, p. 97: "Limiting warming to 1.5°C can be achieved synergistically with poverty alleviation and improved energy security and can provide large public health benefits through improved air quality, preventing millions of premature deaths. However, specific mitigation measures, such as bioenergy, may result in trade-offs that require consideration."
- IPCC SR15 Ch2 2018, p. 97
- IPCC AR5 SYR Summary for Policymakers 2014, p. 29; IEA 2020b
- IPCC SR15 Ch2 2018, p. 155, Fig. 2.27
- IEA 2020b
- IPCC SR15 Ch2 2018, p. 142
- IPCC SR15 Ch2 2018, pp. 138–140
- IPCC SR15 Ch2 2018, pp. 141–142
- IPCC AR5 WG3 Ch9 2014, pp. 686–694.
- World Resources Institute, December 2019, p. 1.
- World Resources Institute, December 2019, p. 10.
- "Low and zero emissions in the steel and cement industries" (PDF). pp. 11, 19–22.
- World Resources Institute, 8 August 2019: IPCC SRCCL Ch2 2019, pp. 189–193.
- Ruseva et al. 2020.
- Krause et al. 2018, pp. 3026–3027.
- IPCC SR15 Ch4 2018, pp. 326–327; Bednar, Obersteiner & Wagner 2019; European Commission, 28 November 2018, p. 188.
- Bui et al. 2018, p. 1068.
- IPCC AR5 SYR 2014, p. 125; Bednar, Obersteiner & Wagner 2019.
- IPCC SR15 2018, p. 34
- IPCC SR15 Ch4 2018, pp. 396–397.
- IPCC AR5 SYR 2014, p. 17.
- IPCC AR4 WG2 Ch19 2007, p. 796.
- UNEP 2018, pp. xii-xiii.
- Stephens, Scott A; Bell, Robert G; Lawrence, Judy (2018). "Developing signals to trigger adaptation to sea-level rise". Environmental Research Letters. 13 (10): 104004. Bibcode: 2018ERL....13j4004S. doi: 10.1088/1748-9326/aadf96. ISSN 1748-9326.
- Matthews 2018, p. 402.
- IPCC SRCCL Ch5 2019, p. 439.
- Surminski, Swenja; Bouwer, Laurens M.; Linnerooth-Bayer, Joanne (2016). "How insurance can support climate resilience". Nature Climate Change. 6 (4): 333–334. Bibcode: 2016NatCC...6..333S. doi: 10.1038/nclimate2979. ISSN 1758-6798.
- IPCC SR15 Ch4 2018, pp. 336–337.
- Morecroft, Michael D.; Duffield, Simon; Harley, Mike; Pearce-Higgins, James W.; et al. (2019). "Measuring the success of climate change adaptation and mitigation in terrestrial ecosystems". Science. 366 (6471): eaaw9256. doi: 10.1126/science.aaw9256. ISSN 0036-8075. PMID 31831643. S2CID 209339286.
- Berry, Pam M.; Brown, Sally; Chen, Minpeng; Kontogianni, Areti; et al. (2015). "Cross-sectoral interactions of adaptation and mitigation measures". Climatic Change. 128 (3): 381–393. Bibcode: 2015ClCh..128..381B. doi: 10.1007/s10584-014-1214-0. ISSN 1573-1480. S2CID 153904466.
- Sharifi, Ayyoob (2020). "Trade-offs and conflicts between urban climate change mitigation and adaptation measures: A literature review". Journal of Cleaner Production. 276: 122813. doi: 10.1016/j.jclepro.2020.122813. ISSN 0959-6526.
- IPCC AR5 SYR 2014, p. 54.
- IPCC AR5 SYR Summary for Policymakers 2014, p. 17, Section 3.
- IPCC SR15 Ch5 2018, p. 447; United Nations (2017) Resolution adopted by the General Assembly on 6 July 2017, Work of the Statistical Commission pertaining to the 2030 Agenda for Sustainable Development ( A/RES/71/313)
- IPCC SR15 Ch5 2018, p. 477.
- Rauner et al. 2020.
- Mercure et al. 2018.
- Union of Concerned Scientists, 8 January 2017; Hagmann, Ho & Loewenstein 2019.
- World Bank, June 2019, p. 12, Box 1.
- Watts et al. 2019, p. 1866
- UN Human Development Report 2020, p. 10
- International Institute for Sustainable Development 2019, p. iv.
- ICCT 2019, p. iv; Natural Resources Defense Council, 29 September 2017.
- National Conference of State Legislators, 17 April 2020; European Parliament, February 2020.
- Carbon Brief, 4 Jan 2017.
- Pacific Environment, 3 October 2018; Ristroph 2019.
- UNCTAD 2009.
- Friedlingstein et al. 2019, Table 7.
- UNFCCC, "What is the United Nations Framework Convention on Climate Change?"
- UNFCCC 1992, Article 2.
- IPCC AR4 WG3 Ch1 2007, p. 97.
- UNFCCC, "What are United Nations Climate Change Conferences?".
- Kyoto Protocol 1997; Liverman 2009, p. 290.
- Dessai 2001, p. 4; Grubb 2003.
- Liverman 2009, p. 290.
- Müller 2010; The New York Times, 25 May 2015; UNFCCC: Copenhagen 2009; EUobserver, 20 December 2009.
- UNFCCC: Copenhagen 2009.
- Conference of the Parties to the Framework Convention on Climate Change. Copenhagen. 7–18 December 2009. UN document FCCC/CP/2009/L.7. Archived from the original on 18 October 2010. Retrieved 24 October 2010.
- Cui, Lianbiao; Sun, Yi; Song, Malin; Zhu, Lei (2020). "Co-financing in the green climate fund: lessons from the global environment facility". Climate Policy. 20 (1): 95–108. doi: 10.1080/14693062.2019.1690968. ISSN 1469-3062. S2CID 213694904.
- Paris Agreement 2015.
- Climate Focus 2015, p. 3; Carbon Brief, 8 October 2018.
- Climate Focus 2015, p. 5.
- "Status of Treaties, United Nations Framework Convention on Climate Change". United Nations Treaty Collection. Retrieved 20 November 2019.; Salon, 25 September 2019.
- Goyal et al. 2019.
- Yeo, Sophie (10 October 2016). "Explainer: Why a UN climate deal on HFCs matters". Carbon Brief. Retrieved 10 January 2021.
- BBC, 1 May 2019; Vice, 2 May 2019.
- The Verge, 27 December 2019.
- The Guardian, 28 November 2019
- Politico, 11 December 2019.
- The Guardian, 28 October 2020
- UN NDC Synthesis Report 2021, pp. 4–5; UNFCCC Press Office (26 February 2021). "Greater Climate Ambition Urged as Initial NDC Synthesis Report Is Published". Retrieved 21 April 2021.
- Cook et al. 2016
- Cook et al. 2016; NASA, Scientific Consensus 2020
- Powell, James (20 November 2019). "Scientists Reach 100% Consensus on Anthropogenic Global Warming". Bulletin of Science, Technology & Society. 37 (4): 183–184. doi: 10.1177/0270467619886266. S2CID 213454806. Retrieved 15 November 2020.
- NRC 2008, p. 2; Oreskes 2007, p. 68; Gleick, 7 January 2017
- Joint statement of the G8+5 Academies (2009); Gleick, 7 January 2017.
- Royal Society 2005.
- IPCC AR5 WG1 Summary for Policymakers 2013, p. 17, D.3.
- IPCC SR15 Ch1 2018, p. 53.
- Ripple et al. 2017; Ripple et al. 2019; Fletcher 2019, p. 9
- Weart "The Public and Climate Change (since 1980)".
- Newell 2006, p. 80; Yale Climate Connections, 2 November 2010.
- Pew Research Center 2015.
- Pew Research Center, 18 April 2019.
- Stover 2014.
- Dunlap & McCright 2011, pp. 144, 155; Björnberg et al. 2017.
- Oreskes & Conway 2010; Björnberg et al. 2017.
- O’Neill & Boykoff 2010; Björnberg et al. 2017.
- Björnberg et al. 2017.
- Dunlap & McCright 2015, p. 308.
- Dunlap & McCright 2011, p. 146.
- Harvey et al. 2018.
- The New York Times, 29 April 2017.
- Gunningham 2018.
- The Guardian, 19 March 2019; Boulianne, Lalancette & Ilkiw 2020.
- Deutsche Welle, 22 June 2019.
- Connolly, Kate (29 April 2021). "'Historic' German ruling says climate goals not tough enough". The Guardian. Retrieved 1 May 2021.
- Setzer & Byrnes 2019.
- Archer & Pierrehumbert 2013, pp. 10–14.
- Tyndall 1861.
Archer & Pierrehumbert 2013, pp.
Tyndall. In 1856
Eunice Newton Foote experimented using glass cylinders filled with different gases heated by sunlight, but her apparatus could not distinguish the infrared greenhouse effect. She found moist air warmed more than dry air, and CO
2 warmed most, so she concluded higher levels of this in the past would have increased temperatures: Huddleston 2019.
- Lapenis 1998.
- Weart "The Carbon Dioxide Greenhouse Effect"; Fleming 2008, Arrhenius.
- Callendar 1938; Fleming 2007.
- Weart "Suspicions of a Human-Caused Greenhouse (1956–1969)".
- Weart 2013, p. 3567.
AR4 Working Group I Report
- IPCC (2007). Solomon, S.; Qin, D.; Manning, M.; Chen, Z.; et al. (eds.). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.
- Le Treut, H.; Somerville, R.; Cubasch, U.; Ding, Y.; et al. (2007). "Chapter 1: Historical Overview of Climate Change Science" (PDF). IPCC AR4 WG1 2007. pp. 93–127.
- Randall, D. A.; Wood, R. A.; Bony, S.; Colman, R.; et al. (2007). "Chapter 8: Climate Models and their Evaluation" (PDF). IPCC AR4 WG1 2007. pp. 589–662.
- Hegerl, G. C.; Zwiers, F. W.; Braconnot, P.; Gillett, N. P.; et al. (2007). "Chapter 9: Understanding and Attributing Climate Change" (PDF). IPCC AR4 WG1 2007. pp. 663–745.
AR4 Working Group II Report
- IPCC (2007). Parry, M. L.; Canziani, O. F.; Palutikof, J. P.; van der Linden, P. J.; et al. (eds.). Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.
- Rosenzweig, C.; Casassa, G.; Karoly, D. J.; Imeson, A.; et al. (2007). "Chapter 1: Assessment of observed changes and responses in natural and managed systems" (PDF). IPCC AR4 WG2 2007. pp. 79–131.
- Schneider, S. H.; Semenov, S.; Patwardhan, A.; Burton, I.; et al. (2007). "Chapter 19: Assessing key vulnerabilities and the risk from climate change" (PDF). IPCC AR4 WG2 2007. pp. 779–810.
AR4 Working Group III Report
- IPCC (2007). Metz, B.; Davidson, O. R.; Bosch, P. R.; Dave, R.; et al. (eds.). Climate Change 2007: Mitigation of Climate Change. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. ISBN 978-0-521-88011-4.
AR5 Working Group I Report
- IPCC (2013). Stocker, T. F.; Qin, D.; Plattner, G.-K.; Tignor, M.; et al. (eds.). Climate Change 2013: The Physical Science Basis (PDF). Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK & New York: Cambridge University Press.
- IPCC (2013). "Summary for Policymakers" (PDF). IPCC AR5 WG1 2013.
- Stocker, T. F.; Qin, D.; Plattner, G.-K.; Alexander, L. V.; et al. (2013). "Technical Summary" (PDF). IPCC AR5 WG1 2013. pp. 33–115.
- Hartmann, D. L.; Klein Tank, A. M. G.; Rusticucci, M.; Alexander, L. V.; et al. (2013). "Chapter 2: Observations: Atmosphere and Surface" (PDF). IPCC AR5 WG1 2013. pp. 159–254.
- Rhein, M.; Rintoul, S. R.; Aoki, S.; Campos, E.; et al. (2013). "Chapter 3: Observations: Ocean" (PDF). IPCC AR5 WG1 2013. pp. 255–315.
- Masson-Delmotte, V.; Schulz, M.; Abe-Ouchi, A.; Beer, J.; et al. (2013). "Chapter 5: Information from Paleoclimate Archives" (PDF). IPCC AR5 WG1 2013. pp. 383–464.
- Bindoff, N. L.; Stott, P. A.; AchutaRao, K. M.; Allen, M. R.; et al. (2013). "Chapter 10: Detection and Attribution of Climate Change: from Global to Regional" (PDF). IPCC AR5 WG1 2013. pp. 867–952.
- Collins, M.; Knutti, R.; Arblaster, J. M.; Dufresne, J.-L.; et al. (2013). "Chapter 12: Long-term Climate Change: Projections, Commitments and Irreversibility" (PDF). IPCC AR5 WG1 2013. pp. 1029–1136.
AR5 Working Group II Report
- IPCC (2014). Field, C. B.; Barros, V. R.; Dokken, D. J.; Mach, K. J.; et al. (eds.). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part A: Global and Sectoral Aspects. Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. ISBN 978-1-107-05807-1. Chapters 1–20, SPM, and Technical Summary.
- Jiménez Cisneros, B. E.; Oki, T.; Arnell, N. W.; Benito, G.; et al. (2014). "Chapter 3: Freshwater Resources" (PDF). IPCC AR5 WG2 A 2014. pp. 229–269.
- Porter, J. R.; Xie, L.; Challinor, A. J.; Cochrane, K.; et al. (2014). "Chapter 7: Food Security and Food Production Systems" (PDF). IPCC AR5 WG2 A 2014. pp. 485–533.
- Smith, K. R.; Woodward, A.; Campbell-Lendrum, D.; Chadee, D. D.; et al. (2014). "Chapter 11: Human Health: Impacts, Adaptation, and Co-Benefits" (PDF). In IPCC AR5 WG2 A 2014. pp. 709–754.
- Olsson, L.; Opondo, M.; Tschakert, P.; Agrawal, A.; et al. (2014). "Chapter 13: Livelihoods and Poverty" (PDF). IPCC AR5 WG2 A 2014. pp. 793–832.
- Cramer, W.; Yohe, G. W.; Auffhammer, M.; Huggel, C.; et al. (2014). "Chapter 18: Detection and Attribution of Observed Impacts" (PDF). IPCC AR5 WG2 A 2014. pp. 979–1037.
- Oppenheimer, M.; Campos, M.; Warren, R.; Birkmann, J.; et al. (2014). "Chapter 19: Emergent Risks and Key Vulnerabilities" (PDF). IPCC AR5 WG2 A 2014. pp. 1039–1099.
- IPCC (2014). Barros, V. R.; Field, C. B.; Dokken, D. J.; Mach, K. J.; et al. (eds.). Climate Change 2014: Impacts, Adaptation, and Vulnerability. Part B: Regional Aspects (PDF). Contribution of Working Group II to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK & New York: Cambridge University Press. ISBN 978-1-107-05816-3.. Chapters 21–30, Annexes, and Index.
AR5 Working Group III Report
- IPCC (2014). Edenhofer, O.; Pichs-Madruga, R.; Sokona, Y.; Farahani, E.; et al. (eds.). Climate Change 2014: Mitigation of Climate Change. Contribution of Working Group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, UK & New York, NY: Cambridge University Press. ISBN 978-1-107-05821-7.
AR5 Synthesis Report
- IPCC AR5 SYR (2014). The Core Writing Team; Pachauri, R. K.; Meyer, L. A. (eds.). Climate Change 2014: Synthesis Report. Contribution of Working Groups I, II and III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change. Geneva, Switzerland: IPCC.
Special Report: Global Warming of 1.5 °C
- IPCC (2018). Masson-Delmotte, V.; Zhai, P.; Pörtner, H.-O.; Roberts, D.; et al. (eds.). Global Warming of 1.5°C. An IPCC Special Report on the impacts of global warming of 1.5°C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty (PDF). Intergovernmental Panel on Climate Change.
- IPCC (2018). "Summary for Policymakers" (PDF). IPCC SR15 2018. pp. 3–24.
- Allen, M. R.; Dube, O. P.; Solecki, W.; Aragón-Durand, F.; et al. (2018). "Chapter 1: Framing and Context" (PDF). IPCC SR15 2018. pp. 49–91.
- Rogelj, J.; Shindell, D.; Jiang, K.; Fifta, S.; et al. (2018). "Chapter 2: Mitigation Pathways Compatible with 1.5°C in the Context of Sustainable Development" (PDF). IPCC SR15 2018. pp. 93–174.
- Hoegh-Guldberg, O.; Jacob, D.; Taylor, M.; Bindi, M.; et al. (2018). "Chapter 3: Impacts of 1.5ºC Global Warming on Natural and Human Systems" (PDF). IPCC SR15 2018. pp. 175–311.
- de Coninck, H.; Revi, A.; Babiker, M.; Bertoldi, P.; et al. (2018). "Chapter 4: Strengthening and Implementing the Global Response" (PDF). IPCC SR15 2018. pp. 313–443.
- Roy, J.; Tschakert, P.; Waisman, H.; Abdul Halim, S.; et al. (2018). "Chapter 5: Sustainable Development, Poverty Eradication and Reducing Inequalities" (PDF). IPCC SR15 2018. pp. 445–538.
Special Report: Climate change and Land
- IPCC (2019). Shukla, P. R.; Skea, J.; Calvo Buendia, E.; Masson-Delmotte, V.; et al. (eds.). IPCC Special Report on Climate Change, Desertification, Land Degradation, Sustainable Land Management, Food Security, and Greenhouse gas fluxes in Terrestrial Ecosystems (PDF). In press.
- IPCC (2019). "Summary for Policymakers" (PDF). IPCC SRCCL 2019. pp. 3–34.
- Jia, G.; Shevliakova, E.; Artaxo, P. E.; De Noblet-Ducoudré, N.; et al. (2019). "Chapter 2: Land-Climate Interactions" (PDF). IPCC SRCCL 2019. pp. 131–247.
- Mbow, C.; Rosenzweig, C.; Barioni, L. G.; Benton, T.; et al. (2019). "Chapter 5: Food Security" (PDF). IPCC SRCCL 2019. pp. 437–550.
Special Report: The Ocean and Cryosphere in a Changing Climate
- IPCC (2019). Pörtner, H.-O.; Roberts, D. C.; Masson-Delmotte, V.; Zhai, P.; et al. (eds.). IPCC Special Report on the Ocean and Cryosphere in a Changing Climate (PDF). In press.
- IPCC (2019). "Summary for Policymakers" (PDF). IPCC SROCC 2019. pp. 3–35.
- Meredith, M.; Sommerkorn, M.; Cassotta, S.; Derksen, C.; et al. (2019). "Chapter 3: Polar Regions" (PDF). IPCC SROCC 2019. pp. 203–320.
- Oppenheimer, M.; Glavovic, B.; Hinkel, J.; van de Wal, R.; et al. (2019). "Chapter 4: Sea Level Rise and Implications for Low Lying Islands, Coasts and Communities" (PDF). IPCC SROCC 2019. pp. 321–445.
- Bindoff, N. L.; Cheung, W. W. L.; Kairo, J. G.; Arístegui, J.; et al. (2019). "Chapter 5: Changing Ocean, Marine Ecosystems, and Dependent Communities" (PDF). IPCC SROCC 2019. pp. 447–587.
Other peer-reviewed sources
- Albrecht, Bruce A. (1989). "Aerosols, Cloud Microphysics, and Fractional Cloudiness". Science. 245 (4923): 1227–1239. Bibcode: 1989Sci...245.1227A. doi: 10.1126/science.245.4923.1227. PMID 17747885. S2CID 46152332.
- Balsari, S.; Dresser, C.; Leaning, J. (2020). "Climate Change, Migration, and Civil Strife". Curr Environ Health Rep. 7 (4): 404–414. doi: 10.1007/s40572-020-00291-4. PMC 7550406. PMID 33048318.
- Bamber, Jonathan L.; Oppenheimer, Michael; Kopp, Robert E.; Aspinall, Willy P.; Cooke, Roger M. (2019). "Ice sheet contributions to future sea-level rise from structured expert judgment". Proceedings of the National Academy of Sciences. 116 (23): 11195–11200. Bibcode: 2019PNAS..11611195B. doi: 10.1073/pnas.1817205116. ISSN 0027-8424. PMC 6561295. PMID 31110015.
- Bednar, Johannes; Obersteiner, Michael; Wagner, Fabian (2019). "On the financial viability of negative emissions". Nature Communications. 10 (1): 1783. Bibcode: 2019NatCo..10.1783B. doi: 10.1038/s41467-019-09782-x. ISSN 2041-1723. PMC 6467865. PMID 30992434.
- Berrill, P.; Arvesen, A.; Scholz, Y.; Gils, H. C.; et al. (2016). "Environmental impacts of high penetration renewable energy scenarios for Europe". Environmental Research Letters. 11 (1): 014012. Bibcode: 2016ERL....11a4012B. doi: 10.1088/1748-9326/11/1/014012.
- Björnberg, Karin Edvardsson; Karlsson, Mikael; Gilek, Michael; Hansson, Sven Ove (2017). "Climate and environmental science denial: A review of the scientific literature published in 1990–2015". Journal of Cleaner Production. 167: 229–241. doi: 10.1016/j.jclepro.2017.08.066. ISSN 0959-6526.
- Boulianne, Shelley; Lalancette, Mireille; Ilkiw, David (2020). ""School Strike 4 Climate": Social Media and the International Youth Protest on Climate Change". Media and Communication. 8 (2): 208–218. doi: 10.17645/mac.v8i2.2768. ISSN 2183-2439.
- Bui, M.; Adjiman, C.; Bardow, A.; Anthony, Edward J.; et al. (2018). "Carbon capture and storage (CCS): the way forward". Energy & Environmental Science. 11 (5): 1062–1176. doi: 10.1039/c7ee02342a.
- Burke, Claire; Stott, Peter (2017). "Impact of Anthropogenic Climate Change on the East Asian Summer Monsoon". Journal of Climate. 30 (14): 5205–5220. arXiv: 1704.00563. Bibcode: 2017JCli...30.5205B. doi: 10.1175/JCLI-D-16-0892.1. ISSN 0894-8755. S2CID 59509210.
- Burke, Marshall; Davis, W. Matthew; Diffenbaugh, Noah S (2018). "Large potential reduction in economic damages under UN mitigation targets". Nature. 557 (7706): 549–553. Bibcode: 2018Natur.557..549B. doi: 10.1038/s41586-018-0071-9. ISSN 1476-4687. PMID 29795251. S2CID 43936274.
- Callendar, G. S. (1938). "The artificial production of carbon dioxide and its influence on temperature". Quarterly Journal of the Royal Meteorological Society. 64 (275): 223–240. Bibcode: 1938QJRMS..64..223C. doi: 10.1002/qj.49706427503.
- Cattaneo, Cristina; Beine, Michel; Fröhlich, Christiane J.; Kniveton, Dominic; et al. (2019). "Human Migration in the Era of Climate Change". Review of Environmental Economics and Policy. 13 (2): 189–206. doi: 10.1093/reep/rez008. hdl: 10.1093/reep/rez008. ISSN 1750-6816. S2CID 198660593.
- Cohen, Judah; Screen, James; Furtado, Jason C.; Barlow, Mathew; et al. (2014). "Recent Arctic amplification and extreme mid-latitude weather" (PDF). Nature Geoscience. 7 (9): 627–637. Bibcode: 2014NatGe...7..627C. doi: 10.1038/ngeo2234. ISSN 1752-0908.
- Cook, John; Oreskes, Naomi; Doran, Peter T.; Anderegg, William R. L.; et al. (2016). "Consensus on consensus: a synthesis of consensus estimates on human-caused global warming". Environmental Research Letters. 11 (4): 048002. Bibcode: 2016ERL....11d8002C. doi: 10.1088/1748-9326/11/4/048002.
- Costello, Anthony; Abbas, Mustafa; Allen, Adriana; Ball, Sarah; et al. (2009). "Managing the health effects of climate change". The Lancet. 373 (9676): 1693–1733. doi: 10.1016/S0140-6736(09)60935-1. PMID 19447250. S2CID 205954939. Archived from the original on 13 August 2017.
- Curtis, P.; Slay, C.; Harris, N.; Tyukavina, A.; et al. (2018). "Classifying drivers of global forest loss". Science. 361 (6407): 1108–1111. Bibcode: 2018Sci...361.1108C. doi: 10.1126/science.aau3445. PMID 30213911. S2CID 52273353.
- Davidson, Eric (2009). "The contribution of manure and fertilizer nitrogen to atmospheric nitrous oxide since 1860". Nature Geoscience. 2: 659–662. doi: 10.1016/j.chemer.2016.04.002.
- DeConto, Robert M.; Pollard, David (2016). "Contribution of Antarctica to past and future sea-level rise". Nature. 531 (7596): 591–597. Bibcode: 2016Natur.531..591D. doi: 10.1038/nature17145. ISSN 1476-4687. PMID 27029274. S2CID 205247890.
- Dean, Joshua F.; Middelburg, Jack J.; Röckmann, Thomas; Aerts, Rien; et al. (2018). "Methane Feedbacks to the Global Climate System in a Warmer World". Reviews of Geophysics. 56 (1): 207–250. Bibcode: 2018RvGeo..56..207D. doi: 10.1002/2017RG000559. ISSN 1944-9208.
- Delworth, Thomas L.; Zeng, Fanrong (2012). "Multicentennial variability of the Atlantic meridional overturning circulation and its climatic influence in a 4000 year simulation of the GFDL CM2.1 climate model". Geophysical Research Letters. 39 (13): n/a. Bibcode: 2012GeoRL..3913702D. doi: 10.1029/2012GL052107. ISSN 1944-8007.
- Deutsch, Curtis; Brix, Holger; Ito, Taka; Frenzel, Hartmut; et al. (2011). "Climate-Forced Variability of Ocean Hypoxia" (PDF). Science. 333 (6040): 336–339. Bibcode: 2011Sci...333..336D. doi: 10.1126/science.1202422. PMID 21659566. S2CID 11752699. Archived (PDF) from the original on 9 May 2016.
- Diffenbaugh, Noah S.; Burke, Marshall (2019). "Global warming has increased global economic inequality". Proceedings of the National Academy of Sciences. 116 (20): 9808–9813. doi: 10.1073/pnas.1816020116. ISSN 0027-8424. PMC 6525504. PMID 31010922.
- Doney, Scott C.; Fabry, Victoria J.; Feely, Richard A.; Kleypas, Joan A. (2009). "Ocean Acidification: The Other CO2 Problem". Annual Review of Marine Science. 1 (1): 169–192. Bibcode: 2009ARMS....1..169D. doi: 10.1146/annurev.marine.010908.163834. PMID 21141034. S2CID 402398.
- Fahey, D. W.; Doherty, S. J.; Hibbard, K. A.; Romanou, A.; Taylor, P. C. (2017). "Chapter 2: Physical Drivers of Climate Change" (PDF). In USGCRP2017.
- Knutson, T.; Kossin, J. P.; Mears, C.; Perlwitz, J.; Wehner, M. F. (2017). "Chapter 3: Detection and Attribution of Climate Change" (PDF). In USGCRP2017.
- Fischer, Tobias P.; Aiuppa, Alessandro (2020).
"AGU Centennial Grand Challenge: Volcanoes and Deep Carbon Global CO
2 Emissions From Subaerial Volcanism – Recent Progress and Future Challenges". Geochemistry, Geophysics, Geosystems. 21 (3): e08690. Bibcode: 2020GGG....2108690F. doi: 10.1029/2019GC008690. ISSN 1525-2027.
- Franzke, Christian L. E.; Barbosa, Susana; Blender, Richard; Fredriksen, Hege-Beate; et al. (2020). "The Structure of Climate Variability Across Scales". Reviews of Geophysics. 58 (2): e2019RG000657. Bibcode: 2020RvGeo..5800657F. doi: 10.1029/2019RG000657. ISSN 1944-9208.
- Friedlingstein, Pierre; Jones, Matthew W.; O'Sullivan, Michael; Andrew, Robbie M.; et al. (2019). "Global Carbon Budget 2019". Earth System Science Data. 11 (4): 1783–1838. Bibcode: 2019ESSD...11.1783F. doi: 10.5194/essd-11-1783-2019. ISSN 1866-3508.
- Fyfe, John C.; Meehl, Gerald A.; England, Matthew H.; Mann, Michael E.; et al. (2016). "Making sense of the early-2000s warming slowdown" (PDF). Nature Climate Change. 6 (3): 224–228. Bibcode: 2016NatCC...6..224F. doi: 10.1038/nclimate2938. Archived (PDF) from the original on 7 February 2019.
- Goyal, Rishav; England, Matthew H; Sen Gupta, Alex; Jucker, Martin (2019). "Reduction in surface climate change achieved by the 1987 Montreal Protocol". Environmental Research Letters. 14 (12): 124041. Bibcode: 2019ERL....14l4041G. doi: 10.1088/1748-9326/ab4874. ISSN 1748-9326.
- Grubb, M. (2003). "The Economics of the Kyoto Protocol" (PDF). World Economics. 4 (3): 144–145. Archived from the original (PDF) on 4 September 2012.
- Gunningham, Neil (2018). "Mobilising civil society: can the climate movement achieve transformational social change?" (PDF). Interface: A Journal for and About Social Movements. 10. Archived (PDF) from the original on 12 April 2019. Retrieved 12 April 2019.
- Hagmann, David; Ho, Emily H.; Loewenstein, George (2019). "Nudging out support for a carbon tax". Nature Climate Change. 9 (6): 484–489. Bibcode: 2019NatCC...9..484H. doi: 10.1038/s41558-019-0474-0. S2CID 182663891.
- Haines, A.; Ebi, K. (2019). "The Imperative for Climate Action to Protect Health". New England Journal of Medicine. 380 (3): 263–273. doi: 10.1056/NEJMra1807873. PMID 30650330. S2CID 58662802.
- Hansen, James; Sato, Makiko; Hearty, Paul; Ruedy, Reto; et al. (2016). "Ice melt, sea level rise and superstorms: evidence from paleoclimate data, climate modeling, and modern observations that 2 °C global warming could be dangerous". Atmospheric Chemistry and Physics. 16 (6): 3761–3812. arXiv: 1602.01393. Bibcode: 2016ACP....16.3761H. doi: 10.5194/acp-16-3761-2016. ISSN 1680-7316. S2CID 9410444.
- Harvey, Jeffrey A.; Van den Berg, Daphne; Ellers, Jacintha; Kampen, Remko; et al. (2018). "Internet Blogs, Polar Bears, and Climate-Change Denial by Proxy". BioScience. 68 (4): 281–287. doi: 10.1093/biosci/bix133. ISSN 0006-3568. PMC 5894087. PMID 29662248.
- Hawkins, Ed; Ortega, Pablo; Suckling, Emma; Schurer, Andrew; et al. (2017). "Estimating Changes in Global Temperature since the Preindustrial Period". Bulletin of the American Meteorological Society. 98 (9): 1841–1856. Bibcode: 2017BAMS...98.1841H. doi: 10.1175/bams-d-16-0007.1. ISSN 0003-0007.
- He, Yanyi; Wang, Kaicun; Zhou, Chunlüe; Wild, Martin (2018). "A Revisit of Global Dimming and Brightening Based on the Sunshine Duration". Geophysical Research Letters. 45 (9): 4281–4289. Bibcode: 2018GeoRL..45.4281H. doi: 10.1029/2018GL077424. ISSN 1944-8007.
- Hilaire, Jérôme; Minx, Jan C.; Callaghan, Max W.; Edmonds, Jae; Luderer, Gunnar; Nemet, Gregory F.; Rogelj, Joeri; Zamora, Maria Mar (17 October 2019). "Negative emissions and international climate goals—learning from and about mitigation scenarios". Climatic Change. 157 (2): 189–219. Bibcode: 2019ClCh..157..189H. doi: 10.1007/s10584-019-02516-4. Retrieved 24 February 2021.
- Hodder, Patrick; Martin, Brian (2009). "Climate Crisis? The Politics of Emergency Framing". Economic and Political Weekly. 44 (36): 53–60. ISSN 0012-9976. JSTOR 25663518.
- Holding, S.; Allen, D. M.; Foster, S.; Hsieh, A.; et al. (2016). "Groundwater vulnerability on small islands". Nature Climate Change. 6 (12): 1100–1103. Bibcode: 2016NatCC...6.1100H. doi: 10.1038/nclimate3128. ISSN 1758-6798.
- Joo, Gea-Jae; Kim, Ji Yoon; Do, Yuno; Lineman, Maurice (2015). "Talking about Climate Change and Global Warming". PLOS ONE. 10 (9): e0138996. Bibcode: 2015PLoSO..1038996L. doi: 10.1371/journal.pone.0138996. ISSN 1932-6203. PMC 4587979. PMID 26418127.
- Kabir, Russell; Khan, Hafiz T. A.; Ball, Emma; Caldwell, Khan (2016). "Climate Change Impact: The Experience of the Coastal Areas of Bangladesh Affected by Cyclones Sidr and Aila". Journal of Environmental and Public Health. 2016: 9654753. doi: 10.1155/2016/9654753. PMC 5102735. PMID 27867400.
- Kaczan, David J.; Orgill-Meyer, Jennifer (2020). "The impact of climate change on migration: a synthesis of recent empirical insights". Climatic Change. 158 (3): 281–300. Bibcode: 2020ClCh..158..281K. doi: 10.1007/s10584-019-02560-0. S2CID 207988694. Retrieved 9 February 2021.
- Kennedy, J. J.; Thorne, W. P.; Peterson, T. C.; Ruedy, R. A.; et al. (2010). Arndt, D. S.; Baringer, M. O.; Johnson, M. R. (eds.). "How do we know the world has warmed?". Special supplement: State of the Climate in 2009. Bulletin of the American Meteorological Society. 91 (7). S26-S27. doi: 10.1175/BAMS-91-7-StateoftheClimate.
- Kopp, R. E.; Hayhoe, K.; Easterling, D. R.; Hall, T.; et al. (2017). "Chapter 15: Potential Surprises: Compound Extremes and Tipping Elements". In USGCRP 2017. Archived from the original on 20 August 2018.
- Kossin, J. P.; Hall, T.; Knutson, T.; Kunkel, K. E.; Trapp, R. J.; Waliser, D. E.; Wehner, M. F. (2017). "Chapter 9: Extreme Storms". In USGCRP2017.
- Knutson, T. (2017). "Appendix C: Detection and attribution methodologies overview.". In USGCRP2017.
- Krause, Andreas; Pugh, Thomas A. M.; Bayer, Anita D.; Li, Wei; et al. (2018). "Large uncertainty in carbon uptake potential of land-based climate-change mitigation efforts". Global Change Biology. 24 (7): 3025–3038. Bibcode: 2018GCBio..24.3025K. doi: 10.1111/gcb.14144. ISSN 1365-2486. PMID 29569788. S2CID 4919937.
- Kvande, H. (2014). "The Aluminum Smelting Process". Journal of Occupational and Environmental Medicine. 56 (5 Suppl): S2–S4. doi: 10.1097/JOM.0000000000000154. PMC 4131936. PMID 24806722.
- Lapenis, Andrei G. (1998). "Arrhenius and the Intergovernmental Panel on Climate Change". Eos. 79 (23): 271. Bibcode: 1998EOSTr..79..271L. doi: 10.1029/98EO00206.
- Levermann, Anders; Clark, Peter U.; Marzeion, Ben; Milne, Glenn A.; et al. (2013). "The multimillennial sea-level commitment of global warming". Proceedings of the National Academy of Sciences. 110 (34): 13745–13750. Bibcode: 2013PNAS..11013745L. doi: 10.1073/pnas.1219414110. ISSN 0027-8424. PMC 3752235. PMID 23858443.
- Lenoir, Jonathan; Bertrand, Romain; Comte, Lise; Bourgeaud, Luana; et al. (2020). "Species better track climate warming in the oceans than on land". Nature Ecology & Evolution. 4 (8): 1044–1059. doi: 10.1038/s41559-020-1198-2. ISSN 2397-334X. PMID 32451428. S2CID 218879068.
- Liepert, Beate G.; Previdi, Michael (2009). "Do Models and Observations Disagree on the Rainfall Response to Global Warming?". Journal of Climate. 22 (11): 3156–3166. Bibcode: 2009JCli...22.3156L. doi: 10.1175/2008JCLI2472.1.
- Liverman, Diana M. (2009). "Conventions of climate change: constructions of danger and the dispossession of the atmosphere". Journal of Historical Geography. 35 (2): 279–296. doi: 10.1016/j.jhg.2008.08.008.
- Liu, Wei; Xie, Shang-Ping; Liu, Zhengyu; Zhu, Jiang (2017). "Overlooked possibility of a collapsed Atlantic Meridional Overturning Circulation in warming climate". Science Advances. 3 (1): e1601666. Bibcode: 2017SciA....3E1666L. doi: 10.1126/sciadv.1601666. PMC 5217057. PMID 28070560.
- Lüthi, Dieter; Le Floch, Martine; Bereiter, Bernhard; Blunier, Thomas; et al. (2008). "High-resolution carbon dioxide concentration record 650,000–800,000 years before present" (PDF). Nature. 453 (7193): 379–382. Bibcode: 2008Natur.453..379L. doi: 10.1038/nature06949. PMID 18480821. S2CID 1382081.
- Mach, Katharine J.; Kraan, Caroline M.; Adger, W. Neil; Buhaug, Halvard; et al. (2019). "Climate as a risk factor for armed conflict". Nature. 571 (7764): 193–197. Bibcode: 2019Natur.571..193M. doi: 10.1038/s41586-019-1300-6. ISSN 1476-4687. PMID 31189956. S2CID 186207310.
- Matthews, H. Damon; Gillett, Nathan P.; Stott, Peter A.; Zickfeld, Kirsten (2009). "The proportionality of global warming to cumulative carbon emissions". Nature. 459 (7248): 829–832. Bibcode: 2009Natur.459..829M. doi: 10.1038/nature08047. ISSN 1476-4687. PMID 19516338. S2CID 4423773.
- Matthews, Tom (2018). "Humid heat and climate change". Progress in Physical Geography: Earth and Environment. 42 (3): 391–405. doi: 10.1177/0309133318776490. S2CID 134820599.
- McNeill, V. Faye (2017). "Atmospheric Aerosols: Clouds, Chemistry, and Climate". Annual Review of Chemical and Biomolecular Engineering. 8 (1): 427–444. doi: 10.1146/annurev-chembioeng-060816-101538. ISSN 1947-5438. PMID 28415861.
- Melillo, J. M.; Frey, S. D.; DeAngelis, K. M.; Werner, W. J.; et al. (2017). "Long-term pattern and magnitude of soil carbon feedback to the climate system in a warming world". Science. 358 (6359): 101–105. Bibcode: 2017Sci...358..101M. doi: 10.1126/science.aan2874. PMID 28983050.
- Mercure, J.-F.; Pollitt, H.; Viñuales, J. E.; Edwards, N. R.; et al. (2018). "Macroeconomic impact of stranded fossil fuel assets" (PDF). Nature Climate Change. 8 (7): 588–593. Bibcode: 2018NatCC...8..588M. doi: 10.1038/s41558-018-0182-1. ISSN 1758-6798. S2CID 89799744.
- Mitchum, G. T.; Masters, D.; Hamlington, B. D.; Fasullo, J. T.; et al. (2018). "Climate-change–driven accelerated sea-level rise detected in the altimeter era". Proceedings of the National Academy of Sciences. 115 (9): 2022–2025. Bibcode: 2018PNAS..115.2022N. doi: 10.1073/pnas.1717312115. ISSN 0027-8424. PMC 5834701. PMID 29440401.
- National Research Council (2011). Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia. Washington, D.C.: National Academies Press. doi: 10.17226/12877. ISBN 978-0-309-15176-4. Archived from the original on 20 July 2010. Retrieved 19 August 2013.
- National Research Council (2011). "Causes and Consequences of Climate Change". America's Climate Choices. Washington, D.C.: The National Academies Press. doi: 10.17226/12781. ISBN 978-0-309-14585-5. Archived from the original on 21 July 2015. Retrieved 28 January 2019.
- Neukom, Raphael; Steiger, Nathan; Gómez-Navarro, Juan José; Wang, Jianghao; et al. (2019). "No evidence for globally coherent warm and cold periods over the preindustrial Common Era" (PDF). Nature. 571 (7766): 550–554. Bibcode: 2019Natur.571..550N. doi: 10.1038/s41586-019-1401-2. ISSN 1476-4687. PMID 31341300. S2CID 198494930.
- Neukom, Raphael; Barboza, Luis A.; Erb, Michael P.; Shi, Feng; et al. (2019). "Consistent multidecadal variability in global temperature reconstructions and simulations over the Common Era". Nature Geoscience. 12 (8): 643–649. Bibcode: 2019NatGe..12..643P. doi: 10.1038/s41561-019-0400-0. ISSN 1752-0908. PMC 6675609. PMID 31372180.
- O’Neill, Saffron J.; Boykoff, Max (2010). "Climate denier, skeptic, or contrarian?". Proceedings of the National Academy of Sciences of the United States of America. 107 (39): E151. Bibcode: 2010PNAS..107E.151O. doi: 10.1073/pnas.1010507107. ISSN 0027-8424. PMC 2947866. PMID 20807754.
- Poloczanska, Elvira S.; Brown, Christopher J.; Sydeman, William J.; Kiessling, Wolfgang; et al. (2013). "Global imprint of climate change on marine life" (PDF). Nature Climate Change. 3 (10): 919–925. Bibcode: 2013NatCC...3..919P. doi: 10.1038/nclimate1958. ISSN 1758-6798.
- Rahmstorf, Stefan; Cazenave, Anny; Church, John A.; Hansen, James E.; et al. (2007). "Recent Climate Observations Compared to Projections" (PDF). Science. 316 (5825): 709. Bibcode: 2007Sci...316..709R. doi: 10.1126/science.1136843. PMID 17272686. S2CID 34008905. Archived (PDF) from the original on 6 September 2018.
- Ramanathan, V.; Carmichael, G. (2008). "Global and Regional Climate Changes due to Black Carbon". Nature Geoscience. 1 (4): 221–227. Bibcode: 2008NatGe...1..221R. doi: 10.1038/ngeo156.
- Randel, William J.; Shine, Keith P.; Austin, John; Barnett, John; et al. (2009). "An update of observed stratospheric temperature trends" (PDF). Journal of Geophysical Research. 114 (D2): D02107. Bibcode: 2009JGRD..11402107R. doi: 10.1029/2008JD010421.
- Rauner, Sebastian; Bauer, Nico; Dirnaichner, Alois; Van Dingenen, Rita; Mutel, Chris; Luderer, Gunnar (2020). "Coal-exit health and environmental damage reductions outweigh economic impacts". Nature Climate Change. 10 (4): 308–312. Bibcode: 2020NatCC..10..308R. doi: 10.1038/s41558-020-0728-x. ISSN 1758-6798. S2CID 214619069.
- Riahi, Keywan; van Vuuren, Detlef P.; Kriegler, Elmar; Edmonds, Jae; et al. (2017). "The Shared Socioeconomic Pathways and their energy, land use, and greenhouse gas emissions implications: An overview". Global Environmental Change. 42: 153–168. doi: 10.1016/j.gloenvcha.2016.05.009. ISSN 0959-3780.
- Ripple, William J.; Wolf, Christopher; Newsome, Thomas M.; Galetti, Mauro; et al. (2017). "World Scientists' Warning to Humanity: A Second Notice". BioScience. 67 (12): 1026–1028. doi: 10.1093/biosci/bix125.
- Ripple, William J.; Wolf, Christopher; Newsome, Thomas M.; Barnard, Phoebe; et al. (2019). "World Scientists' Warning of a Climate Emergency". BioScience. doi: 10.1093/biosci/biz088. hdl: 1808/30278.
- Ristroph, E. (2019). "Fulfilling Climate Justice And Government Obligations To Alaska Native Villages: What Is The Government Role?". William & Mary Environmental Law and Policy Review. 43 (2).
- Rogelj, Joeri; Forster, Piers M.; Kriegler, Elmar; Smith, Christopher J.; et al. (2019). "Estimating and tracking the remaining carbon budget for stringent climate targets". Nature. 571 (7765): 335–342. Bibcode: 2019Natur.571..335R. doi: 10.1038/s41586-019-1368-z. ISSN 1476-4687. PMID 31316194. S2CID 197542084.
- Rogelj, Joeri; Meinshausen, Malte; Schaeffer, Michiel; Knutti, Reto; Riahi, Keywan (2015).
"Impact of short-lived non-CO
2 mitigation on carbon budgets for stabilizing global warming". Environmental Research Letters. 10 (7): 1–10. Bibcode: 2015ERL....10g5001R. doi: 10.1088/1748-9326/10/7/075001.
- Ruseva, Tatyana; Hedrick, Jamie; Marland, Gregg; Tovar, Henning; et al. (2020). "Rethinking standards of permanence for terrestrial and coastal carbon: implications for governance and sustainability". Current Opinion in Environmental Sustainability. 45: 69–77. doi: 10.1016/j.cosust.2020.09.009. ISSN 1877-3435.
- Samset, B. H.; Sand, M.; Smith, C. J.; Bauer, S. E.; et al. (2018). "Climate Impacts From a Removal of Anthropogenic Aerosol Emissions" (PDF). Geophysical Research Letters. 45 (2): 1020–1029. Bibcode: 2018GeoRL..45.1020S. doi: 10.1002/2017GL076079. ISSN 1944-8007. PMC 7427631. PMID 32801404.
- Sand, M.; Berntsen, T. K.; von Salzen, K.; Flanner, M. G.; et al. (2015). "Response of Arctic temperature to changes in emissions of short-lived climate forcers". Nature. 6 (3): 286–289. doi: 10.1038/nclimate2880.
- Schmidt, Gavin A.; Ruedy, Reto A.; Miller, Ron L.; Lacis, Andy A. (2010). "Attribution of the present-day total greenhouse effect". Journal of Geophysical Research: Atmospheres. 115 (D20): D20106. Bibcode: 2010JGRD..11520106S. doi: 10.1029/2010JD014287. ISSN 2156-2202. S2CID 28195537.
- Schmidt, Gavin A.; Shindell, Drew T.; Tsigaridis, Kostas (2014). "Reconciling warming trends". Nature Geoscience. 7 (3): 158–160. Bibcode: 2014NatGe...7..158S. doi: 10.1038/ngeo2105. hdl: 2060/20150000726.
- Serdeczny, Olivia; Adams, Sophie; Baarsch, Florent; Coumou, Dim; et al. (2016). "Climate change impacts in Sub-Saharan Africa: from physical changes to their social repercussions" (PDF). Regional Environmental Change. 17 (6): 1585–1600. doi: 10.1007/s10113-015-0910-2. ISSN 1436-378X. S2CID 3900505.
- Siegenthaler, Urs; Stocker, Thomas F.; Monnin, Eric; Lüthi, Dieter; et al. (2005). "Stable Carbon Cycle–Climate Relationship During the Late Pleistocene" (PDF). Science. 310 (5752): 1313–1317. Bibcode: 2005Sci...310.1313S. doi: 10.1126/science.1120130. PMID 16311332.
- Sutton, Rowan T.; Dong, Buwen; Gregory, Jonathan M. (2007). "Land/sea warming ratio in response to climate change: IPCC AR4 model results and comparison with observations". Geophysical Research Letters. 34 (2): L02701. Bibcode: 2007GeoRL..3402701S. doi: 10.1029/2006GL028164.
- Smale, Dan A.; Wernberg, Thomas; Oliver, Eric C. J.; Thomsen, Mads; Harvey, Ben P. (2019). "Marine heatwaves threaten global biodiversity and the provision of ecosystem services" (PDF). Nature Climate Change. 9 (4): 306–312. Bibcode: 2019NatCC...9..306S. doi: 10.1038/s41558-019-0412-1. ISSN 1758-6798. S2CID 91471054.
- Smith, Joel B.; Schneider, Stephen H.; Oppenheimer, Michael; Yohe, Gary W.; et al. (2009). "Assessing dangerous climate change through an update of the Intergovernmental Panel on Climate Change (IPCC) 'reasons for concern'". Proceedings of the National Academy of Sciences. 106 (11): 4133–4137. Bibcode: 2009PNAS..106.4133S. doi: 10.1073/pnas.0812355106. PMC 2648893. PMID 19251662.
- Springmann, M.; Mason-D’Croz, D.; Robinson, S.; Garnett, T.; et al. (2016). "Global and regional health effects of future food production under climate change: a modelling study". Lancet. 387 (10031): 1937–1946. doi: 10.1016/S0140-6736(15)01156-3. PMID 26947322. S2CID 41851492.
- Stott, Peter A.; Kettleborough, J. A. (2002). "Origins and estimates of uncertainty in predictions of twenty-first century temperature rise". Nature. 416 (6882): 723–726. Bibcode: 2002Natur.416..723S. doi: 10.1038/416723a. ISSN 1476-4687. PMID 11961551. S2CID 4326593.
- Stroeve, J.; Holland, Marika M.; Meier, Walt; Scambos, Ted; et al. (2007). "Arctic sea ice decline: Faster than forecast". Geophysical Research Letters. 34 (9): L09501. Bibcode: 2007GeoRL..3409501S. doi: 10.1029/2007GL029703.
- Storelvmo, T.; Phillips, P. C. B.; Lohmann, U.; Leirvik, T.; Wild, M. (2016). "Disentangling greenhouse warming and aerosol cooling to reveal Earth's climate sensitivity" (PDF). Nature Geoscience. 9 (4): 286–289. Bibcode: 2016NatGe...9..286S. doi: 10.1038/ngeo2670. ISSN 1752-0908.
- Trenberth, Kevin E.; Fasullo, John T. (2016). "Insights into Earth's Energy Imbalance from Multiple Sources". Journal of Climate. 29 (20): 7495–7505. Bibcode: 2016JCli...29.7495T. doi: 10.1175/JCLI-D-16-0339.1. OSTI 1537015.
- Turetsky, Merritt R.; Abbott, Benjamin W.; Jones, Miriam C.; Anthony, Katey Walter; et al. (2019). "Permafrost collapse is accelerating carbon release". Nature. 569 (7754): 32–34. Bibcode: 2019Natur.569...32T. doi: 10.1038/d41586-019-01313-4. PMID 31040419.
- Turner, Monica G.; Calder, W. John; Cumming, Graeme S.; Hughes, Terry P.; et al. (2020). "Climate change, ecosystems and abrupt change: science priorities". Philosophical Transactions of the Royal Society B. 375 (1794). doi: 10.1098/rstb.2019.0105. PMC 7017767. PMID 31983326.
- Twomey, S. (1977). "The Influence of Pollution on the Shortwave Albedo of Clouds". J. Atmos. Sci. 34 (7): 1149–1152. Bibcode: 1977JAtS...34.1149T. doi: 10.1175/1520-0469(1977)034<1149:TIOPOT>2.0.CO;2. ISSN 1520-0469.
- Tyndall, John (1861). "On the Absorption and Radiation of Heat by Gases and Vapours, and on the Physical Connection of Radiation, Absorption, and Conduction". Philosophical Magazine. 4. 22: 169–194, 273–285. Archived from the original on 26 March 2016.
- Urban, Mark C. (2015). "Accelerating extinction risk from climate change". Science. 348 (6234): 571–573. Bibcode: 2015Sci...348..571U. doi: 10.1126/science.aaa4984. ISSN 0036-8075. PMID 25931559.
- USGCRP (2009). Karl, T. R.; Melillo, J.; Peterson, T.; Hassol, S. J. (eds.). Global Climate Change Impacts in the United States. Cambridge University Press. ISBN 978-0-521-14407-0. Archived from the original on 6 April 2010. Retrieved 17 April 2010.
- USGCRP (2017). Wuebbles, D. J.; Fahey, D. W.; Hibbard, K. A.; Dokken, D. J.; et al. (eds.). Climate Science Special Report: Fourth National Climate Assessment, Volume I. Washington, D.C.: U.S. Global Change Research Program. doi: 10.7930/J0J964J6.
- Vandyck, T.; Keramidas, K.; Kitous, A.; Spadaro, J.; et al. (2018). "Air quality co-benefits for human health and agriculture counterbalance costs to meet Paris Agreement pledges". Nature Communications. 9 (4939): 4939. Bibcode: 2018NatCo...9.4939V. doi: 10.1038/s41467-018-06885-9. PMC 6250710. PMID 30467311.
- Wuebbles, D. J.; Easterling, D. R.; Hayhoe, K.; Knutson, T.; et al. (2017). "Chapter 1: Our Globally Changing Climate" (PDF). In USGCRP2017.
- Walsh, John; Wuebbles, Donald; Hayhoe, Katherine; Kossin, Kossin; et al. (2014). "Appendix 3: Climate Science Supplement" (PDF). Climate Change Impacts in the United States: The Third National Climate Assessment. US National Climate Assessment.
- Wang, Bin; Shugart, Herman H.; Lerdau, Manuel T. (2017). "Sensitivity of global greenhouse gas budgets to tropospheric ozone pollution mediated by the biosphere". Environmental Research Letters. 12 (8): 084001. Bibcode: 2017ERL....12h4001W. doi: 10.1088/1748-9326/aa7885. ISSN 1748-9326.
- Watts, Nick; Adger, W Neil; Agnolucci, Paolo; Blackstock, Jason; et al. (2015). "Health and climate change: policy responses to protect public health". The Lancet. 386 (10006): 1861–1914. doi: 10.1016/S0140-6736(15)60854-6. hdl: 10871/20783. PMID 26111439. S2CID 205979317. Archived from the original on 7 April 2017.
- Watts, Nick; Amann, Markus; Arnell, Nigel; Ayeb-Karlsson, Sonja; et al. (2019). "The 2019 report of The Lancet Countdown on health and climate change: ensuring that the health of a child born today is not defined by a changing climate". The Lancet. 394 (10211): 1836–1878. doi: 10.1016/S0140-6736(19)32596-6. ISSN 0140-6736. PMID 31733928. S2CID 207976337.
- WCRP Global Sea Level Budget Group (2018). "Global sea-level budget 1993–present". Earth System Science Data. 10 (3): 1551–1590. Bibcode: 2018ESSD...10.1551W. doi: 10.5194/essd-10-1551-2018. ISSN 1866-3508.
- Weart, Spencer (2013). "Rise of interdisciplinary research on climate". Proceedings of the National Academy of Sciences. 110 (Supplement 1): 3657–3664. doi: 10.1073/pnas.1107482109. PMC 3586608. PMID 22778431.
- Wild, M.; Gilgen, Hans; Roesch, Andreas; Ohmura, Atsumu; et al. (2005). "From Dimming to Brightening: Decadal Changes in Solar Radiation at Earth's Surface". Science. 308 (5723): 847–850. Bibcode: 2005Sci...308..847W. doi: 10.1126/science.1103215. PMID 15879214. S2CID 13124021.
- Williams, Richard G; Ceppi, Paulo; Katavouta, Anna (2020). "Controls of the transient climate response to emissions by physical feedbacks, heat uptake and carbon cycling". Environmental Research Letters. 15 (9): 0940c1. Bibcode: 2020ERL....15i40c1W. doi: 10.1088/1748-9326/ab97c9.
- Wolff, Eric W.; Shepherd, John G.; Shuckburgh, Emily; Watson, Andrew J. (2015). "Feedbacks on climate in the Earth system: introduction". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 373 (2054): 20140428. Bibcode: 2015RSPTA.37340428W. doi: 10.1098/rsta.2014.0428. PMC 4608041. PMID 26438277.
- Zeng, Ning; Yoon, Jinho (2009). "Expansion of the world's deserts due to vegetation-albedo feedback under global warming". Geophysical Research Letters. 36 (17): L17401. Bibcode: 2009GeoRL..3617401Z. doi: 10.1029/2009GL039699. ISSN 1944-8007. S2CID 1708267.
- Zhang, Jinlun; Lindsay, Ron; Steele, Mike; Schweiger, Axel (2008). "What drove the dramatic arctic sea ice retreat during summer 2007?". Geophysical Research Letters. 35: 1–5. Bibcode: 2008GeoRL..3511505Z. doi: 10.1029/2008gl034005. S2CID 9387303.
- Zhao, C.; Liu, B.; et al. (2017). "Temperature increase reduces global yields of major crops in four independent estimates". Proceedings of the National Academy of Sciences. 114 (35): 9326–9331. doi: 10.1073/pnas.1701762114. PMC 5584412. PMID 28811375.
Books, reports and legal documents
- Adams, B.; Luchsinger, G. (2009). Climate Justice for a Changing Planet: A Primer for Policy Makers and NGOs (PDF). UN Non-Governmental Liaison Service (NGLS). ISBN 978-92-1-101208-8.
- Archer, David; Pierrehumbert, Raymond (2013). The Warming Papers: The Scientific Foundation for the Climate Change Forecast. John Wiley & Sons. ISBN 978-1-118-68733-8.
- Climate Focus (December 2015). "The Paris Agreement: Summary. Climate Focus Client Brief on the Paris Agreement III" (PDF). Archived (PDF) from the original on 5 October 2018. Retrieved 12 April 2019.
- Clark, P. U.; Weaver, A. J.; Brook, E.; Cook, E. R.; et al. (December 2008). "Executive Summary". In: Abrupt Climate Change. A Report by the U.S. Climate Change Science Program and the Subcommittee on Global Change Research. Reston, VA: U.S. Geological Survey. Archived from the original on 4 May 2013.
- Climate Action Tracker (2019). Warming projections global update, December 2019 (PDF) (Report). New Climate Institute.
- Conceição; et al. (2020). Human Development Report 2020 The Next Frontier: Human Development and the Anthropocene (PDF) (Report). United Nations Development Programme. Retrieved 9 January 2021.
- DeFries, Ruth; Edenhofer, Ottmar; Halliday, Alex; Heal, Geoffrey; et al. (September 2019). The missing economic risks in assessments of climate change impacts (PDF) (Report). Grantham Research Institute on Climate Change and the Environment, London School of Economics and Political Science.
- Dessai, Suraje (2001). "The climate regime from The Hague to Marrakech: Saving or sinking the Kyoto Protocol?" (PDF). Tyndall Centre Working Paper 12. Tyndall Centre. Archived from the original (PDF) on 10 June 2012. Retrieved 5 May 2010.
- Dunlap, Riley E.; McCright, Aaron M. (2011). "Chapter 10: Organized climate change denial". In Dryzek, John S.; Norgaard, Richard B.; Schlosberg, David (eds.). The Oxford Handbook of Climate Change and Society. Oxford University Press. pp. 144–160. ISBN 978-0199566600.
- Dunlap, Riley E.; McCright, Aaron M. (2015). "Chapter 10: Challenging Climate Change: The Denial Countermovement". In Dunlap, Riley E.; Brulle, Robert J. (eds.). Climate Change and Society: Sociological Perspectives. Oxford University Press. pp. 300–332. ISBN 978-0199356119.
- European Commission (28 November 2018). In-depth analysis accompanying the Commission Communication COM(2018) 773: A Clean Planet for all – A European strategic long-term vision for a prosperous, modern, competitive and climate neutral economy (PDF) (Report). Brussels. p. 188.
- Flavell, Alex (2014). IOM outlook on migration, environment and climate change (PDF) (Report). Geneva, Switzerland: International Organization for Migration (IOM). ISBN 978-92-9068-703-0. OCLC 913058074.
- Fleming, James Rodger (2007). The Callendar Effect: the life and work of Guy Stewart Callendar (1898–1964). Boston: American Meteorological Society. ISBN 978-1-878220-76-9.
- Fletcher, Charles (2019). Climate change : what the science tells us. Hoboken, NJ: John Wiley & Sons, Inc. ISBN 978-1-118-79306-0. OCLC 1048028378.
- Academia Brasileira de Ciéncias (Brazil); Royal Society of Canada; Chinese Academy of Sciences; Académie des Sciences (France); Deutsche Akademie der Naturforscher Leopoldina (Germany); Indian National Science Academy; Accademia Nazionale dei Lincei (Italy); Science Council of Japan, Academia Mexicana de Ciencias; Russian Academy of Sciences; Academy of Science of South Africa; Royal Society (United Kingdom); National Academy of Sciences (United States of America) (May 2009). "G8+5 Academies' joint statement: Climate change and the transformation of energy technologies for a low carbon future" (PDF). The National Academies of Sciences, Engineering, and Medicine. Archived (PDF) from the original on 15 February 2010. Retrieved 5 May 2010.
- Global Methane Initiative (2020). Global Methane Emissions and Mitigation Opportunities (PDF) (Report). Global Methane Initiative.
- Haywood, Jim (2016). "Chapter 27 – Atmospheric Aerosols and Their Role in Climate Change". In Letcher, Trevor M. (ed.). Climate Change: Observed Impacts on Planet Earth. Elsevier. ISBN 978-0-444-63524-2.
- IEA (November 2020). Renewables 2020 Analysis and forecast to 2025 (Report). Retrieved 27 April 2021.
- IEA (December 2020). "Covid-19 and energy efficiency". Energy Efficiency 2020 (Report). Paris, France. Retrieved 6 April 2021.
- Bridle, Richard; Sharma, Shruti; Mostafa, Mostafa; Geddes, Anna (June 2019). Fossil Fuel to Clean Energy Subsidy Swaps (PDF) (Report).
- Krogstrup, Signe; Oman, William (4 September 2019). Macroeconomic and Financial Policies for Climate Change Mitigation: A Review of the Literature (PDF). IMF working papers. doi: 10.5089/9781513511955.001. ISBN 978-1-5135-1195-5. ISSN 1018-5941. S2CID 203245445.
- Meinshausen, Malte (2019). "Implications of the Developed Scenarios for Climate Change". In Teske, Sven (ed.). Achieving the Paris Climate Agreement Goals. Achieving the Paris Climate Agreement Goals: Global and Regional 100% Renewable Energy Scenarios with Non-energy GHG Pathways for +1.5 °C and +2 °C. Springer International Publishing. pp. 459–469. doi: 10.1007/978-3-030-05843-2_12. ISBN 978-3-030-05843-2.
- Millar, Neville; Doll, Julie; Robertson, G. (November 2014). Management of nitrogen fertilizer to reduce nitrous oxide (N2O) emissions from field crops (PDF) (Report). Michigan State University.
- Miller, J.; Du, L.; Kodjak, D. (2017). Impacts of World-Class Vehicle Efficiency and Emissions Regulations in Select G20 Countries (PDF) (Report). Washington, D.C.: The International Council on Clean Transportation.
- Müller, Benito (February 2010). Copenhagen 2009: Failure or final wake-up call for our leaders? EV 49 (PDF). Oxford Institute for Energy Studies. p. i. ISBN 978-1-907555-04-6. Archived (PDF) from the original on 10 July 2017. Retrieved 18 May 2010.
- National Research Council (2008). Understanding and responding to climate change: Highlights of National Academies Reports, 2008 edition, produced by the US National Research Council (US NRC) (Report). Washington, D.C.: National Academy of Sciences. Archived from the original on 4 March 2016. Retrieved 14 January 2016.
- National Research Council (2012). Climate Change: Evidence, Impacts, and Choices (PDF) (Report). Archived (PDF) from the original on 20 February 2013. Retrieved 9 September 2017.
- Newell, Peter (14 December 2006). Climate for Change: Non-State Actors and the Global Politics of the Greenhouse. Cambridge University Press. ISBN 978-0-521-02123-4. Retrieved 30 July 2018.
- NOAA. "January 2017 analysis from NOAA: Global and Regional Sea Level Rise Scenarios for the United States" (PDF). Archived (PDF) from the original on 18 December 2017. Retrieved 7 February 2019.
- NRC (2008). "Understanding and Responding to Climate Change" (PDF). Board on Atmospheric Sciences and Climate, US National Academy of Sciences. Archived (PDF) from the original on 11 October 2017. Retrieved 9 November 2010.
- Olivier, J. G. J.; Peters, J. A. H. W. (2019).
Trends in global CO
2 and total greenhouse gas emissions (PDF). The Hague: PBL Netherlands Environmental Assessment Agency.
- Oreskes, Naomi (2007). "The scientific consensus on climate change: How do we know we're not wrong?". In DiMento, Joseph F. C.; Doughman, Pamela M. (eds.). Climate Change: What It Means for Us, Our Children, and Our Grandchildren. The MIT Press. ISBN 978-0-262-54193-0.
- Oreskes, Naomi; Conway, Erik (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming (first ed.). Bloomsbury Press. ISBN 978-1-59691-610-4.
- REN21 (2020). Renewables 2020 Global Status Report (PDF). Paris: REN21 Secretariat. ISBN 978-3-948393-00-7.
- Royal Society (13 April 2005). Economic Affairs – Written Evidence. The Economics of Climate Change, the Second Report of the 2005–2006 session, produced by the UK Parliament House of Lords Economics Affairs Select Committee. UK Parliament. Archived from the original on 13 November 2011. Retrieved 9 July 2011.
- Setzer, Joana; Byrnes, Rebecca (July 2019). Global trends in climate change litigation: 2019 snapshot (PDF). London: the Grantham Research Institute on Climate Change and the Environment and the Centre for Climate Change Economics and Policy.
- Steinberg, D.; Bielen, D.; et al. (July 2017). Electrification & Decarbonization: Exploring U.S. Energy Use and Greenhouse Gas Emissions in Scenarios with Widespread Electrification and Power Sector Decarbonization (PDF) (Report). Golden, Colorado: National Renewable Energy Laboratory.
- Teske, Sven, ed. (2019). "Executive Summary" (PDF). Achieving the Paris Climate Agreement Goals: Global and Regional 100% Renewable Energy Scenarios with Non-energy GHG Pathways for +1.5 °C and +2 °C. Springer International Publishing. pp. xiii–xxxv. doi: 10.1007/978-3-030-05843-2. ISBN 978-3-030-05843-2.
- Teske, Sven; Nagrath, Kriti; Morris, Tom; Dooley, Kate (2019). "Renewable Energy Resource Assessment". In Teske, Sven (ed.). Achieving the Paris Climate Agreement Goals. Achieving the Paris Climate Agreement Goals: Global and Regional 100% Renewable Energy Scenarios with Non-energy GHG Pathways for +1.5 °C and +2 °C. Springer International Publishing. pp. 161–173. doi: 10.1007/978-3-030-05843-2_7. hdl: 10453/139583. ISBN 978-3-030-05843-2.
- Teske, Sven (2019). "Trajectories for a Just Transition of the Fossil Fuel Industry". In Teske, Sven (ed.). Achieving the Paris Climate Agreement Goals. Achieving the Paris Climate Agreement Goals: Global and Regional 100% Renewable Energy Scenarios with Non-energy GHG Pathways for +1.5 °C and +2 °C. Springer International Publishing. pp. 403–411. doi: 10.1007/978-3-030-05843-2_9. hdl: 10453/139584. ISBN 978-3-030-05843-2.
- UN FAO (2016). Global Forest Resources Assessment 2015. How are the world's forests changing? (PDF) (Report). Food and Agriculture Organization of the United Nations. ISBN 978-92-5-109283-5. Retrieved 1 December 2019.
- United Nations Environment Programme (2019). Emissions Gap Report 2019 (PDF). Nairobi. ISBN 978-92-807-3766-0.
- UNEP (2018). The Adaptation Gap Report 2018. Nairobi, Kenya: United Nations Environment Programme (UNEP). ISBN 978-92-807-3728-8.
- UNFCCC (1992). United Nations Framework Convention on Climate Change (PDF).
- UNFCCC (1997). "Kyoto Protocol to the United Nations Framework Convention on Climate Change". United Nations.
- UNFCCC (30 March 2010). "Decision 2/CP.15: Copenhagen Accord". Report of the Conference of the Parties on its fifteenth session, held in Copenhagen from 7 to 19 December 2009. United Nations Framework Convention on Climate Change. FCCC/CP/2009/11/Add.1. Archived from the original on 30 April 2010. Retrieved 17 May 2010.
- UNFCCC (2015). "Paris Agreement" (PDF). United Nations Framework Convention on Climate Change.
- UNFCCC (26 February 2021). Nationally determined contributions under the Paris Agreement Synthesis report by the secretariat (PDF) (Report). United Nations Framework Convention on Climate Change.
- Park, Susin (May 2011). "Climate Change and the Risk of Statelessness: The Situation of Low-lying Island States" (PDF). United Nations High Commissioner for Refugees. Archived (PDF) from the original on 2 May 2013. Retrieved 13 April 2012.
- United States Environmental Protection Agency (2016). Methane and Black Carbon Impacts on the Arctic: Communicating the Science (Report). Archived from the original on 6 September 2017. Retrieved 27 February 2019.
- Van Oldenborgh, Geert-Jan; Philip, Sjoukje; Kew, Sarah; Vautard, Robert; et al. (2019). "Human contribution to the record-breaking June 2019 heat wave in France". Semantic Scholar. S2CID 199454488.
- State and Trends of Carbon Pricing 2019 (PDF) (Report). Washington, D.C.: World Bank. June 2019. doi: 10.1596/978-1-4648-1435-8. hdl: 10986/29687.
- World Health Organization (2014). Quantitative risk assessment of the effects of climate change on selected causes of death, 2030s and 2050s (PDF) (Report). Geneva, Switzerland. ISBN 978-92-4-150769-1.
- World Health Organization (2016). Ambient air pollution: a global assessment of exposure and burden of disease (Report). Geneva, Switzerland. ISBN 978-92-4-1511353.
- World Health Organization (2018). COP24 Special Report Health and Climate Change (PDF). Geneva. ISBN 978-92-4-151497-2.
- World Meteorological Organization (2019). WMO Statement on the State of the Global Climate in 2018. WMO-No. 1233. Geneva. ISBN 978-92-63-11233-0.
- World Meteorological Organization (2020). WMO Statement on the State of the Global Climate in 2019. WMO-No. 1248. Geneva. ISBN 978-92-63-11248-4.
- Hallegatte, Stephane; Bangalore, Mook; Bonzanigo, Laura; Fay, Marianne; et al. (2016). Shock Waves : Managing the Impacts of Climate Change on Poverty. Climate Change and Development (PDF). Washington, D.C.: World Bank. doi: 10.1596/978-1-4648-0673-5. hdl: 10986/22787. ISBN 978-1-4648-0674-2.
- World Resources Institute (December 2019). Creating a Sustainable Food Future: A Menu of Solutions to Feed Nearly 10 Billion People by 2050 (PDF). Washington, D.C. ISBN 978-1-56973-953-2.
American Institute of Physics
- Weart, Spencer (October 2008). The Discovery of Global Warming (2nd ed.). Cambridge, MA: Harvard University Press. ISBN 978-0-67403-189-0. Archived from the original on 18 November 2016. Retrieved 16 June 2020.
Weart, Spencer (February 2019).
The Discovery of Global Warming (online ed.).
Archived from the original on 18 June 2020. Retrieved 19 June 2020.
- Weart, Spencer (January 2020). "The Carbon Dioxide Greenhouse Effect". The Discovery of Global Warming. American Institute of Physics. Archived from the original on 11 November 2016. Retrieved 19 June 2020.
- Weart, Spencer (January 2020).
"The Public and Climate Change". The Discovery of Global Warming. American Institute of Physics.
Archived from the original on 11 November 2016. Retrieved 19 June 2020.
- Weart, Spencer (January 2020). "The Public and Climate Change: Suspicions of a Human-Caused Greenhouse (1956–1969)". The Discovery of Global Warming. American Institute of Physics. Archived from the original on 11 November 2016. Retrieved 19 June 2020.
- Weart, Spencer (January 2020).
"The Public and Climate Change (cont. – since 1980)". The Discovery of Global warming. American Institute of Physics.
Archived from the original on 11 November 2016. Retrieved 19 June 2020.
- Weart, Spencer (January 2020). "The Public and Climate Change: The Summer of 1988". The Discovery of Global Warming. American Institute of Physics. Archived from the original on 11 November 2016. Retrieved 19 June 2020.
- Colford, Paul (22 September 2015). "An addition to AP Stylebook entry on global warming". AP Style Blog. Retrieved 6 November 2019.
- Amos, Jonathan (10 May 2013). "Carbon dioxide passes symbolic mark". BBC. Archived from the original on 29 May 2013. Retrieved 27 May 2013.
- "UK Parliament declares climate change emergency". BBC. 1 May 2019. Retrieved 30 June 2019.
- Rigby, Sara (3 February 2020). "Climate change: should we change the terminology?". BBC Science Focus Magazine. Retrieved 24 March 2020.
- Bulletin of the Atomic Scientists
- Yeo, Sophie (4 January 2017). "Clean energy: The challenge of achieving a 'just transition' for workers". Carbon Brief. Retrieved 18 May 2020.
- McSweeney, Robert M.; Hausfather, Zeke (15 January 2018). "Q&A: How do climate models work?". Carbon Brief. Archived from the original on 5 March 2019. Retrieved 2 March 2019.
- Hausfather, Zeke (19 April 2018). "Explainer: How 'Shared Socioeconomic Pathways' explore future climate change". Carbon Brief. Retrieved 20 July 2019.
- Hausfather, Zeke (8 October 2018). "Analysis: Why the IPCC 1.5C report expanded the carbon budget". Carbon Brief. Retrieved 28 July 2020.
- Dunne, Daisy; Gabbatiss, Josh; Mcsweeny, Robert (7 January 2020). "Media reaction: Australia's bushfires and climate change". Carbon Brief. Retrieved 11 January 2020.
- Ruiz, Irene Banos (22 June 2019). "Climate Action: Can We Change the Climate From the Grassroots Up?". Ecowatch. Deutsche Welle. Archived from the original on 23 June 2019. Retrieved 23 June 2019.
- "Myths vs. Facts: Denial of Petitions for Reconsideration of the Endangerment and Cause or Contribute Findings for Greenhouse Gases under Section 202(a) of the Clean Air Act". U.S. Environmental Protection Agency. 25 August 2016. Retrieved 7 August 2017.
- US EPA (13 September 2019). "Global Greenhouse Gas Emissions Data". Archived from the original on 17 February 2020. Retrieved 8 August 2020.
- US EPA (15 September 2020). "Overview of Greenhouse Gases". Retrieved 15 September 2020.
- Ciucci, M. (February 2020). "Renewable Energy". European Parliament. Retrieved 3 June 2020.
- Nuccitelli, Dana (26 January 2015). "Climate change could impact the poor much more than previously thought". The Guardian. Archived from the original on 28 December 2016.
- Carrington, Damian (19 March 2019). "School climate strikes: 1.4 million people took part, say campaigners". The Guardian. Archived from the original on 20 March 2019. Retrieved 12 April 2019.
- Carrington, Damian (17 May 2019). "Why the Guardian is changing the language it uses about the environment". The Guardian. Retrieved 20 May 2019.
- Rankin, Jennifer (28 November 2019). "'Our house is on fire': EU parliament declares climate emergency". The Guardian. ISSN 0261-3077. Retrieved 28 November 2019.Too risky
- Watts, Jonathan (19 February 2020). "Oil and gas firms 'have had far worse climate impact than thought'". The Guardian.
- Carrington, Damian (6 April 2020). "New renewable energy capacity hit record levels in 2019". The Guardian. Retrieved 25 May 2020.
- McCurry, Justin (28 October 2020). "South Korea vows to go carbon neutral by 2050 to fight climate emergency". The Guardian. Retrieved 6 December 2020.
- "Arctic amplification". NASA. 2013. Archived from the original on 31 July 2018.
- Carlowicz, Michael (12 September 2018). "Watery heatwave cooks the Gulf of Maine". NASA's Earth Observatory.
- Conway, Erik M. (5 December 2008). "What's in a Name? Global Warming vs. Climate Change". NASA. Archived from the original on 9 August 2010.
- "Responding to Climate Change". NASA. 21 December 2020. Archived from the original on 4 January 2021.
- Riebeek, H. (16 June 2011). "The Carbon Cycle: Feature Articles: Effects of Changing the Carbon Cycle". Earth Observatory, part of the EOS Project Science Office located at NASA Goddard Space Flight Center. Archived from the original on 6 February 2013. Retrieved 4 February 2013.
- "Scientific Consensus: Earth's Climate is Warming". NASA. 21 December 2020. Archived from the original on 4 January 2021.
- Shaftel, Holly (January 2016). "What's in a name? Weather, global warming and climate change". NASA Climate Change: Vital Signs of the Planet. Archived from the original on 28 September 2018. Retrieved 12 October 2018.
- Shaftel, Holly; Jackson, Randal; Callery, Susan; Bailey, Daniel, eds. (7 July 2020). "Overview: Weather, Global Warming and Climate Change". Climate Change: Vital Signs of the Planet. Retrieved 14 July 2020.
National Conference of State Legislators
- "State Renewable Portfolio Standards and Goals". National Conference of State Legislators. 17 April 2020. Retrieved 3 June 2020.
- Welch, Craig (13 August 2019). "Arctic permafrost is thawing fast. That affects us all". National Geographic. Retrieved 25 August 2019.
National Science Digital Library
- Fleming, James R. (17 March 2008). "Climate Change and Anthropogenic Greenhouse Warming: A Selection of Key Articles, 1824–1995, with Interpretive Essays". National Science Digital Library Project Archive PALE:ClassicArticles. Retrieved 7 October 2019.
Natural Resources Defense Council
- "What Is the Clean Power Plan?". Natural Resources Defense Council. 29 September 2017. Retrieved 3 August 2020.
The New York Times
- Rudd, Kevin (25 May 2015). "Paris Can't Be Another Copenhagen". The New York Times. Archived from the original on 3 February 2018. Retrieved 26 May 2015.
- Fandos, Nicholas (29 April 2017). "Climate March Draws Thousands of Protesters Alarmed by Trump's Environmental Agenda". The New York Times. ISSN 0362-4331. Archived from the original on 12 April 2019. Retrieved 12 April 2019.
- NOAA (10 July 2011). "Polar Opposites: the Arctic and Antarctic". Archived from the original on 22 February 2019. Retrieved 20 February 2019.
- NOAA (17 June 2015). "What's the difference between global warming and climate change?". Archived from the original on 1 January 2021. Retrieved 9 January 2021.
- Huddleston, Amara (17 July 2019). "Happy 200th birthday to Eunice Foote, hidden climate science pioneer". NOAA Climate.gov. Retrieved 8 October 2019.
Our World in Data
- Ritchie, Hannah; Roser, Max (15 January 2018). "Land Use". Our World in Data. Retrieved 1 December 2019.
- Ritchie, Hannah (2019). "Renewable Energy". Our World in Data. Retrieved 31 July 2020.
- Ritchie, Hannah (18 September 2020). "Sector by sector: where do global greenhouse gas emissions come from?". Our World in Data. Retrieved 28 October 2020.
Pew Research Center
- Pew Research Center (5 November 2015). Global Concern about Climate Change, Broad Support for Limiting Emissions (Report). Archived from the original on 29 July 2017. Retrieved 7 August 2017.
- Fagan, Moira; Huang, Christine (18 April 2019). "A look at how people around the world view climate change". Pew Research Center. Retrieved 19 December 2020.
- Tyson, Dj (3 October 2018). "This is What Climate Change Looks Like in Alaska – Right Now". Pacific Environment. Retrieved 3 June 2020.
- Tamma, Paola; Schaart, Eline; Gurzu, Anca (11 December 2019). "Europe's Green Deal plan unveiled". Politico. Retrieved 29 December 2019.
- Leopold, Evelyn (25 September 2019). "How leaders planned to avert climate catastrophe at the UN (while Trump hung out in the basement)". Salon. Retrieved 20 November 2019.
- Gleick, Peter (7 January 2017). "Statements on Climate Change from Major Scientific Academies, Societies, and Associations (January 2017 update)". ScienceBlogs. Retrieved 2 April 2020.
- Scientific American
- Wing, Scott L. (29 June 2016). "Studying the Climate of the Past Is Essential for Preparing for Today's Rapidly Changing Climate". Smithsonian. Retrieved 8 November 2019.
- The Sustainability Consortium
- "One-Fourth of Global Forest Loss Permanent: Deforestation Is Not Slowing Down". The Sustainability Consortium. 13 September 2018. Retrieved 1 December 2019.
- UN Environment
- "Curbing environmentally unsafe, irregular and disorderly migration". UN Environment. 25 October 2018. Archived from the original on 18 April 2019. Retrieved 18 April 2019.
- "What are United Nations Climate Change Conferences?". UNFCCC. Archived from the original on 12 May 2019. Retrieved 12 May 2019.
- "What is the United Nations Framework Convention on Climate Change?". UNFCCC.
Union of Concerned Scientists
- "Carbon Pricing 101". Union of Concerned Scientists. 8 January 2017. Retrieved 15 May 2020.
- Rice, Doyle (21 November 2019). "'Climate emergency' is Oxford Dictionary's word of the year". USA Today. Retrieved 3 December 2019.
- Segalov, Michael (2 May 2019). "The UK Has Declared a Climate Emergency: What Now?". Vice. Retrieved 30 June 2019.
- Calma, Justine (27 December 2019). "2019 was the year of 'climate emergency' declarations". The Verge. Retrieved 28 March 2020.
- Roberts, D. (20 September 2019). "Getting to 100% renewables requires cheap energy storage. But how cheap?". Vox. Retrieved 28 May 2020.
- World Health Organization
- "WHO calls for urgent action to protect health from climate change – Sign the call". World Health Organization. November 2015. Archived from the original on 3 January 2021. Retrieved 2 September 2020.
World Resources Institute
- Butler, Rhett A. (31 March 2021). "Global forest loss increases in 2020". Mongabay. Archived from the original on 1 April 2021. ● Mongabay graphing WRI data from "Forest Loss / How much tree cover is lost globally each year?". research.WRI.org. World Resources Institute — Global Forest Review. January 2021. Archived from the original on 10 March 2021.
- Levin, Kelly (8 August 2019). "How Effective Is Land At Removing Carbon Pollution? The IPCC Weighs In". World Resources institute. Retrieved 15 May 2020.
- Seymour, Frances; Gibbs, David (8 December 2019). "Forests in the IPCC Special Report on Land Use: 7 Things to Know". World Resources Institute.
Yale Climate Connections
- Peach, Sara (2 November 2010). "Yale Researcher Anthony Leiserowitz on Studying, Communicating with American Public". Yale Climate Connections. Archived from the original on 7 February 2019. Retrieved 30 July 2018.
| https://earthspot.org/geo/?search=Climate_change | 21
19 |
Making Economic Decisions: Economic decision making is pretty simple because it only involves a few terms and rules. In fact, you probably already think about many problems in the same way that economists do.
Trade-Offs: Scarcity forces people to make choices about how they will use their resources. Most economic decisions are made with common sense and careful analysis. In economic choices, people exchange one good or service for another. The trade-off is the alternative you face if you decide to do one thing rather than another. Example: if you decide to buy $100 jeans, your trade-off is whatever else you could have done with that $100.
Opportunity Cost: The cost of the next best use of your time and money when you choose to do one thing over another. It includes more than just money; it also takes into account all the possible discomforts and inconveniences linked to the choice made. The opportunity cost of any action is the value of what is given up because the choice was made. The opportunity cost comes from the next highest ranked alternative, not from all alternatives combined.
For example, suppose Congress votes to spend $2 billion on projects to clean up polluted rivers. The opportunity cost of the vote is the next best alternative use of those same tax dollars. Congress could have used the money to increase funding for space research. In this example, the opportunity cost of cleaning up polluted rivers is less funding for the space program. Being aware of trade-offs and opportunity costs is important in making economic decisions because it helps you make wiser use of your own resources.
Other Measures of Cost: Fixed costs are expenses that stay the same no matter how many units of a good are produced. Examples: mortgage payments and property taxes. Variable costs are expenses that change with the number of units of a good produced. Examples: wages and materials; these expenses increase as production increases and decrease as production decreases.
Total cost is the sum of the fixed and variable costs. Many businesses focus on the average total cost; to arrive at it, divide the total cost by the quantity produced. Marginal cost is the extra cost of producing one additional unit of output. Example: if it costs $2,000 to produce 50 items and $2,050 to produce 51 items, the marginal cost of the 51st item is $50.
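These cost measures are simple arithmetic, so a minimal Python sketch may help; the figures reuse the $2,000/50-item example above, and the function names are illustrative, not from the presentation.

```python
# Illustrative sketch of the cost measures above; figures from the slide's example.
def average_total_cost(total_cost: float, quantity: int) -> float:
    """Average total cost = total cost / quantity produced."""
    return total_cost / quantity

def marginal_cost(total_cost_at_q: float, total_cost_at_q_plus_1: float) -> float:
    """Marginal cost = extra cost of producing one additional unit."""
    return total_cost_at_q_plus_1 - total_cost_at_q

cost_of_50 = 2000.0  # total cost of producing 50 items
cost_of_51 = 2050.0  # total cost of producing 51 items

print(average_total_cost(cost_of_50, 50))     # 40.0 dollars per item
print(marginal_cost(cost_of_50, cost_of_51))  # 50.0 dollars for the 51st item
```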
Measures of Revenue: Businesses use two key measures to decide what output will produce the greatest profit. Total revenue is the number of units sold multiplied by the average price per unit. Example: 50 units sold at $40 each = $2,000 total revenue. Marginal revenue is the change in total revenue from selling one more unit of output. Marginal benefit is the additional or extra benefit associated with an action.
Cost-Benefit Analysis: A model created by economists to compare the marginal costs and marginal benefits of a decision. Rational economic decision making tells us to choose an action when the benefits are greater than the costs. If the costs outweigh the benefits, the chosen option should be rejected. Example: if you produce something that costs $10 and you cannot sell the item for at least $10, then there is no benefit.
Using Cost-Benefit Analysis: Look at the graph on page 413 of your online textbook. Suppose you are a farmer trying to decide how much of your 25 acres to plant. Assume the marginal costs of planting and harvesting are the same for all 25 acres (the horizontal line on the graph). Let's also assume that some of the land is better than the rest. As a result, the size of the harvest you can expect from each additional acre goes down as the number of acres planted increases, because you plant the most fertile land first; as more is planted, less fertile land must be used. The downward-sloping line represents the diminishing marginal benefit.
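A minimal Python sketch of this marginal decision rule follows; the per-acre numbers are assumed for illustration (a constant marginal cost of $10 per acre and a marginal benefit that falls by $1 with each additional acre), not read from the textbook graph.

```python
# Assumed illustrative numbers: constant marginal cost, declining marginal benefit.
MARGINAL_COST = 10.0                       # the same for every acre (horizontal line)

def marginal_benefit(acre: int) -> float:
    """Falls as progressively less fertile land is planted."""
    return 25.0 - acre                     # acre 1 -> 24, acre 2 -> 23, ...

acres_to_plant = 0
for acre in range(1, 26):                  # the farmer has 25 acres available
    if marginal_benefit(acre) >= MARGINAL_COST:
        acres_to_plant = acre              # keep planting while MB >= MC
    else:
        break

print(acres_to_plant)                      # 15 with these assumed numbers
```

With these assumed numbers the rule stops at 15 acres, which matches the reading of the graph described next.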
The graph makes it easy to see how much land you should plant. Clearly, you should plant the first 5 acres, because the marginal cost is low compared with the benefits to be gained. It would make sense to plant up to 15 acres, because up to that point the marginal benefit is greater than the marginal cost. You would not want to plant more than 15 acres, because beyond that the extra cost is greater than the extra benefit. | https://slideplayer.com/slide/7072905/ | 21
17 | Sleep is a behavior steeped in mystery, yet it appears to offer essential benefits (Rattenborg et al., 2007; Cirelli & Tononi, 2008). Sleep may specifically assist with honey bee communication (Klein et al., 2010, 2018) and memory (Zwaka et al., 2015), so accurately identifying sleep and knowing when and where it occurs is essential for further investigating sleep’s role in honey bee ecology. To better understand sleep’s benefits, or the detriments that come with sleep loss, it is essential to monitor sleep, including when it occurs in dark, hidden places. Several species of honey bees (Apis spp.) nest inside cavities, and all species of honey bees spend periods of their lives concealed inside honeycomb cells, within which much of the colony’s behaviors occur and without which the colony would inevitably perish. Honeycomb is where honey bees store honey and pollen, rear brood, and where some appear to sleep. Young adult workers (callows/cell cleaners) appear to sleep almost exclusively inside cells (Klein et al., 2008, 2014), but workers spend less and less time sleeping inside cells as they age and change tasks, with foragers spending none of their time asleep inside cells (Kaiser, 1988; Klein et al., 2008). If significant periods of sleep occur within honeycomb cells, it would be wise to take inside-cell behavior into account.
Looking for sleep within the dark confines of a honey bee nest requires close examination or monitoring of bees, and modifying a nest to expose the inner workings of a colony has a long and curious history (Crane, 1983), including displaying comb by adding transparent glass jars to hives (Kritsky, 2010), and designing research-friendly observation hives (Von Frisch, 1967; Seeley, 1995). Observation hives increase visibility by encapsulating frames of honeycomb between panes of glass, and this innovation has led to revolutionary discoveries in animal behavior (Von Frisch, 1967; Seeley, 2019). Lindauer (1952) cleverly and patiently recorded the behaviors of individual honey bee workers in a modified observation hive that allowed him to observe activities within a subset of honeycomb cells. Lindauer made a point of recording when a bee was “Müssig” (idle), and concluded that time spent exhibiting this behavior, a portion of which occurred inside cells, far outweighed the time spent performing other tasks (e.g., bee #107: 68 h 53 min out of the total 176 h 45 min observed). He referred to bees seeking out undisturbed resting places, like empty or egg-containing cells, and spending long, calm periods inside these cells. Lindauer used an icon of a couch or bed to symbolize this behavior, despite including cleaning movements and other somewhat superficially immobile states in his calculations. We now know that a portion of this relatively immobile time in cells is devoted to heating brood in adjacent cells (Kleinhenz et al., 2003), but to what extent the remaining time is spent sleeping has never been rigorously established.
Few studies following Lindauer’s observations have addressed immobility or potentially sleep-like states in cells. These include reports of honey bees exhibiting a resting state (“Ruhezustand”, Sakagami, 1953), a “motionless” state (Moore et al., 1998), “rest” (Kaiser, 1988), or ventilatory signs indicative of sleep (Sauer, Menna-Barreto & Kaiser, 1998; Kleinhenz et al., 2003). Sleep is typically defined by several behavioral criteria, most important of which is an increased threshold of responsivity to a stimulus. An animal with an increased response threshold exhibits a specific posture during states of relative immobility that are reversible (Flanigan, Wilcox & Rechtschaffen, 1973). This suite of sleep signs is internally controlled (Tobler, 1985), meaning that if deprived of the state, the organism will respond with an increased expression of the behavior. Kaiser (1988) and Sauer, Herrmann & Kaiser (2004) confirmed that these coincident behavioral traits exist in honey bees.
Not all sleep signs can be observed simultaneously, so a dependable indicator of sleep would be of great value. Antennal immobility is a feature that has been used as a proxy for sleep (Eban-Rothschild & Bloch, 2008; Hussaini et al., 2009; Zwaka et al., 2015; Vázquez et al., 2020) because the amount of antennal immobility per unit time correlates with an increased response threshold (Kaiser, 1988). Another feature that covaries with antennal immobility (Sauer et al., 2003) and could, therefore, be an alternative proxy for sleep, is discontinuous ventilation. The honey bee’s metasoma (hereafter referred to as “abdomen”) moves in anterior-posterior pumping motions (pulses) at various rates and degrees of continuity. Easily observed extremes include “continuous” and “discontinuous” ventilation, in which the interim between anterior-posterior abdominal motions is consistently brief (continuous) or occasionally broken by extended pauses of at least 10 s (discontinuous; Kleinhenz et al., 2003). Honey bees exhibit continuous and discontinuous ventilation inside cells (Kleinhenz et al., 2003) and outside cells (Kaiser, 1988), suggesting higher and lower rates of respiration (Bailey, 1954). A discontinuously ventilating honey bee appears to almost always have a higher response threshold, a hallmark of sleep (B. Klein, 2020, in preparation). Because it can be difficult or impossible to gauge antennal movement in a nest, especially if a bee is inserted in a honeycomb cell, ventilatory activity holds promise as a more suitable indicator of sleep under natural or close-to-natural conditions. It is worth noting that antennal immobility may even be a misleading indicator of sleep because brood-incubating (heater) bees, which may appear to be asleep on the comb surface because of an absence of large body movements, also exhibit slow to no antennal movement (Bujok et al., 2002).
Kleinhenz et al. (2003) modified Lindauer’s (1952) hive manipulation and used ventilatory rates, in part, to distinguish resting versus heating honey bees. We adopted this approach to peer at the undisturbed activities of worker bees in comb cells to see if what can be seen outside a cell (tip of abdomen; Fig. 1A) can serve as a reliable indication of what is going on with the rest of a bee’s body inside the cell (Figs. 1B–1D). We hypothesized that ventilatory rate (continuous vs. discontinuous ventilation) and the presence/absence of larger movements of the abdomen can be used to predict behavior of bees inside cells. If correlations are robust between behavior of honey bees inside cells with behavior that is observable outside cells, someone observing only the posterior tips of honey bees when bees are inserted in honeycomb should be able to identify the bees’ behaviors, including sleep.
Materials and Methods
We set up a small colony of honey bees in an observation hive with honeycomb positioned so that the interiors of some cells were visible. We recorded bees’ behaviors inside the visible cells using an infrared-sensitive camera and a thermographic camera, first by surveying all of the visible cells, then by zooming into and recording examples of behaviors for later analysis. We classified behaviors into four categories based on body movement, ventilatory rate and surface temperatures. To test the viability of identifying behaviors based solely on viewing the portion of a bee that is visible when honeycomb is exposed in a more conventional hive, we asked naïve viewers to identify behaviors from a subset of the videos. The viewers used the same four behavioral categories, but the videos were modified so only the posteriors of the bees were visible. Their identifications, made under limited-visibility conditions, were compared to our identifications, which benefitted from careful examination of behavior visible only inside the cells, and surface temperatures visible using the thermographic camera.
Study organisms and hive
We collected one queen, two frames of honeycomb, and 800–1,000 Carniolan worker honey bees (A. mellifera carnica Pollman, 1879) from a bee yard hive, with permission from Dr. Jürgen Tautz and the University of Würzburg (Würzburg, Germany; 49°46′47″ N, 9°58′31″ E). We cut out three sections of comb to fit within a honey bee mating cage (Begattungskästchen; see Kleinhenz et al., 2003) and cleaned out most of the cells along the edges to increase our likelihood of viewing visiting workers. The interiors of 93 empty cells and 22 cells with food were visible along the edges of the hive (Fig. 2). The sections of comb included brood cells, pollen (at least 10 cells), uncapped honey and empty cells. The comb slice on the left side of the hive contained only uncapped honey and empty cells. The middle slice contained 50 capped brood on the left side (nine were one-cell deep from the empty edge cells) and 27 capped brood on the right side (two were one-cell deep). The right slice contained 40 capped brood on the left side (two were one-cell deep) and 41 capped brood on the right side (two were edge cells and five were one-cell deep, with at least six uncapped larval cells toward the back of the comb). Twenty-one hours after inserting the queen, followed by the workers, we introduced 49 uniquely paint-marked callows using nontoxic, oil-based markers (Sharpie, Oak Brook, IL, USA). Marking did not noticeably affect temperature readings in preliminary tests. The intent of introducing individually-marked callows was to increase our likelihood of observing sleep inside cells, because young adults appear to sleep more inside cells than older adults (Klein et al., 2008, 2014). The callows had been incubated at 36 °C and collected within 24 h of emergence, marked on dorsal side of mesosoma (hereafter referred to as “thorax”) and abdomen, and placed in a small cage on top of the new hive, separated from the hive by a screen. After 5 h of callows being exposed to nest odor, the screen was removed and newly marked bees were accepted without any sign of aggression. Ultimately, only a subset of the data recorded came from these introduced, marked workers (see ventilatory and thermal methods, below).
The hive allowed for unrestricted access to the outdoors for the duration of the study (20–24 August 2008, with data collected from 23–24 August) via an entrance tunnel. The hive window was replaced with a sheet of transparent polypropylene giftwrap (pbsfactory, Artikel 00347, Rheinland-Pfalz, Germany) that remained in place for the entire study to allow for thermographic recordings. The ambient temperature of the small room was maintained high enough by using a space heater that insulation was not used to cover the hive during any portion of the short study. Diet was supplemented with honey and sugar water ad libitum.
Behaviors of interest
Four categories of behaviors were recorded: sleeping, maintaining cells, eating, and heating (Table 1). Sleeping was identified by a bee’s discontinuous ventilation and otherwise relative immobility (see description, above). Maintaining cells (i.e., cleaning or building) was identified by occasional large body movements, or obvious mandibular activity while continuously ventilating (although ventilation could be difficult to assess during large body movement episodes) in a cell devoid of food. Sakagami (1953) identified cleaners as externally quiet or irregularly moving, rotating once in a while in a cell. Eating was rarely observed, but obvious when it did occur; a continuously ventilating bee extended her tongue into a cell containing liquid. Heating was identified when a bee with a relatively hot thorax was deep in a cell, continuously ventilating and otherwise immobile. We have no data for bees packing pollen and, because we had no uncapped brood in exposed cells, we have no data involving development or direct tending of brood.
Behavior | Criteria used to identify behavior when cell interior was visible | Criteria used to identify behavior when cell interior was digitally obscured (test videos) | n (bees)
Sleeping | Discontinuously ventilating, otherwise relatively immobile (Klein et al., 2008) | Discontinuously ventilating, otherwise immobile | 12
Maintaining cells | Body active in empty cell, often obscuring continuous ventilation; mandibular or antennal movement commonly observed (i.e., cleaning or building cells) | Continuously ventilating, often coupled with larger body movements (in and out, or rotating in cell; Sakagami, 1953) | 10
Eating | Tongue extended in cell containing liquid; continuously ventilating | Continuously ventilating with possible body movement, and only partially in cell (cell contents prevent bee from going deeper) | 3
Heating | Continuously ventilating, otherwise immobile while deep in cell; thorax obviously hotter than surroundings (when viewed using thermal camera) | Continuously ventilating, otherwise immobile | 12
We conducted three sets of analyses: (1) We surveyed behaviors of bees visible inside cells across multiple time points, and after zooming in with the video camera to record exemplars of the different behaviors, we (2) analyzed a subset of the surveyed bees for ventilatory rates, then (3) used the thermography to measure surface temperatures associated with a subset of the bees that had been analyzed for ventilatory rates. By restricting thermal analyses to only those bees for which we acquired ventilatory rate data, we could test whether heating bees could be identified by ventilatory rates (and relative immobility) alone.
For survey data (Dataset S1 and S2), we scanned the cells with visible interiors (n = 115) at 49 discrete time points, and recorded behavior for all bees inside cells. Surveys were separated by at least 10 min, and, because cell maintenance was so commonly observed (considered the default behavior when not explicitly announced by B.A.K.), surveys sometimes started when a behavior other than cell maintenance was detected to ensure sampling of these other behaviors. Each survey involved examining every bee inserted at least partially inside comb cells for at least three to five seconds if obviously maintaining cells (i.e., cleaning or building), or eating, and for longer (>10 s, and sometimes for several minutes) if a worker appeared to be sleeping or heating (Table 1; Fig. 3; Movies S1–S4). Surveys stopped immediately after identifications of behaviors, or after a few minutes of close-up filming for subsequent ventilatory analysis. B.A.K. identified behaviors in real time and identified individually paint-marked bees by briefly shining a tiny white light on the abdomen. Each behavioral count represented a unique bee within each survey, but some bees were undoubtedly repeatedly measured across surveys.
We collected ventilatory rate data (Dataset S3) from a subset of the surveyed bees. Sleeping and heating bees are relatively immobile (no major head, wing, leg, or body movements), except for ventilatory motions of the abdomen, described above (for more details describing relative immobility during sleep, see Klein et al., 2008). Discontinuous ventilation, identified by bouts of abdominal pulses separated by pauses of stillness exceeding 10 s, occasionally included a single, isolated, apparently spontaneous abdominal jerk during one of these pauses. We excluded a pulse (jerk) if isolated from other pulses by >5 s before and after. We recorded ventilatory rates using JWatcher, an event recorder and analytical software package designed for study of behavior (version 1.0, http://www.jwatcher.ucla.edu/). Recording events with JWatcher entails pressing keys assigned to represent behaviors of interest on a keyboard, with event times automatically recorded. M.K.B. manually pressed one key in time with a pulsing abdomen replayed at 0.3× the normal speed, to increase accuracy and consistency of data transcription. Although we cannot be certain that each behavioral recording represents a unique bee, three steps were taken to increase the likelihood: (1) 12 of the 37 bees were individually marked, and individually-marked bees were analyzed only once; (2) some of the recordings of unmarked bees captured several unmarked bees concurrently, so each was unique within those recordings; and (3) surveys were separated by at least 10 min, and sometimes by several hours.
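As an illustration of the rescaling and pulse-separation logic described here, a minimal Python sketch (not the authors' JWatcher workflow; function names and the demo values are ours) might look like this:

```python
# Sketch of the processing described above (not the authors' scripts).
# Keypress times were logged while video played at 0.3x speed, so they are
# multiplied by 0.3 to restore real time before computing separations.
def restore_real_times(times_at_slow_playback, playback_speed=0.3):
    return sorted(t * playback_speed for t in times_at_slow_playback)

def drop_isolated_jerks(times, isolation_s=5.0):
    """Exclude a lone pulse separated from its neighbours by >5 s on both sides."""
    kept = []
    for i, t in enumerate(times):
        gap_before = t - times[i - 1] if i > 0 else float("inf")
        gap_after = times[i + 1] - t if i < len(times) - 1 else float("inf")
        if not (gap_before > isolation_s and gap_after > isolation_s):
            kept.append(t)
    return kept

def pulse_separations(times):
    return [later - earlier for earlier, later in zip(times, times[1:])]

def is_discontinuous(times, pause_s=10.0):
    """Discontinuous ventilation: bouts of pulses broken by pauses longer than 10 s."""
    return any(sep > pause_s for sep in pulse_separations(times))

demo = restore_real_times([10.0, 10.9, 11.8, 60.0, 150.0, 151.0])  # made-up keypress times
print(is_discontinuous(drop_isolated_jerks(demo)))                 # True for this toy bee
```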
In addition to behaviors exhibited in exposed cells, we recorded mean surface temperature of a bee’s thorax (Tth) and mean surface temperature of her surroundings (Tsurr) using FLIR’s analysis software package (ResearchIR Max version 4, FLIR Systems, Inc.) from a subset of the bees for which we analyzed ventilatory rates, above (Dataset S4). To calculate the mean temperature of a bee’s thorax (Tth), we drew a circle (within which a mean temperature could be automatically generated) over the region of interest (thorax) in an image taken at the beginning of a thermal recording (several seconds after entering cell), the middle of the recording, and the end (several seconds before exiting cell). To calculate the mean surface temperature of her surroundings (Tsurr) at identical time points, we dragged the same ellipse over three regions bordering the bee’s thorax: above and below thorax, and anterior to head. These regions of interest surrounding the bee’s thorax included almost exclusively cells and cell walls and, unlike a previous study by Klein et al. (2014), did not include any portion of the bee herself. This updated method avoids problems of the bee’s body contributing to the measurement of Tsurr. We report the difference of Tsurr from Tth to indicate the surface temperature of the bee relative to the surface temperature of her surroundings (Tdiff = Tth − Tsurr) (Klein et al., 2014). There was no statistically meaningful difference between using the mean body temperature from the middle time point versus the mean of means across all three time points for any behavior, so we use the middle point when reporting Tth (W = 27, 52, 4, 89; P = 0.65, 0.94, 1.00, 0.07 for bees sleeping, maintaining cells, eating, and heating, respectively) and Tdiff (W = 26, 52, 3, 69; P = 0.58, 0.91, 0.70, 0.61 for bees sleeping, maintaining cells, eating and heating, respectively). Because these bees represent a subset of the bees analyzed above (32 of the 37 bees for which we analyzed ventilatory rates; 11 of the 32 bees were individually marked), the same discussion of unique sampling applies here as well.
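As a small worked illustration of the Tdiff measure (the temperatures below are made up, chosen only to resemble a heating bee):

```python
# Tdiff = Tth - Tsurr, where Tsurr averages the three regions bordering the thorax.
def t_diff(thorax_mean_c: float, surrounding_means_c: list) -> float:
    t_surr = sum(surrounding_means_c) / len(surrounding_means_c)
    return thorax_mean_c - t_surr

# Hypothetical readings (deg C) for one time point of one bee:
print(round(t_diff(38.7, [35.9, 36.2, 36.1]), 2))  # about +2.6, heater-like
```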
To test how predictable a behavior is from observing tips of abdomens alone, we first edited video clips so that they were without sound and a dark gray bar concealed cell contents, revealing only what extended beyond each cell (tip of abdomen and, sometimes, distal portions of hindlegs). We placed a tiny digital mark to indicate the bee(s) of interest in each video (Fig. 4; Movies S5–S8). B.A.K. trained 54 students for 20–30 min by showing and describing behaviors (criteria: Table 1) in 12 video clips (four sleeping, five maintaining cells, two heating, and one showing food in a cell; eating was described but not shown due to lack of additional examples from our recordings). Videos used during training were not used during testing, but were made available to students, had they wished to continue training on their own. Once trained, students independently watched 30 video clips of bees with digitally obscured cell contents—a subset of the 37 bee recordings used in our ventilatory rate analyses (11 sleeping, six maintaining cells, two eating and 11 heating)—and recorded what they believed to be each bee’s behavior.
We eliminated outdoor light and lit the room with a single desk lamp covered with a red acetate filter (#27 Medium Red, transparency = 4%, peak at 670 nm, Supergel by Rosco, Stamford, CT, USA), selected because honey bees may be less sensitive to frequencies beyond 600 nm (Von Frisch & Lindauer, 1977) or 650 nm (Dustmann & Geffcken, 2000). The same filter was applied to a headlamp, used to facilitate observations. The warm lights were kept away from the hive, and angled to minimize glare that would otherwise affect thermal measures. We filmed under the low, red light, and with an infrared spotlight by using an infrared-sensitive video camera (AGDVC 30, Panasonic, Japan) side-by-side with a thermal camera (FLIR SC660, FLIR Systems Inc., Boston, MA, USA; accuracy 1 °C or 1% of reading, according to FLIR manual and FLIR technical support). We adjusted thermal camera settings to match the emissivity value of a honey bee’s thorax (0.97; Stabentheiner & Schmaranzer, 1987), although wax and other surface temperatures were recorded for Tsurr, and set the transmissivity to that of polypropylene (0.89). The giftwrap used as the observation hive’s window produced a nonlinear error when recording temperature as temperature increased, so we adjusted absolute temperature measurements (Dataset S5; see Klein et al., 2014 for details). Some data were taken using an audio recorder (Olympus VN-4100PC Digital Voice Recorder) and later transcribed. Audio was synchronized with video recordings by the researcher making a noise, followed by announcing the exact time as was recorded on video when the noise was made. Bees were often pointed out when announced, and this served to synchronize thermal imagery with video and audio.
To determine how prevalent each behavior was within the hive, we compared total counts of bees performing each behavior using a Kruskal–Wallis Rank Sum test. We then conducted post-hoc pairwise tests using six two-sided, non-paired Wilcoxon-Mann-Whitney tests. To avoid multiple testing problems, we corrected resulting P-values using the Holm method in the R function p.adjust(). To account for day/night differences between behaviors, we conducted three Kolmogorov-Smirnov tests using the R function ks.test(). This two-sided test’s null hypothesis states that two sets of data, x and y, were drawn from the same continuous distribution. Therefore, we set our x and y to be the day and night distributions of each behavior, respectively. We performed three such tests to include sleeping, heating, and cell-maintaining behaviors, each time testing that each behavior count distribution remained the same from day to night. We did not perform this test on eating behavior because we did not have a large enough sample. We used local sunrise/sunset times to distinguish day and night (https://www.gaisma.com/en/location/wurzburg.html).
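These tests were run in R; an equivalent sequence can be sketched in Python with SciPy, using hypothetical placeholder counts rather than the study's data (the real values are in Dataset S1/S2), as below.

```python
from itertools import combinations
from scipy import stats

# Hypothetical per-survey counts for the four behaviours (placeholders only).
counts = {
    "sleeping":    [2, 1, 0, 3, 1, 2],
    "maintaining": [9, 7, 8, 10, 6, 7],
    "eating":      [0, 0, 1, 0, 0, 0],
    "heating":     [1, 0, 2, 1, 0, 1],
}

# Omnibus test: do per-survey counts differ among the four behaviours?
h_stat, p_omnibus = stats.kruskal(*counts.values())

def holm(p_values):
    """Holm step-down correction (mirrors R's p.adjust(method='holm'))."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted, running = [0.0] * m, 0.0
    for rank, idx in enumerate(order):
        running = max(running, (m - rank) * p_values[idx])
        adjusted[idx] = min(1.0, running)
    return adjusted

# Post-hoc pairwise Wilcoxon-Mann-Whitney tests, Holm-corrected.
pairs = list(combinations(counts, 2))
raw_p = [stats.mannwhitneyu(counts[a], counts[b], alternative="two-sided").pvalue
         for a, b in pairs]
adjusted_p = holm(raw_p)

# Day vs. night distribution of one behaviour (two-sided Kolmogorov-Smirnov).
day_sleep, night_sleep = [2, 1, 3, 1, 2], [0, 2, 1, 2, 1]
ks_stat, p_day_night = stats.ks_2samp(day_sleep, night_sleep)

print(p_omnibus, adjusted_p, p_day_night)
```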
To address whether different behaviors could be distinguished using the time separations between their individual within-bout abdominal ventilation pulses (“pulse separations”), we performed a Kruskal–Wallis Rank Sum test. To test specifically for differences in pulse separations among behaviors, we filtered data to include only those abdominal pulses separated by <1 s, and repeated the aforementioned Kruskal–Wallis Rank Sum test. We then conducted pairwise Wilcoxon-Mann-Whitney tests comparing pulse separations across behavior groups. We applied a Holm correction for multiple testing. Since individual bees were monitored for different durations while performing behaviors inside cells, some bees could have disproportionately influenced the separation interval of the behavioral category to which they belonged. To account for any effect of individual bees on the timing between abdominal pulses, we performed a linear mixed effects logistic regression analysis using the R library lme4 (in package lmerTest), with bee ID as random factor and behavior as fixed effect. Because our residuals were initially not normally distributed, we performed a rank transformation before conducting the regression analysis.
We measured Tth and Tsurr across three timepoints (beginning, middle and end of each bee’s behavior duration). If we were to use all temperatures in our analyses, then we would have three Tth and nine Tsurr per bee. If we were to use only the middle temperature measurements (which might avoid behaviorally transitional complications), then we would have one Tth and three Tsurr per bee. To determine whether either method would affect behavior mean Tth or Tdiff, we compared mean Tth, then mean Tdiff of each behavior between the two methods using four Wilcoxon-Mann-Whitney tests. Corrections for multiple testing were not necessary. Heating behavior was confirmed by comparing a bee’s thoracic temperature to that of the surrounding region. For temperature difference analyses, because the data were not normally distributed and the sample size was relatively small, we applied the Wilcoxon-Mann-Whitney test with Holm correction. To see if the temperature associated with behavior differed across behaviors, we applied a Kruskal–Wallis Chi-square test using the Tth and Tsurr from the middle timepoint thermal measurements. We then conducted post-hoc pairwise Wilcoxon-Mann-Whitney tests with Holm correction on each of six combinations of behavior pairs. We repeated these methods for Tdiff. To determine whether Tdiff changed over the duration of a bee’s behavior in a cell, we used the R package nparLD (for nonparametric longitudinal data; Noguchi et al., 2012) to conduct a non-parametric ANOVA-type test. We applied the formula F1-LD-F1, which tests for group (behavior) differences, change over time, and the interaction between group and time. Tdiff was compared across temperature measurement periods 1, 2 and 3 for all bees, grouped by behavior. Before analyzing any thermal data, we corrected for the thermal signature of the thin film of giftwrap that functioned in enclosing the observation colony. To do this, we used thermographic measurements of the same neutral surface with and without the giftwrap film covering at a range of room temperatures from 26.5–43.6 °C. Differences between the two measurements at the same nominal temperatures were used to generate a set of correction values, which were then added as offsets to all thermal measurements of the colony behind the giftwrap (Dataset S5).
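One way to apply such an offset correction can be sketched as follows; the calibration pairs and the linear interpolation between them are assumptions for illustration (the actual correction values are in Dataset S5).

```python
import numpy as np

# Hypothetical calibration (deg C): the same neutral surface measured through the
# giftwrap film (nominal) and without it (true), across the 26.5-43.6 range used.
nominal_with_film = np.array([26.5, 30.0, 35.0, 40.0, 43.6])
true_without_film = np.array([27.0, 30.8, 36.1, 41.4, 45.2])
offsets = true_without_film - nominal_with_film   # the error grows with temperature

def correct_reading(measured_c: float) -> float:
    """Add the film offset, interpolated linearly between calibration points."""
    return measured_c + float(np.interp(measured_c, nominal_with_film, offsets))

print(round(correct_reading(33.0), 2))
```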
To calculate the reliability of identifying behavior based on observing only the posterior tip of a bee’s abdomen, we applied a binomial test (Binomial Test Calculator, https://www.socscistatistics.com/tests/binomial/default2.aspx), with the null hypothesis that determination of behavior is random and not related to the actual correct behaviors. We then corrected for multiple testing using the Holm method. We set alpha at 0.05, report two-tailed P-values for all tests, and report errors as standard deviations. M.K.B. performed all statistical tests using R (R Core Team, 2019), except for binomial test on limited-visibility experiment.
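The binomial test itself was run with an online calculator; an equivalent check in Python (assuming SciPy 1.7 or later, which provides binomtest), applied to the sleeping result reported below (461 correct of 540 observations, chance success 0.25 because viewers chose among four behaviors), could be written as:

```python
from scipy.stats import binomtest

# Null hypothesis: viewers guess at random among four behaviours (p = 0.25).
result = binomtest(k=461, n=540, p=0.25, alternative="two-sided")
print(result.pvalue)  # vanishingly small: identification is far better than chance
```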
We conducted 49 surveys (21 nighttime, 28 daytime) of behaviors exhibited inside comb cells across 34.5 h. Absolute counts of each behavior differed across the surveys (Kruskal–Wallis rank sum test χ2 = 123.3, df = 3, P < 2.2 × 10−16). Of the 455 behavioral events monitored inside cells, bees spent 16.9% sleeping (n = 63), 76.4% maintaining cells (n = 362), 0.4% eating (n = 2) and 6.4% heating (n = 28). Bees slept for bouts of 1316 ± 1038 s (range: 257–3,346 s), maintained cells for 237 ± 257 s (range: 61–845 s), ate for 447 ± 233 s (range: 197–659 s), and heated for 956 ± 509 s (range: 452–2,214 s) (n = 7, 9, 3, 10 recordings of entire duration in cell for each behavior category, respectively). Behaviors were exhibited day and night, with no evidence of day-night bias for any behavior (2-sample Kolmogorov–Smirnov tests, D = 0.13, 0.26, 0.09; P = 0.99, 0.40, 1.00 for sleeping, maintaining cells and heating, respectively; eating sample size was too low for test to be meaningful; Fig. 5). None of the discontinuously ventilating bees exhibited visible signs of wakeful activity (larger movements of body, antennal movement, chewing, etc.), and because discontinuous ventilation covaries with other sleep signs (see above), “sleeping” is used as a shorthand for discontinuous ventilation + relative immobility inside cells, below.
Ventilatory signatures as indicators of behavior
Ventilatory patterns differed among behaviors exhibited inside cells, as evident when plotting abdominal pulses (Fig. 6), and time between pulses (Fig. 7) (Kruskal–Wallis rank sum test, χ2 = 185.2, df = 3, P = 2.2 × 10−16). Discontinuous ventilation associated with sleep was identified by having discrete bouts of abdominal pulses, with the bouts separated by at least 10 s. Bouts of pulses were separated by 34.5 s ± 12.7 s (range: 10.1–336.6 s, n = 179 bout separations with a mean of 10 bout separations per bee across 12 bees). Pulses within bouts were separated by 0.27 s ± 0.06 s, when excluding pulse separations >1 s, which helped to exclude possible spontaneous abdominal jerks that appeared distinct from bouts of pulses (n = 1166 pulse separations with a mean of 97 pulse separations per bee across 12 bees).
Continuous ventilation (by bees maintaining cells, eating, or heating) rarely included separation of pulses by greater than 10 s (n = 28 out of 490 pulse separations when maintaining cells, 14 out of 394 when eating, and only 16 out of 5,525 when heating; Fig. 6), and instead featured relatively continuous abdominal pulses, which were separated by the same amount of time as sleeping bees, above (0.33 s ± 0.08 s for pulse separations <1 s, n = 5701 pulse separations with a mean of 328 pulses and a mean of 78 pulse separations per bee across 25 bees; linear mixed model after rank transformation, F3,34 = 1.19; P = 0.33).
Abdominal pulses can be difficult to discern when bees are very active (maintaining cells or eating) in cells, so ventilatory rates should be viewed in the context of whether or not a bee is exhibiting larger body motions.
Thermal measures as indicators of behavior
Body temperatures (Tth) differed from surrounding temperatures (Tsurr) (Kruskal–Wallis χ2 = 21.2, df = 3, P = 9.5 × 10−5), but only when bees were heating. Tth did not differ from Tsurr when bees were sleeping, maintaining cells, or eating (Tdiff = 0.20 ± 0.23 °C when sleeping, 0.20, ± 0.33 °C when maintaining cells, and 0.86 ± 0.31 °C when eating; n = 8, 10, 3 bees and W = 37.5, 58.5, 7, respectively; corrected P-values using Holm method = 1.0 in each case) (Fig. 8). Tth was only statistically different from Tsurr in heating bees (Tdiff = 2.62 ± 1.37 °C; n = 11 bees, W = 112, P = 0.0032). Heating bees’ Tdiff was greater than other bees’ Tdiff (vs. sleeping: W = 88, P = 0.0002; vs. maintaining cells: W = 110, P = 0.0006; vs. eating: W = 32, P = 0.044). Tdiff did not differ between sleeping and maintaining cells (W = 39, P = 0.96), nor did Tdiff differ between maintaining cells and eating (W = 2, P = 0.068), but eating bees’ Tdiff was greater than sleeping bees’ (W = 0, P = 0.044). A heating bee’s body temperature visibly differed from her surrounding temperature when using thermal imagery (Figs. 8, 9A and 9B; Movies S9 and S10), and we used this visible difference to initially identify heating bees, prior to analyzing ventilatory rates. Our aim here is to quantitatively confirm this difference (Tdiff) so that we can confidently associate heaters’ telltale heat emission with complementary behaviors (immobility + continuous ventilation) to confirm that the complementary behaviors alone can be used to distinguish heating bees from bees exhibiting the other behaviors. Ventilatory rates are important because a heating bee’s thoracic temperature fluctuates over time (Kleinhenz et al., 2003; Fig. 9C; Movie S11), and a relatively hot thorax does not necessarily mean a worker is actively performing as a heater, but could instead be transitioning into another behavioral state (Fig. 9D; Movie S12).
Time spent exhibiting a behavior inside cells (beginning, middle and end of stay) did not affect relative body temperature (Tdiff; ANOVA-Type statistic = 0.48, df = 1.7, P = 0.58).
Reliability of observing posterior tip of abdomen for identifying behaviors
We tested how reliable watching only the posterior tip of a bee’s abdomen is for identifying a behavior when a bee is inside a cell. Fifty-four human subjects correctly identified when honey bee workers were sleeping 86.6% of the time (n = 461 of 540 observations of 11 bees; binomial test, expected = 0.25; z = 32.3, P = 4.0 × 10−5; 13.5% misidentifications, with 8.1% identified as heating), maintaining cells 50.1% (n = 174 of 353 observations of seven bees; z = 10.5, P = 4.0 × 10−5; most common misidentification: 49.2% eating), eating 70.4% (n = 76 of 108 observations of two bees; z = 10.8, P = 4.0 × 10−5; most common misidentification: 31.5% maintaining cells), and heating 73.0% (n = 446 of 617 observations of 12 bees; z = 27.1, P = 4.0 × 10−5; most common misidentification: 18.5% maintaining cells). Participants typically reported difficulty determining behavior due to blurriness of abdomen (one video) or jostling of bee by other bees (1–2 videos). All percentages are means of percentages across bees to address effect of bee, some behaviors of which were more difficult than others to identify. For this reason, percentages may not sum perfectly to 100.
Of the behaviors we recorded inside comb cells, sleep made up 16.9% of the observations, second only to maintaining cells. Maintaining cells and eating were easily identified when observing movement of body or mouthparts, and contents of the cell. Sleeping and heating bees lacked large movements of body or head, and were distinguished from each other using ventilatory rates (discontinuous vs. continuous pulses of the abdomen, respectively) and body surface temperature (relative to surrounding surface temperature). When visibility was restricted (i.e., when the contents of cells were obscured and only the posterior tips of honey bees’ abdomens were visible), maintaining cells and eating were difficult to distinguish from each other, but sleeping and heating were identifiable based on ventilatory rates and lack of major body motions alone (86.6% and 73.0% of observations were correctly identified, respectively). We used these two indicators to initially identify sleeping bees and, despite the relative ease of using thermography to distinguish heating bees, the same two indicators (ventilatory rate and lack of major body motions) appear most reliable to identify heating as well. We base this on the fact that a heating bee’s temperature can fluctuate, or confusion can arise when bees transition from one behavior to the next (Figs. 9C and 9D; Movies S11 and S12). We also base this on the high reliability of identifying heating bees in our limited-visibility reliability test, which we expect would increase by training observers for longer than 20–30 min.
This study’s findings match or differ from other studies in revealing ways. Foragers sleep more during the night than during the day (Kaiser, 1988; Sauer et al., 2003; Sauer, Herrmann & Kaiser, 2004; Eban-Rothschild & Bloch, 2008; Klein et al., 2008), but in this study sleep did not occur more at night (Fig. 5), suggesting that we were likely observing younger workers (e.g., cell cleaners and nurse bees). These younger “hive” bees sleep primarily in cells, and behave arrhythmically (Sauer, Menna-Barreto & Kaiser, 1998; Sauer et al., 1999; Eban-Rothschild & Bloch, 2008; Klein et al., 2008, 2014; but see a report of day-night differences inside cells in Moore et al. (1998)). Bees were sleeping in 16.9% of observations, which falls within the wide range of caste-dependent sleep observed inside cells by Klein et al. (2008) (1.6% observations of foragers—39.4% of cell cleaners). Comparisons with Lindauer (1952) are not feasible because he recorded data from only two individuals under relatively normal conditions, did not distinguish discontinuously ventilating or restful states from superficially similar behavioral states, and did not specify whether calculations were based on idleness exhibited within versus outside cells. Sleeping bees’ surface temperatures did not differ from their surroundings, and were slightly higher (Tth = 34.7 ± 0.8 °C, n = 8 bees) than were reported in “resting” bees by Kleinhenz et al. (2003), which were also measured inside cells (32.7 ± 0.1 °C–33.4 ± 0.3 °C, n = 5 bees). These resting bees exhibited discontinuous ventilation, with inter-bout durations lasting up to 58 s (vs. 34.5 s ± 12.7 s, lasting up to 337 s in this study). Heating bees are typically notably hotter than their surroundings, but the contrast was not as extreme in this study (Tdiff = 2.6 ± 1.4 °C; n = 11 bees) as it was in Kleinhenz et al. (2003) (4.2 ± 1.6 °C; n = 8 bees), but the body temperatures were equally high in both studies (Tth = 38.7 ± 1.6 °C, n = 11 bees; 38.3 ± 1.6 °C, n = 8 bees).
Limitations of study
Our observation hive approximated natural conditions in that it was kept in a relatively dark and warm room, featured combs spaced natural distances apart, contained food and brood, the queen was free-roaming, and an entrance allowed full access outdoors. Despite these similarities to natural nests, we supplied the colony with food ad libitum, and comb was limited to narrow slices attached on one side to a plastic window. Reports by Gontarski and Geschke (as communicated by Von Frisch (1967), p. 7), suggest that 500 or 500–1,000 members are sufficient for developing the same division of labor as in normal colonies, but we cannot know if our tiny colony (800–1,000 bees) developed a natural division of labor during this short study. It is important to note that the cells visible along one edge of each comb from which we collected our data may present behavioral biases, which would affect results related to the proportions of behaviors exhibited in cells reported in our surveys. Contents removed from edge cells to increase visibility of comb cells could have caused increased cell cleaning and building activity. We wanted to increase the likelihood of observing sleep in the visible cells, so our emptying of edge cells could have caused a higher rate of discontinuous ventilation within these edge cells. Small numbers of brood or small size of comb could have resulted in unnatural rates of heating, as well. Our limited-visibility reliability test for predicting behaviors featured a lateral view of abdominal tips (Fig. 4) when the typical view would be posterior view of abdominal tips (Fig. 1A). The limited-visibility test included only three bees eating, and the sole training video devoted to eating did not include eating behavior, only presence of food with description of behavior.
Why sleep inside cells?
Accounting for sleep inside honeycomb may help to resolve contradictory or confusing evidence reported under less natural conditions (Sauer, Menna-Barreto & Kaiser, 1998; Eban-Rothschild & Bloch, 2008). If we can rely on discontinuous ventilation + absence of major body motions as markers of sleep in limited-visibility, in-cell situations, the youngest adult workers spend more time asleep than later in life (Klein et al., 2008). Sleeping more earlier in life is normal across animals, and much research has considered the current utility of this standard feature of sleep ontogeny. Cell cleaners and nurse bees sleep primarily inside cells that are located in or close to brood comb, and this could be for a variety of functionally interesting reasons (see Klein et al., 2008). Comb cells may protect sleeping adults from being disturbed, which could reduce the damaging effects associated with sleep fragmentation. Comb cells could provide warmth for regenerative or cognitive processes, or serve as a site that reduces sleepers’ interference with other workers bustling about the comb. Alternatively, sleeping in cells could be a nonadaptive behavior, during which honey bees simply use comb cells as a default site between acts of cell maintenance, nursing, or heating.
Poets, philosophers and scientists have long pondered the societal marvels of honey bee colonies (Preston, 2006), and making visible the bees’ activities is a pursuit that has changed our understanding of what nonhuman animals are capable of. Activities, like sleeping, can be difficult to access, particularly when performed inside honeycomb hidden within a dark tree hollow. What specific benefits are conferred by bees sleeping inside cells awaits further investigation, and likely will depend on technical innovations involving noninvasive imaging of standard hives or natural nests, or testing sleep and sleep loss in noncircadian subjects.
The best view of a honey bee inside a honeycomb cell is typically restricted to the tip of its abdomen, under the best of circumstances. We hypothesized that even with such constraints, the capacity to identify sleep and other behaviors can be high, based on brief observations of the ventilatory rates (discernible by timing of abdominal pulsing motions) combined with the presence or absence of major body movements. Viewing bees inside cells using a special hive and filming with an infrared-sensitive camera and thermal camera made identifying all behaviors relatively easy in this study, but identifying sleeping or heating bees was also reliable with the limited visibility available to an observer without this special hive or thermal camera. Simply observing ventilatory movements, as well as larger motions evident in the tip of a bee’s abdomen was sufficient to noninvasively identify sleeping or heating inside comb cells. Cell maintenance was frequently confused with eating under limited visibility conditions, but both were clearly distinguishable from sleeping and heating. Sleeping and heating were accurately identified (86.6% and 73.0% of observations, respectively) by observing ventilatory rates (discontinuous versus continuous, respectively), combined with a lack of major body movements. Although reliability of identifying behaviors was high, the specialized hive we used may have biased proportions of time bees slept, heated, ate, or maintained cells. Sleep appeared frequently enough to suggest that it is an important behavior experienced within honeycomb cells, supporting previous examinations of sleep inside comb cells, and lending credibility to future ventures, which can rely on similarly less invasive manipulations to reveal the dynamics and functions related to sleep in nature.
Infrared-sensitive video of bee sleeping inside cell.
Bee, center, is facing left with venter facing up. Note worker bees maintaining (cleaning or building) other cells.
Infrared-sensitive video of eating inside cell, with mouthparts extended and body less fully inserted in cell.
Bee is facing left with dorsum facing up. Note worker bees maintaining (cleaning or building) other cells.
Infrared-sensitive video of heating inside cell.
Bee, lower left, is facing left with venter facing observer (sideways).
Infrared-sensitive video of sleeping and heating inside cells.
Sleeping bee, center, is facing left with dorsum facing up, and is to be compared with heating bee, at right, facing right with dorsum facing observer (sideways).
Infrared-sensitive video of worker bee inside cell.
Gray box obscures cell innards, and small light gray rectangle marks bee of interest. This was one of 30 modified video clips used to test reliability of identifying inside-cell behavior from what is visible outside the cell. (Behavior? Answer: sleeping)
Infrared-sensitive video of worker bee inside cell.
Gray box obscures cell innards, and small light gray rectangle marks bee of interest. This was one of 30 modified video clips used to test reliability of identifying inside-cell behavior from what is visible outside the cell. (Behavior? Answer: heating)
Infrared-sensitive video of worker bee inside cell.
Gray box obscures cell innards, and small light gray rectangle marks bee of interest. This was one of 30 modified video clips used to test reliability of identifying inside-cell behavior from what is visible outside the cell. (Behavior? Answer: eating)
Infrared-sensitive video of worker bee inside cell.
Gray box obscures cell innards, and small light gray rectangle marks bee of interest. This was one of 30 modified video clips used to test reliability of identifying inside-cell behavior from what is visible outside the cell. (Behavior? Answer: cleaning)
Thermal imaging video of heating bee inside cell.
The thorax is relatively hot, the abdomen is continuously ventilating, but the bee is otherwise immobile. Video plays close to actual time (30 images per second).
Thermal imaging video featuring many acts of heating and cell maintenance (cleaning or building) inside cells.
Video captured 1 image per second.
Thermal imaging video featuring many acts of heating and cell maintenance (cleaning or building) inside cells.
Heaters’ thoracic temperatures fluctuate over time (e.g., bee featured in Fig. 9C is from this video at 10 s after 07:58 h and 2 min 48 s later). Video captured three images per second.
Thermal imaging video of workers maintaining (cleaning or building) cells.
Each bright spot is a relatively hot thorax, but of a bee maintaining, not heating, cells. This video plays close to actual time (30 images per second), allowing the viewer to observe a behavior closely for what it is.
R analyses and visualizations.
This set of R scripts conducts all of the visualization and statistical analyses for the “Slumber in a cell” project.
Dates and times for the surveys performed on all bees exhibiting any of the four behaviors while inside cells.
The behaviors are expressed as totals for each behavior, and as proportions of the total.
Identical to the Dataset_S1.csv spreadsheet, except that it includes the column total.beh.
This column is necessary for executing the statistics script related to the behavior surveys, which needs the totals in long format to conduct a Kruskal–Wallis test.
Worksheet including times for each abdominal pulse for each of the monitored bees, as well as the calculated separations between each pulse.
The columns that end with “by3” are simply the corresponding columns multiplied by 0.3 because the original videos were slowed to 0.3 speed to facilitate observing and marking the abdominal pulses. Results were later restored to the original timestamps. The column “LBB” stands for “Look Between Bouts”, and it excludes any pulse isolated by >5 s. “Event” = “pulse”, as defined in the paper. All “event” (pulse) times and “event” (pulse) separations are measured in milliseconds.
Worksheet containing IDs, behaviors, and surface temperatures of the thorax and surroundings for each bee.
The time announced and time beginning columns are used for cross-referencing bee identifications with other records. Mean Tth and Tsurr were calculated in the indicated columns using both methods of calculation: (1) T2 only, and (2) T1, T2 and T3 averaged. Delta temperatures (differences between Tth and Tsurr) are also indicated. Standard deviations are included for all temperature measurements and means. Below the raw data, we have calculated mean, standard deviation, minimum and maximum durations for each behavior.
Adjusted temperature measurements.
The giftwrap used as the observation hive’s window produced a nonlinear error when recording temperature as temperature increased, so we adjusted absolute temperature measurements. Adjusted temperatures are listed here. Data were collected by Christian Lutsch. | https://peerj.com/articles/9583/ | 21 |
63 | What Is Real Income?
Real income is how much money an individual or entity makes after accounting for inflation, and it is sometimes called real wage when referring to an individual's income. Individuals often closely track their nominal vs. real income to have the best understanding of their purchasing power.
- Real income, also known as real wage, is how much money an individual or entity makes after adjusting for inflation.
- Real income differs from nominal income, which has no such adjustments.
- Individuals often closely track their nominal vs. real income to have the best understanding of their purchasing power.
- Most real income calculations are based on inflation reported by the Consumer Price Index (CPI).
- Theoretically, when inflation is rising, real income and purchasing power fall by the amount of the inflation increase on a per-dollar basis.
Understanding Real Income
Real income is an economic measure that provides an estimation of an individual's actual purchasing power in the open market after accounting for inflation. It subtracts an economic inflation rate per dollar from an individual's income, typically resulting in a lower value and decreased spending power. Deflation of prices can also occur, which creates a negative inflation rate. Negative inflation or deflation will lead to a higher purchasing power of real income.
Real income differs from nominal income, which is not adjusted to account for fluctuating prices and living costs. Individuals often closely track their nominal vs. real income to have the best understanding of their purchasing power.
Overall, real income is only an estimate of an individual’s purchasing power since the formula for calculating real income uses a broad collection of goods that may or may not closely match the categories an investor spends within. Moreover, entities may not spend all of their nominal income, avoiding some of the real income’s effects.
Real Income Formula
There are several ways to calculate real income. Three basic real income formulas include the following (a quick numerical sketch follows the list):
- Wages - (Wages x Inflation Rate) = Real Income
- Wages / (1 + Inflation Rate) = Real Income
- (1 – Inflation Rate) x Wages = Real Income
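Here is a minimal Python check of the three formulas, using a hypothetical $50,000 wage and 2% inflation; note that the first and third formulas are algebraically identical, while the second divides rather than multiplies and so gives a slightly higher figure.

```python
# Hypothetical figures for illustration only.
wages = 50_000.0
inflation = 0.02   # 2% annual inflation

real_1 = wages - wages * inflation   # Wages - (Wages x Inflation Rate)
real_2 = wages / (1 + inflation)     # Wages / (1 + Inflation Rate)
real_3 = (1 - inflation) * wages     # (1 - Inflation Rate) x Wages

print(real_1, round(real_2, 2), real_3)   # 49000.0 49019.61 49000.0
```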
All real income/real wage formulas can integrate one of several inflation measures. Three of the most popular inflation measures for consumers include:
The Consumer Price Index (CPI): The CPI measures the average cost of a specific basket of goods and services, including food and beverages, education, recreation, clothing, transportation, and medical care. In the United States, the Bureau of Labor Statistics (BLS) publishes CPI numbers monthly and annually.
The PCE (Personal Consumption Expenditure) Price Index: The PCE Price Index is a second, comparable consumer price index. It includes slightly different classifications for goods and services, and it also has its own adjustments and methodology nuances. The PCE Price Index is used by the Federal Reserve (Fed) for gauging consumer price inflation and making monetary policy.
The GDP Price Index (Deflator): The GDP Price Index is one of the broadest measures of inflation, since it considers everything produced by the U.S. economy, excluding imports.
Generally, the three main price indexes will report relatively the same level of inflation. However, analysts of real income can choose any price index measure that they believe best fits their income analysis situation.
Special Considerations for Investing
Many individuals and businesses invest a significant portion of their income in risk-free investment products and vehicles that match or exceed the economic inflation rate in order to mitigate the effects of inflation on their income.
There are several risk-free investments that offer a return of approximately 2% or more. These products include high-yield savings accounts, money market accounts, certificates of deposit, Treasuries, and Treasury Inflation-Protected Securities.
Beyond that, investors may be willing to take on slightly more risk in order to keep their income yielding at or above inflation. For more sophisticated investors, municipal and corporate bonds are often used for obtaining 2%+ returns, beating inflation, and helping income to grow steadily over time.
Real Wage Rates
When following real wages, there may be several statistics to consider. A real wage rate can be a basic calculation of an individual's hourly, weekly, or annual rate after adjusting for inflation.
Having an expectation for a real wage rate can be just as important as a career expectation for a nominal wage rate.
The BLS publishes a monthly real earnings report, which can be helpful in keeping tabs on real wage rates. The "January 2021 Real Earnings" report, for example, shows the real average hourly earnings rate across all surveyed workers on private nonfarm payrolls at $11.43 per hour, a 4% increase on January 2020.
The comprehensive BLS report has been created using special methodologies. Individuals looking to calculate their own real wage rate may be better served by adapting the above real income formulas to their own individual situation.
For example, a mid-level manager with a nominal $60,000 per year salary might follow the CPI to calculate their real hourly, weekly, monthly, and annual wage rate. Suppose the CPI reported an inflation rate of 2.4%. Using the simple formula [Wages / (1 + Inflation Rate) = Real Income], this would result in an approximate real wage rate of $58,594—relative to the period in which the $60,000 was calculated.
Calculating real wage rates on an hourly, weekly, and monthly basis can be more complex but still attempted. The mid-level manager could divide his nominal annual wage by the number of hours, weeks, and months per year with a subsequent adjustment. For a monthly assessment, a $60,000 per year salary would translate to $5,000 in nominal pay per month. Adjusting that by the CPI’s monthly change, let's say of -0.01%, the $5,000 would have increased its purchasing power to $5,005. Other takes on the real wage rate might look at the percentage of real to nominal wages or the real vs. nominal wage growth rate. Cost of living
indexes can also provide valuable information on real wage vs. nominal wage rate expectations. These indexes are used to make cost-of-living adjustments
(COLA) for workers, insurance plans, retirement plans, and more.
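A similar sketch of the monthly adjustment mentioned above, again with hypothetical figures; the monthly CPI change is assumed to be a 0.1% decline, as in the example.

# Monthly purchasing power of a $60,000 salary after a small monthly CPI decline (illustrative).
nominal_salary = 60_000
monthly_pay = nominal_salary / 12        # $5,000 nominal per month
monthly_cpi_change = -0.001              # assume prices fell 0.1% this month

real_monthly_pay = monthly_pay / (1 + monthly_cpi_change)
print(f"Purchasing power of one month's pay: ${real_monthly_pay:,.2f}")   # about $5,005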
Overall, inflation's effect on wages determines the purchasing power of an individual consumer. When prices are rising in the marketplace but consumers are being paid the same wage, a discrepancy is created that erodes purchasing power. This is why real income decreases when inflation increases, and vice versa.
When inflation occurs, a consumer must pay more for a fixed quantity of goods or services. Theoretically, this is why savvy investors seek to hold a significant portion of their income in investments with a 2%+ return. In that case, with inflation at 2% they would be able to maintain their purchasing power at a constant level. For instance, assume a consumer spends approximately $100 per month for a total of $1,200 per year on food during a year when inflation is rising at an annual rate of 1%. Also, assume that the consumer saw no change in their wages. A consumer with a $60,000 annual nominal salary would have lost approximately $595 of purchasing power over a year, or one cent per dollar spent, due to the effects of inflation. In terms of their food purchases, this means the same quantity of food cost them $12 more during the current year compared to the past year. Alternatively, if this consumer isn’t following a strict food budget, they will likely spend approximately $101 per month or $1,212 to get the same amount of food they would have bought in the previous year.
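A rough Python sketch of the purchasing-power arithmetic in the example above; the salary, food budget, and 1% inflation rate are the text's illustrative figures (the text rounds the annual loss to about $595).

# Purchasing power lost on a flat salary, and the extra cost of an unchanged food basket (illustrative).
salary = 60_000
food_budget = 1_200
inflation = 0.01

lost_purchasing_power = salary - salary / (1 + inflation)
extra_food_cost = food_budget * inflation

print(f"Purchasing power lost: ${lost_purchasing_power:,.0f}")   # about $594
print(f"Extra cost of the same food: ${extra_food_cost:,.0f}")   # about $12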
Consumer Price Index (CPI) Definition
The Consumer Price Index measures the average change in prices over time that consumers pay for a basket of goods and services.
Determining Your Real Rate of Return
Real rate of return adjusts the profit figure from an investment to take into account the effects of inflation.
Inflation Definition
Inflation is a decrease in the purchasing power of money, reflected in a general increase in the prices of goods and services in an economy.
Indexation Definition
Indexation is a method of linking the price or value of an asset to a price or price index of some type to adjust for inflation.
Nominal Gross Domestic Product
Nominal gross domestic product measures the value of all finished goods and services produced by a country at their current market prices.
Personal Consumption Expenditures (PCE)
Personal consumption expenditures (PCEs) are imputed household expenditures for a defined period of time used as the basis for the PCE Price Index.
Slavery in the Ottoman Empire
Slavery in the Ottoman Empire was a legal and significant part of the Ottoman Empire's economy and traditional society. The main sources of slaves were wars and politically organized enslavement expeditions in North and East Africa, Eastern Europe, the Balkans, and the Caucasus. It has been reported that the selling price of slaves decreased after large military operations. In Constantinople (present-day Istanbul), the administrative and political center of the Ottoman Empire, about a fifth of the 16th- and 17th-century population consisted of slaves. Customs statistics of these centuries suggest that Istanbul's additional slave imports from the Black Sea may have totaled around 2.5 million from 1453 to 1700.
Even after several measures to ban slavery in the late 19th century, the practice continued largely unabated into the early 20th century. As late as 1908, female slaves were still sold in the Ottoman Empire. Sexual slavery was a central part of the Ottoman slave system throughout the history of the institution.
A member of the Ottoman slave class, called a kul in Turkish, could achieve high status. Eunuch harem guards and janissaries are some of the better-known positions a slave could hold, and female slaves were often placed under their supervision.
A large percentage of officials in the Ottoman government were bought slaves, raised free, and integral to the success of the Ottoman Empire from the 14th century into the 19th. Many slave officials themselves owned numerous slaves, although the Sultan himself owned by far the most. By raising and specially training slaves as officials in palace schools such as Enderun, where they were taught to serve the Sultan alongside other educational subjects, the Ottomans created administrators with intricate knowledge of government and fanatical loyalty. Most of what was recorded about this system concerns its male establishment, and European writings about it involve a degree of speculation. Women, however, played and held the most important roles within the Harem institution.
Early Ottoman slavery
In the mid-14th century, Murad I built an army of slaves, referred to as the Kapıkulu. The new force was based on the Sultan's right to a fifth of the war booty, which he interpreted to include captives taken in battle. The captives were trained in the sultan's personal service. The devşirme system could be considered a form of slavery because the Sultans had absolute power over them. However, as the 'servant' or 'kul' of the sultan, they had high status within the Ottoman society because of their training and knowledge. They could become the highest officers of the state and the military elite, and most recruits were privileged and remunerated. Though ordered to cut all ties with their families, a few succeeded in dispensing patronage at home. Christian parents might thus implore, or even bribe, officials to take their sons. Indeed, Bosnian and Albanian Muslims successfully requested their inclusion in the system.
Slaves were traded in special marketplaces called "Esir" or "Yesir", located in most towns and cities of the Ottoman Empire. It is said that Sultan Mehmed II "the Conqueror" established the first Ottoman slave market in Constantinople in the 1460s, probably where the former Byzantine slave market had stood. According to Nicolas de Nicolay, there were slaves of all ages and both sexes; most were displayed naked to be thoroughly checked – especially children and young women – by possible buyers.
Ottoman slavery in Central and Eastern Europe
In the devşirme, which connotes "draft", "blood tax" or "child collection", young Christian boys from the Balkans and Anatolia were taken from their homes and families, converted to Islam, and enlisted into the most famous branch of the Kapıkulu, the Janissaries, a special soldier class of the Ottoman army that became a decisive faction in the Ottoman invasions of Europe. Most of the military commanders of the Ottoman forces, imperial administrators, and de facto rulers of the Empire, such as Sokollu Mehmed Pasha, were recruited in this way. By 1609, the Sultan's Kapıkulu forces increased to about 100,000.
A Hutterite chronicle reports that in 1605, during the Long Turkish War, some 240 Hutterites were abducted from their homes in Upper Hungary by the Ottoman Turkish army and their Tatar allies, and sold into Ottoman slavery. Many worked in the palace or for the Sultan personally.
Domestic slavery was not as common as military slavery. On the basis of a list of estates belonging to members of the ruling class kept in Edirne between 1545 and 1659, the following data was collected: out of 93 estates, 41 had slaves. The total number of slaves in the estates was 140; 54 female and 86 male. 134 of them bore Muslim names, 5 were not defined, and 1 was a Christian woman. Some of these slaves appear to have been employed on farms. In conclusion, the ruling class, because of extensive use of warrior slaves and because of its own high purchasing capacity, was undoubtedly the single major group keeping the slave market alive in the Ottoman Empire.
Rural slavery was largely a phenomenon endemic to the Caucasus region, which was carried to Anatolia and Rumelia after the Circassian migration in 1864. Conflicts frequently emerged within the immigrant community, and the Ottoman establishment at times intervened on the side of the slaves.
The Crimean Khanate maintained a massive slave trade with the Ottoman Empire and the Middle East until the early eighteenth century. In a series of slave raids euphemistically known as the "harvesting of the steppe", Crimean Tatars enslaved East Slavic peasants. The Polish–Lithuanian Commonwealth and Russia suffered a series of Tatar invasions, the goal of which was to loot, pillage, and capture slaves; the Slavic languages even developed a term for this Ottoman slavery (Polish: jasyr, based on the Turkish and Arabic word for captive, esir or asir). The borderland area to the south-east was in a state of semi-permanent warfare until the 18th century. It is estimated that up to 75% of the Crimean population consisted of slaves or freed slaves. The 17th century Ottoman writer and traveller Evliya Çelebi estimated that there were about 400,000 slaves in the Crimea but only 187,000 free Muslims. Polish historian Bohdan Baranowski assumed that in the 17th century the Polish–Lithuanian Commonwealth (present-day Poland, Ukraine and Belarus) lost an average of 20,000 people yearly, and as many as one million in all years combined from 1500 to 1644.
Prices and taxes
A study of the slave market of Ottoman Crete provides details about the prices of slaves. Factors such as age, skin color, and virginity significantly influenced prices. The most expensive slaves were those between 10 and 35 years of age, with the highest prices for European virgin girls 13–25 years of age and teenaged boys. The cheapest slaves were those with disabilities and sub-Saharan Africans. Prices in Crete ranged between 65 and 150 "esedi guruş" (see Kuruş). But even the lowest prices were affordable only to high-income persons. For example, in 1717 a 12-year-old boy with mental disabilities was sold for 27 guruş, an amount that could buy in the same year 462 kg (1,019 lb) of lamb meat, 933 kg (2,057 lb) of bread or 1,385 l (366 US gal) of milk. In 1671 a female slave was sold in Crete for 350 guruş, while at the same time the value of a large two-floor house with a garden in Chania was 300 guruş. There were various taxes to be paid on the importation and selling of slaves. One of them was the "pençik" or "penç-yek" tax, literally meaning "one fifth". This taxation was based on verses of the Quran, according to which one fifth of the spoils of war belonged to God, to the Prophet and his family, to orphans, to those in need and to travelers. The Ottomans probably started collecting pençik at the time of Sultan Murad I (1362–1389). In some cases the tax was not collected on war captives; instead, the captives were given to soldiers and officers as an incentive to participate in war. Pençik was collected both in money and in kind, the latter including slaves as well.
The recapture of runaway slaves was a job for private individuals called "yavacis". Whoever managed to find a runaway slave would collect a fee of "good news" from the yavaci and the latter took this fee plus other expenses from the slaves' owner. Slaves could also be rented, inherited, pawned, exchanged or given as gifts.
Barbary slave raids
For centuries, large vessels on the Mediterranean relied on European galley slaves supplied by Ottoman and Barbary slave traders. Hundreds of thousands of Europeans were captured by Barbary pirates and sold as slaves in North Africa and the Ottoman Empire between the 16th and 19th centuries. These slave raids were conducted largely by Arabs and Berbers rather than Ottoman Turks. However, during the height of the Barbary slave trade in the 16th, 17th, and 18th centuries, the Barbary states were subject to Ottoman jurisdiction and, with the exception of Morocco, were ruled by Ottoman pashas. Furthermore, many slaves captured by the Barbary corsairs were sold eastward into Ottoman territories before, during, and after Barbary's period of Ottoman rule.
Notable occasions include the Turkish Abductions, the 1627 Barbary corsair raids on Iceland in which several hundred captives were carried off and sold into slavery.
As there were restrictions on the enslavement of Muslims and of "People of the Book" (Jews and Christians) living under Muslim rule, pagan areas in Africa became a popular source of slaves. Known as the Zanj (Bantu), these slaves originated mainly from the African Great Lakes region as well as from Central Africa. The Zanj were employed in households, on plantations and in the army as slave-soldiers. Some could ascend to become high-rank officials, but in general Zanj were inferior to European and Caucasian slaves.
One way for Zanj slaves to serve in high-ranking roles involved becoming one of the African eunuchs of the Ottoman palace. This position was used as a political tool by Sultan Murad III (r. 1574–1595) as an attempt to destabilize the Grand Vizier by introducing another source of power to the capital.
After being purchased by a member of the Ottoman court, Mullah Ali was introduced to the first chief Black eunuch, Mehmed Aga. Due to Mehmed Aga's influence, Mullah Ali was able to make connections with prominent colleges and tutors of the day, including Hoca Sadeddin Efendi (1536/37-1599), the tutor of Murad III. Through the network he had built with the help of his education and the black eunuchs, Mullah Ali secured several positions early on. He worked as a teacher in Istanbul, a deputy judge, and an inspector of royal endowments. In 1620, Mullah Ali was appointed as chief judge of the capital and in 1621 he became the kadiasker, or chief judge, of the European provinces and the first black man to sit on the imperial council. At this time, he had risen to such power that a French ambassador described him as the person who truly ran the empire.
Although Mullah Ali was often challenged because of his blackness and his connection to the African eunuchs, he was able to defend himself through his powerful network of support and his own intellectual productions. As a prominent scholar, he wrote an influential book in which he used logic and the Quran to debunk stereotypes and prejudice against dark-skinned people and to delegitimize arguments for why Africans should be slaves.
Today, thousands of Afro Turks, the descendants of the Zanj slaves in the Ottoman Empire, continue to live in modern Turkey. An Afro-Turk, Mustafa Olpak, founded the first officially recognised organisation of Afro-Turks, the Africans' Culture and Solidarity Society (Afrikalılar Kültür ve Dayanışma Derneği) in Ayvalık. Olpak claims that about 2,000 Afro-Turks live in modern Turkey.
East African slaves
The Upper Nile Valley and Abyssinia were also significant sources of slaves in the Ottoman Empire. Although the Christian Abyssinians defeated the Ottoman invaders, they did not interfere with the enslavement of southern pagans as long as the slave traders paid them taxes. Pagans and Muslims from southern Ethiopian areas such as Kaffa and Jimma were taken north to Ottoman Egypt and also to ports on the Red Sea for export to Arabia and the Persian Gulf. In 1838, it was estimated that 10,000 to 12,000 slaves were arriving in Egypt annually using this route. A significant number of these slaves were young women, and European travelers in the region recorded seeing large numbers of Ethiopian slaves in the Arab world at the time. The Swiss traveler Johann Louis Burckhardt estimated that 5,000 Ethiopian slaves passed through the port of Suakin alone every year, headed for Arabia, and added that most of them were young women who ended up being prostituted by their owners. The English traveler Charles M. Doughty later (in the 1880s) also recorded Ethiopian slaves in Arabia, and stated that they were brought to Arabia every year during the Hajj pilgrimage. In some cases, female Ethiopian slaves were preferred to male ones, with some Ethiopian slave cargoes recording female-to-male slave ratios of two to one.
Slaves in the Imperial Harem
Very little is actually known about the Imperial Harem, and much of what is thought to be known is actually conjecture and imagination. There are two main reasons for the lack of accurate accounts on this subject. The first was the barrier imposed by the people of the Ottoman society – the Ottoman people did not know much about the machinations of the Imperial Harem themselves, due to it being physically impenetrable, and because the silence of insiders was enforced. The second was that any accounts from this period were from European travelers, who were both not privy to the information, and also inherently presented a Western bias and potential for misinterpretation by being outsiders to the Ottoman culture. Despite the acknowledged biases by many of these sources themselves, scandalous stories of the Imperial Harem and the sexual practices of the sultans were popular, even if they were not true. Accounts from the seventeenth century drew from both a newer, seventeenth century trend as well as a more traditional style of history-telling; they presented the appearance of debunking previous accounts and exposing new truths, while proceeding to propagate old tales as well as create new ones. However, European accounts from captives who served as pages in the imperial palace, and the reports, dispatches, and letters of ambassadors resident in Istanbul, their secretaries, and other members of their suites proved to be more reliable than other European sources. And further, of this group of more reliable sources, the writings of the Venetians in the sixteenth century surpassed all others in volume, comprehensiveness, sophistication, and accuracy.
The concubines of the Ottoman Sultan consisted chiefly of purchased slaves. The Sultan's concubines were generally of Christian origin (usually European or Circassian). The elite of the Ottoman Imperial Harem included many women, such as the sultan's mother, preferred concubines, royal concubines, children (princes/princesses), and administrative personnel. The administrative personnel of the palace were made up of many high-ranking women officers, who were responsible for the training of Jariyes for domestic chores. The mother of a Sultan, though technically a slave, received the extremely powerful title of Valide Sultan, which raised her to the status of a ruler of the Empire (see Sultanate of Women). The mother of the Sultan played a substantial role in decision-making for the Imperial Harem. One notable example was Kösem Sultan, daughter of a Greek Christian priest, who dominated the Ottoman Empire during the early decades of the 17th century. Roxelana (also known as Hürrem Sultan), another notable example, was the favorite wife of Suleiman the Magnificent. Many historians who study the Ottoman Empire rely on the factual evidence of 16th- and 17th-century observers of the Islamic world. The tremendous growth of the Harem institution reconstructed the careers and roles of women in the dynasty power structure. There were harem women who were the mothers, legal wives, consorts, Kalfas, and concubines of the Ottoman Sultan. Only a handful of these harem women were freed from slavery and married their spouses. These women were Hürrem Sultan, Nurbanu Sultan, Safiye Sultan, Kösem Sultan, Gülnuş Sultan, Perestu Sultan, and Bezmiara Kadın. Of the queen mothers who held the title Valide Sultan, only five had been freed slaves after serving as concubines to the Sultan.
The concubines were guarded by enslaved eunuchs, themselves often from pagan Africa. The eunuchs were headed by the Kizlar Agha ("agha of the [slave] girls"). While Islamic law forbade the emasculation of a man, Ethiopian Christians had no such compunctions; thus, they enslaved and emasculated members of territories to the south and sold the resulting eunuchs to the Ottoman Porte. The Coptic Orthodox Church participated extensively in the slave trade of eunuchs. Coptic priests sliced the penis and testicles off boys around the age of eight in a castration operation.
The eunuch boys were then sold in the Ottoman Empire. The majority of Ottoman eunuchs endured castration at the hands of the Copts at Abou Gerbe monastery on Mount Ghebel Eter. Slave boys were captured from the African Great Lakes region and other areas in Sudan like Darfur and Kordofan then sold to customers in Egypt. During the operation, the Coptic clergyman chained the boys to tables and after slicing their sexual organs off, stuck bamboo catheters into the genital area, then submerged them in sand up to their necks. The recovery rate was 10 percent. The resulting eunuchs fetched large profits in contrast to eunuchs from other areas.
Ottoman Sexual Slavery
Female sexual slavery was extremely common in the Ottoman empire and any child of a female slave was just as legitimate as any child born of a free woman. This means that any child of a female slave could not be sold or given away. However, due to extreme poverty, some Circassian slaves and free people in the lower classes of Ottoman society felt forced to sell their children into slavery; this provided a potential benefit for the children as well, as slavery also held the opportunity for social mobility. If a harem slave became pregnant, it also became illegal for her to be further sold into slavery, and she would gain her freedom upon her current owner's death. Slavery in and of itself was long tied with the economic and expansionist activities of the Ottoman empire. There was a major decrease in slave acquisition by the late eighteenth century as a result of the lessening of expansionist activities. War efforts were a great source of slave procurement, so the Ottoman empire had to find other methods of obtaining slaves because they were a major source of income within the empire. The Caucasian War caused a major influx of Circassian slaves into the Ottoman market, and a person of modest wealth could purchase a slave with a few pieces of gold. For a time, Circassian slaves were the most abundant in the imperial harem.
Circassians, Syrians, and Nubians were the three primary ethnic groups of women sold as sex slaves (Cariye) in the Ottoman Empire. Circassian girls were described as fair and light-skinned and were frequently enslaved by Crimean Tatars, then sold to the Ottoman Empire to live and serve in a Harem. They were the most expensive, reaching up to 500 pounds sterling, and the most popular with the Turks. Second in popularity were Syrian girls, who came largely from coastal regions in Anatolia. Their price could reach up to 30 pounds sterling. Nubian girls were the cheapest and least popular, fetching up to 20 pounds sterling. Sex roles and symbolism in Ottoman society functioned as an ordinary expression of power. The palace Harem excluded enslaved women from the rest of society.
Throughout the 18th and 19th centuries, sexual slavery was not only central to Ottoman practice but a critical component of imperial governance and elite social reproduction. Boys could also become sexual slaves, though usually they worked in places like bathhouses (hammam) and coffeehouses. During this period, historians have documented men indulging in sexual behavior with other men and getting caught. Moreover, visual illustrations from this period of a sodomite being stigmatized by a group of people with Turkish wind instruments show the disconnect between sexuality and tradition. However, those who were accepted became tellaks (masseurs), köçeks (cross-dressing dancers) or sāqīs (wine pourers) for as long as they were young and beardless. The "Beloveds" were often loved by former Beloveds who were educated and considered upper class.
Some female slaves who were owned by women were sold as sex workers for short periods of time. Women also purchased slaves, but usually not for sexual purposes, and most likely searched for slaves who were loyal, healthy, and had good domestic skills. However, there were accounts of Jewish women owning slaves and indulging in forbidden sexual relationships in Cairo. Beauty was also a valued trait when looking to buy a slave, because slaves were often seen as objects to show off to people. While prostitution was against the law, there were very few recorded instances of punishment that came to shari'a courts for pimps, prostitutes, or for the people who sought out their services. Cases that did punish prostitution usually resulted in the expulsion of the prostitute or pimp from the area they were in. However, this does not mean that these people always received light punishments. Sometimes military officials took it upon themselves to enforce extrajudicial punishment. This involved pimps being strung up on trees, destruction of brothels, and harassment of prostitutes.
Sexual slavery in the Ottoman empire also provided a social function, because some slaves gained the status of their owner or were absorbed into a distinguished person's lineage. Slaves also had the right to inheritance. Some slaves were treated as family members, and were left with money, items, or were even granted their own freedom. Sexual slavery was a means for social mobility in the Ottoman empire. The imperial harem was similar to a training institution for concubines, and served as a way to get closer to the Ottoman elite. Concubines from lower-class families especially had better opportunities for social mobility in the imperial harem because they could be trained for marriage to high-ranking military officials. Some of the concubines had a chance for even greater power in Ottoman society if they became favorites of the sultan. The sultan would keep a large number of girls as his concubines in the New Palace, which as a result became known as "the palace of the girls" in the sixteenth and seventeenth centuries. These concubines mainly consisted of young Christian slave girls. Accounts claim that the sultan would keep a concubine in the New Palace for a period of two months, during which time he would do with her as he pleased. They would be considered eligible for the sultan's sexual attention until they became pregnant; if a concubine became pregnant, the sultan might take her as a wife and move her to the Old Palace where they would prepare for the royal child; if she did not become pregnant by the end of the two months, she would be married off to one of the sultan's high-ranking military men. If a concubine became pregnant and gave birth to a daughter, she might still be considered for further sexual attention from the sultan. The harem system was an important part of Ottoman-Egyptian society as well; it attempted to mimic the imperial harem in many ways, including the secrecy of the harem section of the household, where the women were kept hidden away from males outside of their own family, the guarding of the women by black eunuchs, and also having the function of training for becoming wives or concubines.
Decline and suppression of Ottoman slavery
Responding to the influence and pressure of European countries in the 19th century, the Empire began taking steps to curtail the slave trade, which had been legally valid under Ottoman law since the beginning of the empire. One of the important campaigns against Ottoman slavery and slave trade was conducted in the Caucasus by the Russian authorities.
A series of decrees were promulgated that initially limited the slavery of white persons, and subsequently that of all races and religions. In 1830, a firman of Sultan Mahmud II gave freedom to white slaves. This category included Circassians, who had the custom of selling their own children, enslaved Greeks who had revolted against the Empire in 1821, and some others. Attempting to suppress the practice, another firman abolishing the trade of Georgians and Circassians was issued in October, 1854.
Later, slave trafficking was prohibited in practice by enforcing specific conditions of slavery in sharia, Islamic law, even though sharia permitted slavery in principle. For example, under one provision, a person who was captured could not be kept a slave if they had already been Muslim prior to their capture. Moreover, they could not be captured legitimately without a formal declaration of war, and only the Sultan could make such a declaration. As late Ottoman Sultans wished to halt slavery, they did not authorize raids for the purpose of capturing slaves, and thereby made it effectively illegal to procure new slaves, although those already in slavery remained slaves.
The Ottoman Empire and 16 other countries signed the 1890 Brussels Conference Act for the suppression of the slave trade. Clandestine slavery persisted into the early 20th century. A circular by the Ministry of Internal Affairs in October 1895 warned local authorities that some steamships stripped Zanj sailors of their "certificates of liberation" and threw them into slavery. Another circular of the same year reveals that some newly freed Zanj slaves were arrested based on unfounded accusations, imprisoned and forced back to their lords.
An instruction of the Ministry of Internal Affairs to the Vali of Bassora of 1897 ordered that the children of liberated slaves be issued separate certificates of liberation to avoid both being enslaved themselves and separated from their parents. George Young, Second Secretary of the British Embassy in Constantinople, wrote in his Corpus of Ottoman Law, published in 1905, that at time of writing the slave trade in the Empire was practiced only as contraband. The trade continued until World War I. Henry Morgenthau, Sr., who served as the U.S. Ambassador in Constantinople from 1913 until 1916, reported in his Ambassador Morgenthau's Story that there were gangs that traded white slaves during those years. He also wrote that Armenian girls were sold as slaves during the Armenian genocide of 1915.
The Young Turks adopted an anti-slavery stance in the early 20th century. Sultan Abdul Hamid II's personal slaves were freed in 1909 but members of his dynasty were allowed to keep their slaves. Mustafa Kemal Atatürk ended legal slavery in the Turkish Republic. Turkey waited until 1933 to ratify the 1926 League of Nations convention on the suppression of slavery. Illegal sales of girls were reported in the early 1930s. Legislation explicitly prohibiting slavery was adopted in 1964.
The conditions of slavery in the cities differed significantly from those in the countryside. On relatively isolated plantations, slaves had little contact with free blacks and lower-class whites, and masters maintained direct and effective control; a deep and seemingly unbridgeable chasm yawned between slavery and freedom. In the city, however, a master often could not supervise his slaves closely and at the same time use them profitably. Even if they slept at night in carefully watched backyard barracks, slaves moved about during the day alone, performing errands of various kinds. Thus urban slaves gained numerous opportunities to mingle with free blacks and with whites. In the cities, the line between slavery and freedom became increasingly indistinct. There was a considerable market in the South for common laborers, particularly since, unlike in the North, there were few European immigrants to perform menial chores.

In western Georgia, Alabama, Mississippi, and Florida lived what were known as the "Five Civilized Tribes"-the Cherokee, Creek, Seminole, Chickasaw, and Choctaw-most of whom had established settled agricultural societies with successful economies. These tribes were sedentary, had an agricultural economy, and had towns, villages, and wooden homes. They also adopted white culture. The Cherokees in Georgia had formed a particularly stable and sophisticated culture, with their own written language, and a formal constitution that created an independent Cherokee Nation. Even some whites argued that the Cherokees, unlike other tribes, should be allowed to retain their eastern lands, since they had become such a "civilized" society and had, under pressure from missionaries and government agents, given up many of their traditional ways. Georgia, Alabama, and Mississippi began passing laws to regulate the tribes remaining in their states. They received assistance in these efforts from Congress, which in 1830 passed the Removal Act to appropriate money to finance federal negotiations with the southern tribes aimed at relocating them to the West. In Georgia, the Cherokees tried to stop the white encroachments by appealing to the Supreme Court. The Court's decisions in Cherokee Nation v. Georgia and Worcester v. Georgia seemed at least partially to vindicate the tribe. Jackson's longtime hostility toward Native Americans left him with little sympathy for the Cherokees and little patience with the Court. He sent an army of 7,000 under General Winfield Scott to round them up and drive them westward at bayonet point. The basis of the Seminole War was the Seminole Indians resisting relocation. Though many died in the process, some managed to resist the pressures to relocate.

Opposition to the Bank came from two very different groups: the "soft-money" faction and the "hard money" faction. Advocates of the soft money-people who wanted more currency in circulation and believed that issuing bank notes unsupported by gold and silver was the best way to circulate more currency-consisted largely of state bankers and their allies. The hard-money people believed that gold and silver were the only basis for money. The soft-money advocates were believers in rapid economic growth and speculation; the hard-money forces embraced older ideas of "public virtue" and looked with suspicion on expansion and speculation. Jackson supported the hard-money position. He made it clear that he would not favor renewing the charter of the Bank of the United States, which was due to expire in 1836.
Congress passed the recharter bill; Jackson, predictably, vetoed it; and the Bank's supporters in Congress failed to override the veto. He decided to remove the government's deposits from the Bank. Jackson's secretary of the Treasury believed that such an action would destabilize the financial system and refused to give the order. Jackson fired him and appointed a new one. When the new secretary similarly balked, Jackson fired him too and named a third, more compliant secretary: Attorney General Roger B. Taney, his close friend and loyal ally. Taney began placing the government's deposits not in the Bank of the United States, as it had in the past, but in a number of state banks. When the Bank of the United States died in 1836, the country lost a valuable, albeit flawed, financial institution and was left with a fragmented and chronically unstable banking system that would plague the economy for more than a century.

The national culture in America developed through economic, cultural, social, and political factors. Before 1800, most of the literature in America was heavily influenced by the British. Noah Webster argued that American students should be educated as patriots, their minds filled with nationalistic, American thoughts. To encourage a distinctive American culture and help unify the new nation, Webster insisted on a simplified and Americanized system of spelling. His American Spelling Book, first published in 1783 and commonly known as the "blue-backed speller", eventually sold over 100 million copies. In addition, his school dictionary, issued in 1806, was republished in many editions and was eventually enlarged to become An American Dictionary of the English Language. His speller and his dictionary established a national standard of words and usages. But there were few opportunities for would-be American authors to get their work before the public. Printers preferred to publish popular works by English writers; magazine publishers filled their pages largely with items clipped from British periodicals. Only those American writers willing to pay the cost and bear the risk of publishing their own works could compete for public attention.

Barlow published an epic poem, The Columbiad, in 1807, in an effort to convey the special character of American civilization. The acclaim it received helped to encourage other native writers. Among the most ambitious was the Philadelphian Charles Brockden Brown. Like many Americans, he was attracted to the relatively new literary form of the novel, which had become popular in England in the late eighteenth century and had been successfully imported to America. His obsession with originality led him to produce a body of work characterized by a fascination with horror and deviant behavior. Perhaps as a result, his novels failed to develop a large popular following. Washington Irving won wide acclaim for his satirical histories of early American life and his powerful fables of society in the New World. His popular folktales, including the Legend of Sleepy Hollow and Rip Van Winkle, made him the widely acknowledged leader of American literary life in his era. Perhaps the most influential works by American authors in the early republic were not poems, novels, or stories, but works of history that glorified the nation's past. Mercy Otis Warren published History of the Revolution, which emphasized the heroism of the American struggle. Mason Weems published Life of Washington. Weems had little interest in historical accuracy.
He portrayed the aristocratic former president as a homespun man possessing simple republican virtues.
One popular macroeconomic analysis metric to compare economic productivity and standards of living between countries is purchasing power parity (PPP). PPP is an economic theory that compares different countries' currencies through a "basket of goods" approach, not to be confused with the Paycheck Protection Program created by the CARES Act.
According to this concept, two currencies are in equilibrium—known as the currencies being at par—when a basket of goods is priced the same in both countries, taking into account the exchange rates.
- Purchasing power parity (PPP) is a popular metric used by macroeconomic analysts that compares different countries' currencies through a "basket of goods" approach.
- Purchasing power parity (PPP) allows for economists to compare economic productivity and standards of living between countries.
- Some countries adjust their gross domestic product (GDP) figures to reflect PPP.
Calculating Purchasing Power Parity
The relative version of PPP is calculated with the following formula:
S = P1 / P2
where:
S = Exchange rate of currency 1 to currency 2
P1 = Cost of good X in currency 1
P2 = Cost of good X in currency 2
Comparing Nations' Purchasing Power Parity
To make a meaningful comparison of prices across countries, a wide range of goods and services must be considered. However, this one-to-one comparison is difficult to achieve due to the sheer amount of data that must be collected and the complexity of the comparisons that must be drawn. To help facilitate this comparison, the University of Pennsylvania and the United Nations joined forces to establish the International Comparison Program (ICP) in 1968.
Under this program, the PPPs generated by the ICP are based on a worldwide price survey that compares the prices of hundreds of various goods and services. The program helps international macroeconomists estimate global productivity and growth.
Every few years, the World Bank releases a report that compares the productivity and growth of various countries in terms of PPP and U.S. dollars. Both the International Monetary Fund (IMF) and the Organization for Economic Cooperation and Development (OECD) use weights based on PPP metrics to make predictions and recommend economic policy. The recommended economic policies can have an immediate short-term impact on financial markets.
Also, some forex traders use PPP to find potentially overvalued or undervalued currencies. Investors who hold stock or bonds of foreign companies may use the survey's PPP figures to predict the impact of exchange-rate fluctuations on a country's economy, and thus the impact on their investment.
Pairing Purchasing Power Parity With Gross Domestic Product
In contemporary macroeconomics, gross domestic product (GDP) refers to the total monetary value of the goods and services produced within one country. Nominal GDP calculates the monetary value in current, absolute terms. Real GDP adjusts the nominal gross domestic product for inflation.
However, some accounting goes even further, adjusting GDP for the PPP value. This adjustment attempts to convert nominal GDP into a number more easily comparable between countries with different currencies.
To better understand how GDP paired with purchase power parity works, suppose it costs $10 to buy a shirt in the U.S., and it costs €8.00 to buy an identical shirt in Germany. To make an apples-to-apples comparison, we must first convert the €8.00 into U.S. dollars. If the exchange rate was such that the shirt in Germany costs $15.00, the PPP would, therefore, be 15/10, or 1.5.
In other words, for every $1.00 spent on the shirt in the U.S., it takes $1.50 to obtain the same shirt in Germany buying it with the euro.
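A small Python sketch of the shirt example above; the prices are the hypothetical figures from the text.

# Implied PPP from the prices of an identical shirt in two countries (illustrative figures).
price_us = 10.00            # shirt price in the U.S., in dollars
price_germany_usd = 15.00   # dollar cost of the identical shirt in Germany

ppp = price_germany_usd / price_us
print(f"PPP: {ppp}")        # 1.5: every $1.00 of U.S. purchases costs $1.50 in Germany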
Drawbacks of Purchasing Power Parity
Since 1986, The Economist has playfully tracked the price of McDonald's Corp.’s (MCD) Big Mac hamburger across many countries. Their study results in the famed "Big Mac Index". In "Burgernomics"—a prominent 2003 paper that explores the Big Mac Index and PPP—authors Michael R. Pakko and Patricia S. Pollard cited the following factors to explain why the purchasing power parity theory is not a good reflection of reality.
Goods that are unavailable locally must be imported, resulting in transport costs. These costs include not only fuel but import duties as well. Imported goods will consequently sell at a relatively higher price than do identical locally sourced goods.
Government sales taxes such as the value-added tax (VAT) can spike prices in one country, relative to another.
Tariffs can dramatically augment the price of imported goods, where the same products in other countries will be comparatively cheaper.
The Big Mac's price factors in input costs that are not traded. These factors include such items as insurance, utility costs, and labor costs. Therefore, those expenses are unlikely to be at parity internationally.
Goods might be deliberately priced higher in a country. In some cases, higher prices are because a company may have a competitive advantage over other sellers. The company may have a monopoly or be part of a cartel of companies that manipulate prices, keeping them artificially high.
The Bottom Line
While it's not a perfect measurement metric, purchasing power parity does allow for the possibility of comparing pricing between countries that have differing currencies.
Until the early 20th century, the idea of a static universe filled with ordinary, visible matter and radiation was widely accepted among astronomers. Even when the first relativistic model of the universe was developed by Einstein in 1917, he added an extra term to the field equations in order to stabilize the universe and make it static. This term was called the cosmological constant. The cosmological constant is the energy density of the vacuum, which acts against the attractive force of gravity. But when astronomical observations revealed the expansion of the universe, and astronomers hypothesized that our universe had begun with a Big Bang, Einstein came to the view that there was no need for the cosmological constant in his equations, because the observed expansion could be an aftermath of the Big Bang that keeps gravity from collapsing our universe.
The expansion of our universe was expected to be decelerating, but today's observations show that it is accelerating. In order to explain this accelerated expansion, cosmologists introduced the term 'dark energy': a fluid with negative pressure that fills our universe, works against gravity, and accelerates the expansion of today's universe. There are different models of dark energy; the first is again the cosmological constant, because it exerts a repulsive force, and there are also dynamic, evolving models of dark energy, e.g. a scalar field.
On the other hand astronomers have found evidence that more than 85 percent of mass in our universe is invisible; they call it ‘dark matter’.
In addition to these two strange components, dark matter and dark energy, cosmologists have added a bizarre phenomenon to our universe in order to solve issues with Big Bang model of universe; this phenomenon is called inflation or exponential expansion of universe just after the Big Bang.
So according to modern cosmology, in addition to visible (ordinary) matter and radiation, our universe contains two other mysterious components, dark matter and dark energy; and our universe is not static but a dynamic, evolving universe that has gone through different phases over the course of its expansion. Just after the Big Bang, when it was at the Planck scale (the smallest scale in the universe), the universe expanded exponentially to an astronomical scale; this is called the inflationary era. Inflation happened in a very brief period of time, and when it ended the age of our universe was about 10^-33 to 10^-32 seconds. After inflation, radiation appeared in the universe; then dark matter and ordinary matter dominated. The rapid expansion due to inflation was slowed down by the positive pressure of these fluids (radiation, dark matter, and ordinary matter). But with the appearance of dark energy, the expansion of the universe began to speed up, and today's expansion is accelerating.
So the Big Bang model of the universe (with dark matter, dark energy, and inflation) is based on the general theory of relativity and astronomical observations. The big issues with this model are the Big Bang itself, a singular point that gives birth to a Planck-scale universe, and inflation, where the general theory of relativity meets quantum mechanics.
According to inflationary cosmology, the universe experiences an accelerated, exponential expansion in its early stages, at about t ~ 10^-35 s after the Big Bang. This idea was introduced to solve the key problems of the ordinary Big Bang theory. The most important problems with the Big Bang theory without inflation are as follows:
• Flatness problem:
Observations show that the total energy density of the universe today is close to its critical value (total density parameter today Ω0 ~ 1); this means that our universe is flat. But according to the first Friedmann equation, any deviation of the mass/energy density from its critical value at a given time causes a deviation in the curvature of the universe:
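The equation this sentence points to appears to have been dropped; in standard FRW notation the relation presumably intended is

\[ \Omega(t) - 1 = \frac{k}{a^{2}(t)\,H^{2}(t)} , \]

where k is the curvature constant, a(t) the scale factor, and H(t) the Hubble parameter.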
This deviation increases with time for a universe that started with a Big Bang and is filled with matter or radiation. So the mass/energy density of the early universe must have been far closer to its critical value than it is today.
Inflation (or exponential expansion) of the early universe resolves this issue by driving the mass/energy density extremely close to its critical value by the end of inflation, while the universe grows rapidly from Planck scales to astronomical scales:
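The missing expression here is presumably the usual statement of exponential growth during inflation,

\[ \frac{a(t_f)}{a(t_i)} = e^{H_i (t_f - t_i)} \equiv e^{N} . \]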
Where ti and tf are the times when inflation starts and ends, respectively; a(ti ) and a(tf ) are scale factors of the universe at the beginning and ending of the inflation. Hi is the Hubble parameter at the beginning of the inflation which is constant during the inflation. N is called number of e-foldings which is a large number.
An important point here is that inflation theory resolves the issue of positive deviations of mass/energy density; for negative deviations, dark matter is assumed to save the flat universe.
• Horizon problem:
The universe is isotropic and homogeneous on large scales; it looks the same on opposite sides of the sky (opposite horizons), so there must have been communication in the past between points separated by more than the particle horizon. Inflation of the early universe resolves this problem too: the rapid exponential expansion of the universe from Planck scales to astronomical scales means that regions of the observable universe that are widely separated in the sky today were much closer together before inflation and were in contact via light signals.
• Inflation theory resolves other problems too: it explains why we cannot observe any magnetic monopoles in the sky, and it explains the existence of galaxies and other structures (and so of living beings in the universe) by producing small density fluctuations that later in the history of the universe provide the seeds for matter to begin to clump together into the galaxies and other observed structures.
Dark energy is the term given to a hypothesized mysterious force that accelerates the expansion of the universe; it works like anti-gravity. While gravity is an attractive force which draws mass together at a very local level, dark energy is a repulsive force.
Before the 1990s, most astronomers believed that the expansion of the universe, which started with the Big Bang, was decelerating and might eventually turn into contraction. But during the 1990s the Hubble Space Telescope and ground-based telescopes allowed astronomers to see almost to the edges of the universe. They detected many supernova explosions and saw that the light coming from these stars had the same characteristics as the light coming from local supernovas as they reached their maximum brightness and faded away; so there are no differences between distant supernovas and local ones. The only difference is their brightness, which is a good hint for determining their distances: if you know how a supernova works you can determine its intrinsic brightness, and by comparing it with its apparent brightness you can determine how far away it is. By doing so and measuring the redshifts of supernovae at different distances, astronomers found how our universe was expanding at different times in its history; they concluded that the universe has gone through different phases of expansion. After the Big Bang the expansion of our universe was slowing down, as expected because of gravity; but then it began to speed up, and today's expansion rate of the universe is greater than its value at any point in the past. What caused the universe to shift from the expected decelerating phase to an unexpected accelerating phase? In order to explain this phenomenon, cosmologists hypothesized dark energy, which acts against gravitational attraction and speeds up the expansion of the universe. But what is dark energy? Cosmologists have different ideas about it, with different names: cosmological constant, vacuum energy, vacuum pressure, and scalar field, for example.
We live in a flat FRW (Friedmann-Robertson-Walker) universe, where the mass/energy densities of its components (fluids) are constrained by the first Friedmann equation:
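The equation referred to here seems to have been lost; for a flat universe the first Friedmann equation is presumably

\[ H^{2} = \left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\sum_i \rho_i . \]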
In this relation ρi represents the different fluids, and the linear combination of mass/energy densities in the relation above is valid if the species (or fluids) evolve independently. The evolution of the mass/energy density of each fluid in an adiabatically expanding universe is:
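Again the equation appears to be missing; the standard result for a fluid with equation of state p_i = w_i ρ_i is presumably

\[ \dot{\rho}_i + 3H\,(\rho_i + p_i) = 0 \quad\Longrightarrow\quad \rho_i \propto a^{-3(1+w_i)} . \]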
So the second Friedmann equation of the universe would be:
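The missing equation is presumably the acceleration (second Friedmann) equation,

\[ \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\sum_i \left(\rho_i + 3p_i\right) , \]

which leads directly to the condition on the pressure stated next.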
If the expansion of the universe is accelerating, the second derivative of the scale factor with respect to time must be larger than zero. So the dominant fluid must have negative pressure, satisfying the relation:
ρi + 3pi < 0 ⇒ 1 + 3wi < 0 ⇒ wi <−1/3
Matter is pressure-less (wm= 0) and radiation with wr =1/3 has a positive pressure, so none of them could satisfy the relation above. So there is a need for another fluid with negative pressure to explain the inflation of the early universe and accelerating expansion of the today’s universe that is proved by the observations. This fluid is called dark energy. Cosmological constant (or vacuum energy density), is the first and simplest model of dark energy; but there are another models of dark energy, called quintessential models. Unlike the cosmological constant which has the same value everywhere in space for all the time, quintessence is a dynamical, evolving component of universe, with possibility to be spatially inhomogeneous.
The vacuum energy is overabundant, causing the expansion of the universe to accelerate; it is completely defined by one number, its magnitude. The value of the vacuum energy density, based on the results of different theories, is 10^50 to 10^120 times larger than the magnitude allowed by cosmology.
There are different models within the quintessential approach to negative pressure (dark energy and inflation), for instance the scalar field. There are also alternatives to early inflation of the universe: for instance, the varying speed of light scenario, which assumes the speed of light in the very early universe was much greater than it is today; or the cyclic theory, which assumes that the Big Bang is not the beginning of space and time.
Open government is the governing doctrine which holds that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight. In its broadest construction, it opposes reason of state and other considerations, which have tended to legitimize extensive state secrecy. The origins of Open Government arguments can be dated to the time of the European Enlightenment, during which philosophers debated the proper construction of a then nascent democratic society. It is also increasingly being associated with the concept of democratic reform.
The concept of Open Government is broad in scope but is most often connected to ideas of government transparency and accountability. One definition, published by The Quality of Government Institute at the University of Gothenburg in Sweden, limits government openness to information released by the government, or the extent to which citizens can request and receive information that is not already published. Harlan Yu and David G. Robinson specify the distinction between Open Data and open government in their paper “The New Ambiguity of 'Open Government'”. They define Open Government in terms of service delivery and public accountability. They argue that technology can be used to facilitate disclosure of information, but that the use of open data technologies does not necessarily equate to accountability.
The Organisation for Economic Co-operation and Development (OECD) approaches open government through the following categories: whole of government coordination, civic engagement and access to information, budget transparency, integrity and the fight against corruption, use of technology, and local development.
The term 'open government' originated in the United States after World War II. Wallace Parks, who served on a subcommittee on Government Information created by the U.S. Congress, introduced the term in his 1957 article “The Open Government Principle: Applying the Right to Know under the Constitution.” After this and after the passing of the Freedom of Information Act (FOIA) in 1966, federal courts began using the term as a synonym for government transparency.
Although this was the first time that 'open government' was introduced, the concept of transparency and accountability in government can be traced back to Ancient Greece, in fifth century B.C.E. Athens, where different legal institutions regulated the behavior of officials and offered a path for citizens to express their grievances towards them. One such institution, the euthyna, held officials to a standard of “straightness” and enforced that they give an account in front of an Assembly of citizens about everything that they did that year.
In more recent history, the idea that government should be open to public scrutiny and susceptible to public opinion dates back to the time of the Enlightenment, when many philosophes made an attack on absolutist doctrines of state secrecy. The passage of formal legislation can also be traced to this time, with Sweden (which then included Finland as a Swedish-governed territory) enacting free press legislation as part of its constitution (Freedom of the Press Act, 1766).
Influenced by Enlightenment thought, the revolutions in America (1776) and France (1789) enshrined provisions and requirements for public budgetary accounting and freedom of the press in constitutional articles. In the nineteenth century, attempts by Metternichean statesmen to roll back these measures were vigorously opposed by a number of eminent liberal politicians and writers, including Bentham, Mill and Acton.
Open government is widely seen to be a key hallmark of contemporary democratic practice and is often linked to the passing of freedom of information legislation. The Scandinavian countries claim to have adopted the first freedom of information legislation, dating the origins of its modern provisions to the eighteenth century; Finland continued the presumption of openness after gaining independence in 1917, passing its Act on Publicity of Official Documents in 1951 (superseded by new legislation in 1999).
An emergent development also involves the increasing integration of software and mechanisms that allow citizens to become more directly involved in governance, particularly in the area of legislation. Some refer to this phenomenon as e-participation, which has been described as "the use of information and communication technologies to broaden and deepen political participation by enabling citizens to connect with one another and with their elected representatives".
Morocco's new constitution of 2011 outlined several goals the government wishes to achieve in order to guarantee citizens' right to information. The World Bank has offered support to the government to enact these reforms through the Transparency and Accountability Development Policy Loan (DPL). This loan is part of a larger joint program with the European Union and the African Development Bank to offer financial and technical support to governments attempting to implement reforms.
As of 2010, section 35 of Kenya's constitution ensures citizens’ rights to government information. The article states “35.(1) Every citizen has the right of access to — (a) information held by the State; and (b) information held by another person and required for the exercise or protection of any right or fundamental freedom ... (3) The State shall publish and publicize any important information affecting the nation.” Important government data is now freely available through the Kenya Open Data Initiative.
Taiwan started its e-government program in 1998 and since then has adopted a series of laws and executive orders to enforce open government policies. The Freedom of Government Information Law of 2005 stated that all government information must be made public; such information includes budgets, administrative plans, communications of government agencies, and subsidies. Taiwan has since released its open data platform, data.gov.tw. The Sunflower Movement of 2014 emphasized the value that Taiwanese citizens place on openness and transparency. A white paper published by the National Development Council with policy goals for 2020 explores ways to increase citizen participation and use open data for further government transparency.
The Philippines passed the Freedom of Information Order in 2016, outlining guidelines for practicing government transparency and full public disclosure. In accordance with its General Appropriations Act of 2012, the Philippine government requires government agencies to display a "transparency seal" on their websites, which contains information about the agency's functions, annual reports, officials, budgets, and projects.
The Right to Information (RTI) movement in India led to the RTI law in 2005, after environmental movements demanded the release of information regarding environmental deterioration due to industrialization. Another catalyst for the RTI law, and for similar laws in southeast Asia, may have been multilateral agencies offering aid and loans in exchange for more transparency or "democratic" policies.
In the Netherlands, large-scale social unrest and the growing influence of television in the 1960s led to a push for more government openness. Access-to-information legislation was passed in 1980, and since then further emphasis has been placed on measuring the performance of government agencies. In particular, the government of the Netherlands adopted the Open Government in Action (Open overheid in actie) Plan for 2016-2017, which outlines nine concrete commitments to the open government standards set by the OECD.
In 2009, President Obama released a Memorandum on Transparency and Open Government and started the Open Government Initiative. The memorandum put forward his administration's goal of strengthening democracy through a transparent, participatory and collaborative government. The initiative aims at a transparent and collaborative government that ends secrecy in Washington while improving effectiveness through increased communication between citizens and government officials. Movements for government transparency in recent American history began in the 1950s, after federal departments and agencies had limited the availability of information during World War II in reaction to global hostilities and, later, out of fear of Cold War spies. Agencies were given the right to deny access to information "for good cause found" or "in the public interest". These policies made it difficult for congressional committees to get access to records and documents, which then led to explorations of possible legislative solutions.
Since the early 2000s, transparency has been an important part of Chile's Anti-Corruption and Probity Agenda and State Modernization Agenda. In 2008, Chile passed the Transparency Law, which has led to further open government reforms. Chile published its open government action plan for 2016-18 as part of its membership of the Open Government Partnership (OGP).
Arguments for and against
Transparency in government is often credited with generating government accountability, which supporters argue leads to reduction in government corruption, bribery and other malfeasance. Some commentators contend that an open, transparent government allows for the dissemination of information, which in turn helps produce greater knowledge and societal progress.
Public opinion can also shift when people are able to see the results of a given policy. The United States government has at times forbidden journalists from publishing photographs of soldiers' coffins, an apparent attempt to manage emotional reactions that might heighten public criticism of ongoing wars; nonetheless, many believe that emotionally charged images can be valuable information. Similarly, some opponents of the death penalty have argued that executions should be televised so the public can "see what is being done in their name and with their tax dollars."
Government transparency is beneficial for an efficient democracy, as information helps citizens form meaningful conclusions about upcoming legislation and vote accordingly in the next election. According to the Carnegie Endowment for International Peace, greater citizen participation in government is linked to government transparency.
Advocates of open government often argue that civil society, rather than government legislation, offers the best route to more transparent administration. They point to the role of whistleblowers reporting from inside the government bureaucracy (individuals like Daniel Ellsberg or Paul van Buitenen). They argue that an independent and inquiring press, printed or electronic, is often a stronger guarantor of transparency than legislative checks and balances.
The contemporary doctrine of open government finds its strongest advocates in non-governmental organizations keen to counter what they see as the inherent tendency of government to lapse, whenever possible, into secrecy. Prominent among these NGOs are bodies like Transparency International or the Open Society Institute. They argue that standards of openness are vital to the ongoing prosperity and development of democratic societies.
Government indecision, poor performance and gridlock are among the risks of government transparency, according to some critics. Political commentator David Frum wrote in 2014 that, “instead of yielding more accountability, however, these reforms [transparency reforms] have yielded more lobbying, more expense, more delay, and more indecision.” Jason Grumet argues that government officials cannot properly deliberate, collaborate and compromise when everything they are doing is being watched.
Privacy is another concern. Citizens may incur "adverse consequences, retribution or negative repercussions" from information provided by governments. Teresa Scassa, a law professor at the University of Ottawa, outlined three main possible privacy challenges in a 2014 article. First is the difficulty of balancing greater government transparency against the protection of personal information, that is, information about identifiable individuals held by the government. Second is dealing with differences in data protection regulation between private and public sector actors, because governments may access information collected by private companies that are not bound by equally stringent laws. Third is the release of "big data" that may appear anonymized but can be reconnected to specific individuals using sophisticated algorithms.
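As a toy illustration of that third concern, the following Python sketch performs a simple linkage attack: a record released without names is matched to a named record in a separate public dataset via shared quasi-identifiers. All records, field names, and values below are invented for this example and are not drawn from Scassa's article.

```python
# Toy illustration of re-identification: a "de-identified" record can sometimes
# be linked back to a person by joining on quasi-identifiers (here ZIP code,
# birth year, and sex). All data below are made up.
released = [  # published "anonymized" health records
    {"zip": "02138", "birth_year": 1957, "sex": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1960, "sex": "M", "diagnosis": "diabetes"},
]
voter_roll = [  # a separate public dataset that does contain names
    {"name": "A. Example", "zip": "02138", "birth_year": 1957, "sex": "F"},
]

def link(anonymized_rows, identified_rows, keys=("zip", "birth_year", "sex")):
    """Yield (name, sensitive attribute) pairs matched on the quasi-identifiers."""
    for a in anonymized_rows:
        for b in identified_rows:
            if all(a[k] == b[k] for k in keys):
                yield b["name"], a["diagnosis"]

print(list(link(released, voter_roll)))  # -> [('A. Example', 'asthma')]
```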
Intelligence gathering, especially to identify violent threats (whether domestic or foreign), must often be done clandestinely. Frum wrote in 2014 that "the very same imperatives that drive states to collect information also require them to deny doing so. These denials matter even when they are not believed."
Moral certitude undergirds much transparency advocacy, but a number of scholars question whether it is possible for us to have that certitude. They have also highlighted how transparency can support certain neoliberal imperatives.
Technology and open government
See also: Open Data
Governments and organizations are using new technologies as a tool for increased transparency. Examples include use of open data platforms to publish information online and the theory of open source governance.
Open Government Data (OGD), a term which refers specifically to the public publishing of government datasets, is often made available through online platforms such as data.gov.uk or www.data.gov. Proponents of OGD argue that easily accessible data pertaining to governmental institutions allows for further citizen engagement within political institutions. OGD principles require that data is complete, primary, timely, accessible, machine processable, non-discriminatory, non-proprietary, and license free.
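As a rough illustration of how such principles can be operationalized, the following Python sketch checks a dataset's metadata against a few of them (machine processability, open licensing, primacy, timeliness, unrestricted access). The metadata fields, accepted formats, and licences here are assumptions invented for this example, not the schema of data.gov.uk, data.gov, or any real portal.

```python
# Minimal sketch: flag OGD principles a dataset's metadata appears to violate.
from datetime import date, timedelta

OPEN_FORMATS = {"csv", "json", "xml"}         # machine-processable, non-proprietary
OPEN_LICENSES = {"cc0", "cc-by", "odc-pddl"}  # licence-free / open licences

def ogd_issues(meta: dict, today: date) -> list:
    """Return a list of the checked principles the dataset appears to violate."""
    issues = []
    if meta.get("format", "").lower() not in OPEN_FORMATS:
        issues.append("not machine processable / proprietary format")
    if meta.get("license", "").lower() not in OPEN_LICENSES:
        issues.append("not license free")
    if meta.get("aggregated", False):
        issues.append("not primary (pre-aggregated)")
    if today - meta.get("last_updated", date.min) > timedelta(days=365):
        issues.append("not timely (older than one year)")
    if meta.get("access") != "public":
        issues.append("not accessible without restriction")
    return issues

example = {"format": "csv", "license": "cc-by", "aggregated": False,
           "last_updated": date(2019, 1, 15), "access": "public"}
print(ogd_issues(example, today=date(2019, 6, 1)) or "meets the checked principles")
```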
Public and private sector platforms provide an avenue for citizens to engage while offering access to transparent information that citizens have come to expect. Numerous organizations have worked to consolidate resources for citizens to access government (local, state and federal) budget spending, stimulus spending, lobbyist spending, legislative tracking, and more.
- Open Government Partnership - OGP is an organization launched in 2011 to allow domestic reformers across the world to make their own governments more open, accountable, and responsive to citizens. Since 2011, OGP has grown to 75 participating countries, whose governments and civil societies work together to develop and implement open government reforms.
- Code for All - Code for All is a non-partisan, non-profit international network of organizations that believe technology creates new opportunities for citizens to play a more prominent role in the political sphere and have a positive impact on their communities. The network relies on technology to improve government transparency and engage citizens.
- Sunlight Foundation - The Sunlight Foundation is a nonprofit, nonpartisan organization founded in 2006 that uses civic tech, open data, and policy analysis to make information from government and politics more transparent to everyone. Its ultimate vision is to increase democratic participation and achieve changes in the flow of political money and in who can influence government. While its work began with a focus solely on the US Congress, it now operates at the local, state, federal, and international levels.
- Open Government Pioneers UK is an example of a civil-society-led initiative using open source approaches to support citizens and civil society organisations in using open government as a way to secure progress towards the Sustainable Development Goals. It uses an open wiki to plan the development of an open government civil society movement across the UK's home nations.
- Lathrop, Daniel; Ruma, Laurel, eds. (February 2010). Open Government: Transparency, Collaboration and Participation in Practice. O'Reilly Media. ISBN 978-0-596-80435-0. OL 24435672M.
- Araya, Daniel (2015-11-17). Smart Cities as Democratic Ecologies. Springer. ISBN 9781137377203.
- Yu, Harlan; Robinson, David G. (February 28, 2012). "The New Ambiguity of 'Open Government'". UCLA L. Rev. 59. SSRN 2012489.
- "Open Government".
- von Dornum, Deirdre Dionysia (June 1997). "The Straight and the Crooked: Legal Accountability in Ancient Greece". Columbia Law Review. 97 (5): 1483–1518. doi:10.2307/1123441. JSTOR 1123441.
- Jurgen Habermas, The Structural Transformation of the Public Sphere (1962, trans., Cambridge Massachusetts, 1989)
- Reinhart Koselleck, Critique and Crisis (1965, trans., Cambridge Massachusetts, 1988)
- Lamble, Stephen (February 2002). Freedom of Information, a Finnish clergyman's gift to democracy. 97. Freedom of Information Review. pp. 2–8. Archived from the original on 2010-10-01.
- Zaigham, Mahmood (2013). Developing E-Government Projects: Frameworks and Methodologies: Frameworks and Methodologies. Hershey, PA: IGI Global. ISBN 9781466642454.
- Carlos, Nunes Silva (2017). New Approaches, Methods, and Tools in Urban E-Planning. Hershey, PA: IGI Global. p. 169. ISBN 9781522559993.
- "Morocco's Constitution of 2011" (PDF).
- "Renewed Support for Morocco's Goal to Make Government more Accountable to Citizens". worldbank.org. October 22, 2015.
- "The Constitution of Kenya" (PDF). Archived from the original (PDF) on 2018-03-04.
- Tseng, Po-yu; Lee, Mei-chun. "Taiwan Open Government Report".
- "Executive Order No. 02" (PDF).
- "Philippine Transparency Seal". Republic of the Philippines Department of Budget and Management. May 15, 2019. Retrieved May 22, 2019.
- "Kalpavriksh". Kalpavriksh.org. 2018. Archived from the original on 2015-01-28.
- Singh, Shekhar (2010). The Genesis and Evolution of the Right to Information Regime in India (PDF). New Delhi.
- Madhavan, Esha. "Revisiting the making of India's Right to Information Act: The Continuing Relevance of a Consultative and Collaborative Process of Lawmaking Analyzed from a Multi-Stakeholder Governance Perspective" (PDF). Berkman Center for Internet & Society at Harvard University.
- Meijer, Albert (January 7, 2015). "Government Transparency in Historical Perspective: From the Ancient Regime to Open Data in The Netherlands". International Journal of Public Administration. 38 (3): 189–199. doi:10.1080/01900692.2014.934837.
- OECD (2017). OECD Public Governance Reviews Towards an Open Government in Kazakhstan. Paris: OECD Publishing. p. 57. ISBN 9789264279377.
- Obama, Barack (January 21, 2009). "Memorandum -- Transparency and Open Government". obamawhitehouse.archives.gov. Retrieved May 2, 2018.
- Pyrozhenko, Vadym (June 2–4, 2011). "Implementing Open Government: Exploring the Ideological Links between Open Government and the Free and Open Source Software Movement" (PDF). Syracuse University. Retrieved October 24, 2016.
- Relyea, Harold C.; Kolakowski, Michael W. (2007). "Access to Government Information in the United States" (PDF). Archived from the original (PDF) on 2017-03-01.
- Guillán, Aránzazu (2015). "Open government and transparency reform in Chile: Balancing leadership, ambition and implementation capacity". U4 Report; CHR. Michelsen Institute. 2015:2.
- "Chile Open Government Action Plan 2016-2018" (PDF). www.ogp.com. Retrieved May 3, 2018.
- Schauer, Frederick (2011), "Transparency in Three Dimensions" (PDF), University of Illinois Law Review, 2011 (4): 1339–1358, retrieved 2011-10-16
- Bumiller, Elisabeth (2009-12-07). "U.S. lifts photo ban on military coffins". The New York Times. ISSN 0362-4331. Retrieved 2019-11-29.
- Shemtob, Zachary B.; Lat, David (2011-07-29). "Opinion | Why Executions Should Be Televised". The New York Times. ISSN 0362-4331. Retrieved 2019-11-29.
- "Transparency and Open Government". The White House. Archived from the original on 2016-12-15. Retrieved 2016-12-16.
- Carothers, Thomas. "Accountability, Transparency, Participation, and Inclusion: A New Development Consensus?". Carnegie Endowment for International Peace. Retrieved 2016-12-16.
- J. Michael, The Politics of Secrecy: Confidential Government and the Public's Right to Know (London, 1990)
- A.G. Theoharis, ed., A Culture of Secrecy: the Government Versus the People's Right to Know (Kansas, 1998)
- Bass, Gary; Brian, Danielle; Eisen, Norman (November 2014). "Why Critics of Transparency are Wrong". www.brookings.edu.
- Frum, David (September 2014). "The Transparency Trap". theatlantic.com. Retrieved May 2, 2018.
- Grumet, Jason (October 2, 2014). "When sunshine doesn't always disinfect the government". washingtonpost.com. Retrieved May 2, 2018.
- Scassa, Teresa (June 18, 2014). "Privacy and Open Government". Future Internet. 6 (2): 397–413. doi:10.3390/fi6020397. ISSN 1999-5903.
- Frum, David (2014-04-16). "We Need More Secrecy". The Atlantic. Retrieved 2019-11-28.
- Garsten, C. (2008), Transparency in a New Global Order:Unveiling Organizational Visions, Edward Elger
- "Open Government Data". oecd.org. Retrieved May 2, 2018.
- Scassa, Teresa (June 18, 2014). "Privacy and Open Government". Future Internet. Future Internet. Retrieved October 25, 2016.
- Gomes, Alvaro; Soares, Delfina (October 2014). Open government data initiatives in Europe: northern versus southern countries analysis. ICEGOV '14 Proceedings of the 8th International Conference on Theory and Practice of Electronic Governance. pp. 342–350. doi:10.1145/2691195.2691246. ISBN 9781605586113.
- Giordano Koch & Maximilian Rapp: Open Government Platforms in Municipality Areas: Identifying elemental design principles, In: Public Management im Paradigmenwechsel, Trauner Verlag, 2012.
- "Open Government Partnership". Open Government Partnership. Retrieved 2016-12-16.
- "Code for All". Code for All. Retrieved 2016-12-17.
- "Sunlight Foundation". Sunlight Foundation. Retrieved 2016-12-17.
- "Open Government Pioneers UK". Opengovpioneers. Retrieved 2017-05-21.
- Jane E. Fountain (2001), Building the Virtual State: Information Technology and Institutional Change, Washington, D.C: Brookings Institution Press
- Beth Simone Noveck (2009), Wiki government: how technology can make government better, democracy stronger, and citizens more powerful, Washington, D.C: Brookings Institution Press, ISBN 9780815702757, OL 23153089M
- Jay Nath (2011). "Reimagining government in the digital age". National Civic Review. 100 (3).
- Tom McClean (2011). "Not with a Bang but a Whimper: The Politics of Accountability and Open Data in the UK". American Political Science Association 2011 Annual Meeting Paper. SSRN 1899790.
- April Manatt (2011). Hear Us Now? A California Survey of Digital Technology's Role in Civic Engagement and Local Government. New America Foundation. Archived from the original on 2012-07-25. Retrieved 2012-06-06.
- C. Freeland (August 18, 2011). "Remaking Government in a Wiki Age". New York Times.
- Bernd W. Wirtz and Steven Birkmeyer (2015). "Open Government: Origin, Development, and Conceptual Perspectives". International Journal of Public Administration. 38 (5).
| https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Open_government | 21
14 | Deoxyribonucleic acid (DNA) is a molecule composed of two polynucleotide chains that coil around each other to form a double helix carrying genetic instructions for the development, functioning, growth and reproduction of all known organisms and many viruses. DNA and ribonucleic acid (RNA) are nucleic acids. Alongside proteins, lipids and complex carbohydrates (polysaccharides), nucleic acids are one of the four major types of macromolecules that are essential for all known forms of life.
The two DNA strands are known as polynucleotides as they are composed of simpler monomeric units called nucleotides. Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds (known as the phospho-diester linkage) between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T and C with G), with hydrogen bonds to make double-stranded DNA. The complementary nitrogenous bases are divided into two groups, pyrimidines and purines. In DNA, the pyrimidines are thymine and cytosine; the purines are adenine and guanine.
Both strands of double-stranded DNA store the same biological information. This information is replicated as and when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences. The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (or bases). It is the sequence of these four nucleobases along the backbone that encodes genetic information. RNA strands are created using DNA strands as a template in a process called transcription, where DNA bases are exchanged for their corresponding bases except in the case of thymine (T), for which RNA substitutes uracil (U). Under the genetic code, these RNA strands specify the sequence of amino acids within proteins in a process called translation.
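As a small illustration of the base substitution just described, the Python sketch below builds an RNA transcript from a DNA template strand by complementary pairing, writing U where the template has A. The sequence is invented, and the sketch ignores the enzymatic machinery of transcription entirely; it is a toy model of the base-level rule only.

```python
# Toy sketch of transcription as a base-substitution rule: each template base is
# replaced by its RNA complement (A->U, T->A, G->C, C->G).
TEMPLATE_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand: str) -> str:
    """Return the RNA complementary to a DNA template strand, base by base."""
    return "".join(TEMPLATE_TO_RNA[base] for base in template_strand.upper())

print(transcribe("TACGGT"))  # -> "AUGCCA"
```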
Within eukaryotic cells, DNA is organized into long structures called chromosomes. Before typical cell division, these chromosomes are duplicated in the process of DNA replication, providing a complete set of chromosomes for each daughter cell. Eukaryotic organisms (animals, plants, fungi and protists) store most of their DNA inside the cell nucleus as nuclear DNA, and some in the mitochondria as mitochondrial DNA or in chloroplasts as chloroplast DNA. In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm, in circular chromosomes. Within eukaryotic chromosomes, chromatin proteins, such as histones, compact and organize DNA. These compacting structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.
DNA is a long polymer made from repeating units called nucleotides, each of which is usually symbolized by a single letter: either A, T, C, or G. Chargaff's rules state that DNA from any species of any organism should have a 1:1 stoichiometric ratio of purine and pyrimidine bases (the base pair rule), i.e., A+G = T+C, and, more specifically, that the amount of guanine should be equal to cytosine and the amount of adenine should be equal to thymine. The structure of DNA is dynamic along its length, being capable of coiling into tight loops and other shapes. In all species it is composed of two helical chains, bound to each other by hydrogen bonds. Both chains are coiled around the same axis, and have the same pitch of 34 ångströms (3.4 nm). The pair of chains has a radius of 10 Å (1.0 nm). According to another study, when measured in a different solution, the DNA chain measured 22–26 Å (2.2–2.6 nm) wide, and one nucleotide unit measured 3.3 Å (0.33 nm) long. Although each individual nucleotide is very small, a DNA polymer can be very large and may contain hundreds of millions of nucleotides, such as in chromosome 1. Chromosome 1 is the largest human chromosome, with approximately 220 million base pairs, and would be 85 mm long if straightened.
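A minimal sketch of how the base-count rule can be checked on a given sequence: count the four bases and confirm that A ≈ T and G ≈ C. The toy sequence and tolerance below are illustrative assumptions; within a single strand the rule holds only approximately, while across both strands of a duplex it is exact.

```python
# Check Chargaff's base-pair rule on a sequence: A should roughly equal T and
# G should roughly equal C.
from collections import Counter

def chargaff_ok(seq: str, tolerance: float = 0.02) -> bool:
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ATGC")
    a, t, g, c = (counts[b] / total for b in "ATGC")
    return abs(a - t) <= tolerance and abs(g - c) <= tolerance

print(chargaff_ok("ATGCGCATTAGCGCAT"))  # -> True for this toy sequence
```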
DNA does not usually exist as a single strand, but instead as a pair of strands that are held tightly together. These two long strands coil around each other, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside, and a base linked to a sugar and to one or more phosphate groups is called a nucleotide. A biopolymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.
The backbone of the DNA strand is made from alternating phosphate and sugar groups. The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These are known as the 3′-end (three prime end), and 5′-end (five prime end) carbons, the prime symbol being used to distinguish these carbon atoms from those of the base to which the deoxyribose forms a glycosidic bond. Therefore, any DNA strand normally has one end at which there is a phosphate group attached to the 5′ carbon of a ribose (the 5′ phosphoryl) and another end at which there is a free hydroxyl group attached to the 3′ carbon of a ribose (the 3′ hydroxyl). The orientation of the 3′ and 5′ carbons along the sugar-phosphate backbone confers directionality (sometimes called polarity) to each DNA strand. In a nucleic acid double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime end (5′ ), and three prime end (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the alternative pentose sugar ribose in RNA.
The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate to form the complete nucleotide, as shown for adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, forming A-T and G-C base pairs.
The nucleobases are classified into two types: the purines, A and G, which are fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T. A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology.
Modified bases occur in DNA. The first of these to be recognised was 5-methylcytosine, which was found in the genome of Mycobacterium tuberculosis in 1925. Noncanonical bases are also found in bacterial viruses (bacteriophages), where their presence helps the phage avoid the restriction enzymes present in bacteria; this enzyme system acts at least in part as a molecular immune system protecting bacteria from infection by viruses. Modifications of the bases cytosine and adenine, the more commonly modified DNA bases, play vital roles in the epigenetic control of gene expression in plants and animals.
Listing of non-canonical bases found in DNA
- Modified Adenosine
- Modified Guanine
- Modified Cytosine
- Modified Thymidine
- Uracil and modifications
- Base J
- 2,6-Diaminopurine (2-Aminoadenine)
Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. One groove, the major groove, is 22 ångströms (2.2 nm) wide and the other, the minor groove, is 12 Å (1.2 nm) wide. The width of the major groove means that the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove. This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.
In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix (from six-carbon ring to six-carbon ring) is called a Watson-Crick base pair. DNA with high GC-content is more stable than DNA with low GC-content. A Hoogsteen base pair (hydrogen bonding the 6-carbon ring to the 5-carbon ring) is a rare variation of base-pairing. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature. As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in organisms.
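The complementarity and antiparallel orientation described above are often summarized in the "reverse complement" operation: the partner of a strand read 5′→3′ is obtained by complementing every base (A–T, G–C) and reversing the result, so that it too reads 5′→3′. A minimal Python sketch with an invented example sequence:

```python
# Reverse complement: complement each base, then reverse to restore 5'->3' order.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq.upper()))

print(reverse_complement("ATGCCA"))  # -> "TGGCAT"
```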
ssDNA vs. dsDNA
As noted above, most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded (dsDNA) structure is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart—a process known as melting—to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperature, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).
The stability of the dsDNA form depends not only on the GC-content (% G,C basepairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the "melting temperature", which is the temperature at which 50% of the double-strand molecules are converted to single-strand molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands. In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.
In the laboratory, the strength of this interaction can be measured by finding the temperature necessary to break half of the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules have no single common shape, but some conformations are more stable than others.
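As a rough numerical illustration of the GC-content effect, the sketch below computes GC content and an approximate melting temperature using the Wallace rule of thumb (Tm ≈ 2·(A+T) + 4·(G+C) °C). That rule applies only to short oligonucleotides and ignores salt and DNA concentration, so this is a back-of-the-envelope estimate, not the general model discussed above.

```python
# Compare an AT-rich and a GC-rich 10-mer: GC fraction and Wallace-rule Tm.
def gc_content(seq: str) -> float:
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc  # degrees Celsius, short oligos only

for oligo in ("TATAATTATA", "GCGCGCGCGC"):  # AT-rich vs GC-rich
    print(oligo, f"GC={gc_content(oligo):.0%}", f"Tm~{wallace_tm(oligo)}C")
```

The AT-rich example gives a much lower estimate than the GC-rich one, consistent with AT-rich regions such as the Pribnow box being easier to pull apart.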
Sense and antisense
A DNA sequence is called a "sense" sequence if it is the same as that of a messenger RNA copy that is translated into protein. The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear. One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.
A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes. In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription, while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.
DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound. If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases. These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.
Alternative DNA structures
DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although, only B-DNA and Z-DNA have been directly observed in functional organisms. The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.
The first published reports of A-DNA X-ray diffraction patterns—and also B-DNA—used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA. An alternative analysis was then proposed by Wilkins et al., in 1953, for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions. In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double-helix.
Although the B-DNA form is most common under the conditions found in cells, it is not a well-defined conformation but a family of related DNA conformations that occur at the high hydration levels present in cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.
Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes. Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form. These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.
Alternative DNA chemistry
For many years, exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A 2010 report announced this possibility in the bacterium GFAJ-1, but the research was disputed, and later evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.
At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes. These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected. In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.
These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases, known as a guanine tetrad, form a flat plate. These flat four-base units then stack on top of each other to form a stable G-quadruplex structure. These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit. Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.
In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins. At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.
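A toy sketch of how the TTAGGG repeats mentioned above might be located computationally: find the longest uninterrupted run of the repeat unit in a sequence. The example sequence is invented and far shorter than a real telomere; this is illustrative only.

```python
# Find the longest tandem run of the human telomeric repeat unit TTAGGG.
import re

def longest_ttaggg_run(seq: str) -> int:
    """Length, in repeat units, of the longest uninterrupted TTAGGG run."""
    runs = re.findall(r"(?:TTAGGG)+", seq.upper())
    return max((len(run) // 6 for run in runs), default=0)

print(longest_ttaggg_run("AACCTTAGGGTTAGGGTTAGGGAC"))  # -> 3
```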
In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible. Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below.
Several artificial nucleobases have been synthesized and successfully incorporated into the eight-base DNA analogue named Hachimoji DNA. Dubbed S, B, P, and Z, these artificial bases are capable of bonding with each other in a predictable way (S–B and P–Z), of maintaining the double helix structure of DNA, and of being transcribed into RNA. Their existence could be seen as an indication that there is nothing special about the four natural nucleobases that evolved on Earth. On the other hand, DNA is tightly related to RNA, which not only acts as a transcript of DNA but also performs many tasks in cells as molecular machines; to do so, RNA must fold into a structure. It has been shown that at least four bases are required for the corresponding RNA to be able to form all possible structures; a larger number is also possible, but it would go against the natural principle of least effort.
Chemical modifications and altered DNA packaging
Base modifications and DNA packaging
The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.
For one example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes. The average level of methylation varies between organisms—the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine. Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations. Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain, and the glycosylation of uracil to produce the "J-base" in kinetoplastids.
DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases. On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks. A typical human cell contains about 150,000 bases that have suffered oxidative damage. Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations. These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer. DNA damages that are naturally occurring, due to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, etc., also occur frequently. Although most of these damages are repaired, in any cell some DNA damage may remain despite the action of repair processes. These remaining DNA damages accumulate with age in mammalian postmitotic tissues. This accumulation appears to be an important underlying cause of aging.
Many mutagens fit into the space between two adjacent base pairs, this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations. As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen. Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication. Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.
DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes. The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In alternative fashion, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.
Genes and genomes
Genomic DNA is tightly and orderly packed in the process called DNA condensation, to fit the small available volumes of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid. The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame.
In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences. The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species, represent a long-standing puzzle known as the "C-value enigma". However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.
Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes but are important for the function and stability of chromosomes. An abundant form of noncoding DNA in humans are pseudogenes, which are copies of genes that have been disabled by mutation. These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.
Transcription and translation
A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).
In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases in 3-letter combinations, there are 64 possible codons (4³ combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA, and TAG codons.
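To make the codon reading concrete, here is a minimal Python sketch that walks an mRNA sequence (with U in place of T) three bases at a time and maps each codon to an amino acid, stopping at a stop codon. Only a small illustrative subset of the 64-codon standard genetic code is included; a real implementation would use the full table.

```python
# Translate an mRNA sequence codon by codon using a partial genetic code table.
CODON_TABLE = {
    "AUG": "Met",  # methionine, also the usual start codon
    "UUU": "Phe", "CUG": "Leu", "AAA": "Lys", "GGC": "Gly",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna: str) -> list:
    peptide = []
    for i in range(0, len(mrna) - 2, 3):               # read codon by codon
        residue = CODON_TABLE.get(mrna[i:i + 3], "???")
        if residue == "STOP":
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUAAAGGCUAA"))  # -> ['Met', 'Phe', 'Lys', 'Gly']
```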
Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix. In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.
Extracellular nucleic acids
Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L. Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer; it may provide nutrients; and it may act as a buffer to recruit or titrate ions or antibiotics. Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm; it may contribute to biofilm formation; and it may contribute to the biofilm's physical strength and resistance to biological stress.
Under the name of environmental DNA, eDNA has seen increased use in the natural sciences as a survey tool for ecology, monitoring the movements and presence of species in water, air, or on land, and assessing an area's biodiversity.
Interactions with proteins
All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.
Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes, this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved. The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence. Chemical modifications of these basic amino acid residues include methylation, phosphorylation, and acetylation. These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription. Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA. These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.
A distinct group of DNA-binding proteins is the DNA-binding proteins that specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination, and DNA repair. These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.
In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription. Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.
As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes. Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA come from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base-interactions are made in the major groove, where the bases are most accessible.
Nucleases and ligases
Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme shown to the left recognizes the 6-base sequence 5′-GATATC-3′ and makes a cut at the horizontal line. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system. In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
Enzymes called DNA ligases can rejoin cut or broken DNA strands. Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.
Topoisomerases and helicases
Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break. Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix. Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.
Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands. These enzymes are essential for most processes where enzymes need to access the DNA bases.
Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequence of their products is created based on existing polynucleotide chains—which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction. In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.
In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy are precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed. In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.
RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres. HIV reverse transcriptase, for example, is essential for the replication of the virus that causes AIDS. Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure. It synthesizes telomeres at the ends of chromosomes. Telomeres prevent fusion of the ends of neighboring chromosomes and protect chromosome ends from damage.
Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.
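The relationship between template, coding strand, and transcript can be made concrete with a small sketch. The Python toy below (an illustration, not a model of RNA polymerase itself) assumes the template strand is supplied written 3′ to 5′, so reading it left to right yields the mRNA 5′ to 3′.

```python
# Minimal sketch of transcription: RNA polymerase reads the template strand
# 3'->5' and builds an mRNA 5'->3'. The mRNA therefore matches the coding
# (non-template) strand, with uracil (U) in place of thymine (T).

RNA_PAIR = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_3_to_5: str) -> str:
    """Return the mRNA (5'->3') made from a template strand written 3'->5'."""
    return "".join(RNA_PAIR[base] for base in template_3_to_5.upper())

# Template 3'-TACGGCAT-5'  ->  mRNA 5'-AUGCCGUA-3'
print(transcribe("TACGGCAT"))
```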
Genetic recombination
A DNA helix usually does not interact with other segments of DNA, and in human cells, the different chromosomes even occupy separate areas in the nucleus called "chromosome territories". This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.
Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins. Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.
The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51. The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA. A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA. Only strands of like polarity exchange DNA during recombination. There are two types of cleavage: east–west cleavage and north–south cleavage. North–south cleavage nicks both strands of DNA, while east–west cleavage leaves one strand of DNA intact. The formation of a Holliday junction during recombination allows genetic diversity, the exchange of genes between chromosomes, and the expression of wild-type viral genomes.
Evolution
DNA contains the genetic information that allows all forms of life to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material. RNA may have acted as the central part of early cell metabolism, as it can both transmit genetic information and carry out catalysis as part of ribozymes. This ancient RNA world, in which nucleic acid would have been used for both catalysis and genetics, may have influenced the evolution of the current genetic code based on four nucleotide bases. Such a code would arise because the number of different bases in an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes. However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years and slowly degrades into short fragments in solution. Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old, but these claims are controversial.
Building blocks of DNA (adenine, guanine, and related organic molecules) may have been formed extraterrestrially in outer space. Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds.
Uses in technology
Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be introduced into organisms in the form of plasmids or, in the appropriate format, by using a viral vector. The genetically modified organisms produced can be used to make products such as recombinant proteins, used in medical research, or be grown in agriculture.
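The idea behind primer placement in the polymerase chain reaction can be sketched in a few lines. The Python toy below ("in silico PCR") finds where a forward primer and the reverse complement of a reverse primer would bind on one strand and reports the stretch they bracket; the primers and template are invented for illustration, and real primer design involves much more (melting temperatures, specificity, secondary structure).

```python
# Minimal "in silico PCR" sketch: locate the forward primer and the
# reverse primer's reverse complement on the given strand, then return the
# amplicon the two primers would bracket.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def revcomp(s: str) -> str:
    return "".join(PAIR[b] for b in reversed(s.upper()))

def amplicon(template: str, fwd_primer: str, rev_primer: str) -> str:
    """Return the stretch amplified by the two primers, or '' if either fails to bind."""
    template = template.upper()
    start = template.find(fwd_primer.upper())
    end = template.find(revcomp(rev_primer), start + len(fwd_primer)) if start != -1 else -1
    if start == -1 or end == -1:
        return ""
    return template[start:end + len(rev_primer)]

tpl = "GGGAATTCAAACCCTTTGGGAAATATCGATTT"
print(amplicon(tpl, "AATTC", "ATCGA"))   # forward primer through the reverse-primer site
```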
Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to match the DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, also called DNA fingerprinting. In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a match. However, identification can be complicated if the scene is contaminated with DNA from several people. DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys and was first used in forensic science to convict Colin Pitchfork in 1988 in the Enderby murders case.
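The comparison of repeat lengths can be illustrated with a short sketch. The Python toy below counts how many consecutive copies of a repeat unit occur at a locus in two samples; the sequences and the repeat unit are made up for illustration, and real profiling uses standardized loci and capillary electrophoresis rather than naive string counting.

```python
# Toy illustration of STR comparison: count the longest run of tandem
# copies of a repeat unit in each sample and compare the counts.

def longest_tandem_repeat(seq: str, unit: str) -> int:
    """Return the maximum number of consecutive copies of `unit` in `seq`."""
    best = 0
    for start in range(len(seq)):
        count, pos = 0, start
        while seq.startswith(unit, pos):
            count += 1
            pos += len(unit)
        best = max(best, count)
    return best

sample_a = "CCTATCTATCTATCTATCGG"   # 4 tandem copies of TATC
sample_b = "CCTATCTATCGG"           # 2 tandem copies of TATC
print(longest_tandem_repeat(sample_a, "TATC"),
      longest_tandem_repeat(sample_b, "TATC"))
```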
The development of forensic science and the ability to obtain genetic matches from minute samples of blood, skin, saliva, or hair has led to the re-examination of many cases. Evidence can now be uncovered that was scientifically impossible to obtain at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where previous trials failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defense to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulously strict handling procedures for new cases of serious crime.
DNA profiling is also used successfully to positively identify victims of mass casualty incidents, bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members.
DNA profiling is also used in DNA paternity testing to determine whether someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. Normal DNA sequencing methods are applied after birth, but there are new methods that can test paternity while the mother is still pregnant.
DNA enzymes or catalytic DNA
Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994. They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions, including RNA-DNA cleavage, RNA-DNA ligation, amino acid phosphorylation and dephosphorylation, and carbon-carbon bond formation. DNAzymes can enhance the rate of chemical reactions by up to 100,000,000,000-fold over the uncatalyzed reaction. The most extensively studied class of DNAzymes is the RNA-cleaving type, which has been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific), the CA1-3 DNAzymes (copper-specific), the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific). The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in cells.
Bioinformatics
Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning, and database theory. String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides. A DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function. Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally. Entire genomes may also be compared, which can shed light on the evolutionary history of a particular organism and permit the examination of complex evolutionary events.
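As a small illustration of the comparison tasks mentioned above, the Python sketch below reports the positions at which two already-aligned sequences differ, i.e. candidate point mutations. The sequences are invented, and real aligners also handle insertions and deletions; this toy assumes a gap-free alignment of equal length.

```python
# Toy comparison of two aligned, equal-length DNA sequences: report every
# column where they differ (candidate point mutations).

def point_differences(seq_a: str, seq_b: str):
    """Return (position, base_a, base_b) for every mismatched column."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must already be aligned to equal length")
    return [(i, a, b) for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

ref   = "ATGCCGTAAGCT"
query = "ATGCTGTAAGTT"
print(point_differences(ref, query))   # [(4, 'C', 'T'), (10, 'C', 'T')]
```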
DNA nanotechnology
DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties. DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra. Nanomechanical devices and algorithmic self-assembly have also been demonstrated, and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.
History and anthropology
Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny. This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology.
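To make the idea of inferring relatedness from sequence comparison concrete, the Python sketch below computes pairwise difference counts for a few short, already-aligned sequences and reports the most similar pair. The taxa and sequences are invented; real phylogenetic software builds trees from such distance matrices using explicit models of substitution.

```python
# Toy pairwise-distance comparison for phylogenetic intuition.
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Count differing positions between two aligned, equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

sequences = {
    "taxon_A": "ATGCCGTA",
    "taxon_B": "ATGCTGTA",
    "taxon_C": "TTGACGAA",
}

pairs = {(i, j): hamming(sequences[i], sequences[j])
         for i, j in combinations(sequences, 2)}
closest = min(pairs, key=pairs.get)
print(pairs)      # pairwise difference counts
print(closest)    # the most similar pair, here ('taxon_A', 'taxon_B')
```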
History
DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein". In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases.
In 1909, Phoebus Levene identified the base, sugar, and phosphate nucleotide unit of RNA (then named "yeast nucleic acid"). In 1929, Levene identified the deoxyribose sugar in "thymus nucleic acid" (DNA). Levene suggested that DNA consisted of a string of four nucleotide units linked together through the phosphate groups (the "tetranucleotide hypothesis"). Levene thought the chain was short and the bases repeated in a fixed order. In 1927, Nikolai Koltsov proposed that inherited traits would be passed on via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template". In 1928, Frederick Griffith discovered in his experiment that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form. This experiment provided the first clear suggestion that DNA carries genetic information.
In 1933, while studying virgin sea urchin eggs, Jean Brachet suggested that DNA is found in the cell nucleus and that RNA is present exclusively in the cytoplasm. At the time, "yeast nucleic acid" (RNA) was thought to occur only in plants, while "thymus nucleic acid" (DNA) only in animals. The latter was thought to be a tetramer, with the function of buffering cellular pH.
In 1943, Oswald Avery, along with co-workers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle, supporting Griffith's suggestion (Avery–MacLeod–McCarty experiment). Late in 1951, Francis Crick started working with James Watson at the Cavendish Laboratory within the University of Cambridge. DNA's role in heredity was confirmed in 1952 when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the enterobacteria phage T2.
In May 1952, Raymond Gosling, a graduate student working under the supervision of Rosalind Franklin, took an X-ray diffraction image, labeled as "Photo 51", at high hydration levels of DNA. This photo was given to Watson and Crick by Maurice Wilkins and was critical to their obtaining the correct structure of DNA. Franklin told Crick and Watson that the backbones had to be on the outside. Before then, Linus Pauling, and Watson and Crick, had erroneous models with the chains inside and the bases pointing outwards. Her identification of the space group for DNA crystals revealed to Crick that the two DNA strands were antiparallel.
In February 1953, Linus Pauling and Robert Corey proposed a model for nucleic acids containing three intertwined chains, with the phosphates near the axis, and the bases on the outside. Watson and Crick completed their model, which is now accepted as the first correct model of the double-helix of DNA. On 28 February 1953 Crick interrupted patrons' lunchtime at The Eagle pub in Cambridge to announce that he and Watson had "discovered the secret of life".
The 25 April 1953 issue of the journal Nature published a series of five articles giving the Watson and Crick double-helix structure of DNA and evidence supporting it. The structure was reported in a letter titled "Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid", in which they said, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material." This letter was followed by a letter from Franklin and Gosling, which was the first publication of their own X-ray diffraction data and of their original analysis method. Then followed a letter by Wilkins and two of his colleagues, which contained an analysis of in vivo B-DNA X-ray patterns, and which supported the presence in vivo of the Watson and Crick structure.
In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine. Because Nobel Prizes are awarded only to living recipients, Franklin could not share the award. A debate continues about who should receive credit for the discovery.
In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis". Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment. Further work by Crick and co-workers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley, and Marshall Warren Nirenberg to decipher the genetic code. These findings represent the birth of molecular biology.
See also
- Autosome – Any chromosome other than a sex chromosome
- Comparison of nucleic acid simulation software
- Crystallography – scientific study of crystal structure
- DNA Day – Holiday celebrated on April 25
- DNA-encoded chemical library
- DNA microarray – Collection of microscopic DNA spots attached to a solid surface
- Genetic disorder – Health problem caused by one or more abnormalities in the genome
- Genetic genealogy – The use of DNA testing in combination with traditional genealogical methods to infer relationships between individuals and find ancestors
- Haplotype – Group of genes from one parent
- Meiosis – Type of cell division in sexually-reproducing organisms used to produce gametes
- Nucleic acid notation – Universal notation using the Roman characters A, C, G, and T to represent the four DNA nucleotides
- Nucleic acid sequence – Succession of nucleotides in a nucleic acid
- Pangenesis – former theory that inheritance was based on particles from all parts of the body
- Ribosomal DNA
- Southern blot
- X-ray scattering techniques
- Xeno nucleic acid
- "deoxyribonucleic acid". Merriam-Webster Dictionary.
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P (2014). Molecular Biology of the Cell (6th ed.). Garland. p. Chapter 4: DNA, Chromosomes and Genomes. ISBN 978-0-8153-4432-2. Archived from the original on 14 July 2014.
- Purcell A. "DNA". Basic Biology. Archived from the original on 5 January 2017.
- "Uracil". Genome.gov. Retrieved 21 November 2019.
- Russell P (2001). iGenetics. New York: Benjamin Cummings. ISBN 0-8053-4553-1.
- Saenger W (1984). Principles of Nucleic Acid Structure. New York: Springer-Verlag. ISBN 0-387-90762-9.
- Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Peter W (2002). Molecular Biology of the Cell (Fourth ed.). New York and London: Garland Science. ISBN 0-8153-3218-1. OCLC 145080076. Archived from the original on 1 November 2016.
- Irobalieva RN, Fogg JM, Catanese DJ, Catanese DJ, Sutthibutpong T, Chen M, Barker AK, Ludtke SJ, Harris SA, Schmid MF, Chiu W, Zechiedrich L (October 2015). "Structural diversity of supercoiled DNA". Nature Communications. 6: 8440. Bibcode:2015NatCo...6.8440I. doi:10.1038/ncomms9440. ISSN 2041-1723. PMC 4608029. PMID 26455586.
- Watson JD, Crick FH (April 1953). "Molecular structure of nucleic acids; a structure for deoxyribose nucleic acid" (PDF). Nature. 171 (4356): 737–38. Bibcode:1953Natur.171..737W. doi:10.1038/171737a0. ISSN 0028-0836. PMID 13054692. S2CID 4253007. Archived (PDF) from the original on 4 February 2007.
- Mandelkern M, Elias JG, Eden D, Crothers DM (October 1981). "The dimensions of DNA in solution". Journal of Molecular Biology. 152 (1): 153–61. doi:10.1016/0022-2836(81)90099-1. ISSN 0022-2836. PMID 7338906.
- Gregory SG, Barlow KF, McLay KE, Kaul R, Swarbreck D, Dunham A, et al. (May 2006). "The DNA sequence and biological annotation of human chromosome 1". Nature. 441 (7091): 315–21. Bibcode:2006Natur.441..315G. doi:10.1038/nature04727. PMID 16710414.
- Berg J, Tymoczko J, Stryer L (2002). Biochemistry. W.H. Freeman and Company. ISBN 0-7167-4955-6.
- IUPAC-IUB Commission on Biochemical Nomenclature (CBN) (December 1970). "Abbreviations and Symbols for Nucleic Acids, Polynucleotides and their Constituents. Recommendations 1970". The Biochemical Journal. 120 (3): 449–54. doi:10.1042/bj1200449. ISSN 0306-3283. PMC 1179624. PMID 5499957. Archived from the original on 5 February 2007.
- Ghosh A, Bansal M (April 2003). "A glossary of DNA structures from A to Z". Acta Crystallographica Section D. 59 (Pt 4): 620–26. doi:10.1107/S0907444903003251. ISSN 0907-4449. PMID 12657780.
- Yakovchuk P, Protozanova E, Frank-Kamenetskii MD (2006). "Base-stacking and base-pairing contributions into thermal stability of the DNA double helix". Nucleic Acids Research. 34 (2): 564–74. doi:10.1093/nar/gkj454. ISSN 0305-1048. PMC 1360284. PMID 16449200.
- Tropp BE (2012). Molecular Biology (4th ed.). Sudbury, Mass.: Jones and Barlett Learning. ISBN 978-0-7637-8663-2.
- Carr S (1953). "Watson-Crick Structure of DNA". Memorial University of Newfoundland. Archived from the original on 19 July 2016. Retrieved 13 July 2016.
- Verma S, Eckstein F (1998). "Modified oligonucleotides: synthesis and strategy for users". Annual Review of Biochemistry. 67: 99–134. doi:10.1146/annurev.biochem.67.1.99. ISSN 0066-4154. PMID 9759484.
- Johnson TB, Coghill RD (1925). "Pyrimidines. CIII. The discovery of 5-methylcytosine in tuberculinic acid, the nucleic acid of the tubercle bacillus". Journal of the American Chemical Society. 47: 2838–44. doi:10.1021/ja01688a030. ISSN 0002-7863.
- Weigele P, Raleigh EA (October 2016). "Biosynthesis and Function of Modified Bases in Bacteria and Their Viruses". Chemical Reviews. 116 (20): 12655–12687. doi:10.1021/acs.chemrev.6b00114. ISSN 0009-2665. PMID 27319741.
- Kumar S, Chinnusamy V, Mohapatra T (2018). "Epigenetics of Modified DNA Bases: 5-Methylcytosine and Beyond". Frontiers in Genetics. 9: 640. doi:10.3389/fgene.2018.00640. ISSN 1664-8021. PMC 6305559. PMID 30619465.
- Carell T, Kurz MQ, Müller M, Rossa M, Spada F (April 2018). "Non-canonical Bases in the Genome: The Regulatory Information Layer in DNA". Angewandte Chemie. 57 (16): 4296–4312. doi:10.1002/anie.201708228. PMID 28941008.
- Wing R, Drew H, Takano T, Broka C, Tanaka S, Itakura K, Dickerson RE (October 1980). "Crystal structure analysis of a complete turn of B-DNA". Nature. 287 (5784): 755–58. Bibcode:1980Natur.287..755W. doi:10.1038/287755a0. PMID 7432492. S2CID 4315465.
- Pabo CO, Sauer RT (1984). "Protein-DNA recognition". Annual Review of Biochemistry. 53: 293–321. doi:10.1146/annurev.bi.53.070184.001453. PMID 6236744.
- Nikolova EN, Zhou H, Gottardo FL, Alvey HS, Kimsey IJ, Al-Hashimi HM (2013). "A historical account of Hoogsteen base-pairs in duplex DNA". Biopolymers. 99 (12): 955–68. doi:10.1002/bip.22334. PMC 3844552. PMID 23818176.
- Clausen-Schaumann H, Rief M, Tolksdorf C, Gaub HE (April 2000). "Mechanical stability of single DNA molecules". Biophysical Journal. 78 (4): 1997–2007. Bibcode:2000BpJ....78.1997C. doi:10.1016/S0006-3495(00)76747-6. PMC 1300792. PMID 10733978.
- Chalikian TV, Völker J, Plum GE, Breslauer KJ (July 1999). "A more unified picture for the thermodynamics of nucleic acid duplex melting: a characterization by calorimetric and volumetric techniques". Proceedings of the National Academy of Sciences of the United States of America. 96 (14): 7853–58. Bibcode:1999PNAS...96.7853C. doi:10.1073/pnas.96.14.7853. PMC 22151. PMID 10393911.
- deHaseth PL, Helmann JD (June 1995). "Open complex formation by Escherichia coli RNA polymerase: the mechanism of polymerase-induced strand separation of double helical DNA". Molecular Microbiology. 16 (5): 817–24. doi:10.1111/j.1365-2958.1995.tb02309.x. PMID 7476180. S2CID 24479358.
- Isaksson J, Acharya S, Barman J, Cheruku P, Chattopadhyaya J (December 2004). "Single-stranded adenine-rich DNA and RNA retain structural characteristics of their respective double-stranded conformations and show directional differences in stacking pattern" (PDF). Biochemistry. 43 (51): 15996–6010. doi:10.1021/bi048221v. PMID 15609994. Archived (PDF) from the original on 10 June 2007.
- Designation of the two strands of DNA. JCBN/NC-IUB Newsletter, 1989. Archived 24 April 2008 at the Wayback Machine. Retrieved 7 May 2008.
- Hüttenhofer A, Schattner P, Polacek N (May 2005). "Non-coding RNAs: hope or hype?". Trends in Genetics. 21 (5): 289–97. doi:10.1016/j.tig.2005.03.007. PMID 15851066.
- Munroe SH (November 2004). "Diversity of antisense regulation in eukaryotes: multiple mechanisms, emerging patterns". Journal of Cellular Biochemistry. 93 (4): 664–71. doi:10.1002/jcb.20252. PMID 15389973. S2CID 23748148.
- Makalowska I, Lin CF, Makalowski W (February 2005). "Overlapping genes in vertebrate genomes". Computational Biology and Chemistry. 29 (1): 1–12. doi:10.1016/j.compbiolchem.2004.12.006. PMID 15680581.
- Johnson ZI, Chisholm SW (November 2004). "Properties of overlapping genes are conserved across microbial genomes". Genome Research. 14 (11): 2268–72. doi:10.1101/gr.2433104. PMC 525685. PMID 15520290.
- Lamb RA, Horvath CM (August 1991). "Diversity of coding strategies in influenza viruses". Trends in Genetics. 7 (8): 261–66. doi:10.1016/0168-9525(91)90326-L. PMC 7173306. PMID 1771674.
- Benham CJ, Mielke SP (2005). "DNA mechanics" (PDF). Annual Review of Biomedical Engineering. 7: 21–53. doi:10.1146/annurev.bioeng.6.062403.132016. PMID 16004565. S2CID 1427671. Archived from the original (PDF) on 1 March 2019.
- Champoux JJ (2001). "DNA topoisomerases: structure, function, and mechanism" (PDF). Annual Review of Biochemistry. 70: 369–413. doi:10.1146/annurev.biochem.70.1.369. PMID 11395412. S2CID 18144189.
- Wang JC (June 2002). "Cellular roles of DNA topoisomerases: a molecular perspective". Nature Reviews Molecular Cell Biology. 3 (6): 430–40. doi:10.1038/nrm831. PMID 12042765. S2CID 205496065.
- Basu HS, Feuerstein BG, Zarling DA, Shafer RH, Marton LJ (October 1988). "Recognition of Z-RNA and Z-DNA determinants by polyamines in solution: experimental and theoretical studies". Journal of Biomolecular Structure & Dynamics. 6 (2): 299–309. doi:10.1080/07391102.1988.10507714. PMID 2482766.
- Franklin RE, Gosling RG (6 March 1953). "The Structure of Sodium Thymonucleate Fibres I. The Influence of Water Content" (PDF). Acta Crystallogr. 6 (8–9): 673–77. doi:10.1107/S0365110X53001939. Archived (PDF) from the original on 9 January 2016.
Franklin RE, Gosling RG (1953). "The structure of sodium thymonucleate fibres. II. The cylindrically symmetrical Patterson function" (PDF). Acta Crystallogr. 6 (8–9): 678–85. doi:10.1107/S0365110X53001940.
- Franklin RE, Gosling RG (April 1953). "Molecular configuration in sodium thymonucleate" (PDF). Nature. 171 (4356): 740–41. Bibcode:1953Natur.171..740F. doi:10.1038/171740a0. PMID 13054694. S2CID 4268222. Archived (PDF) from the original on 3 January 2011.
- Wilkins MH, Stokes AR, Wilson HR (April 1953). "Molecular structure of deoxypentose nucleic acids" (PDF). Nature. 171 (4356): 738–40. Bibcode:1953Natur.171..738W. doi:10.1038/171738a0. PMID 13054693. S2CID 4280080. Archived (PDF) from the original on 13 May 2011.
- Leslie AG, Arnott S, Chandrasekaran R, Ratliff RL (October 1980). "Polymorphism of DNA double helices". Journal of Molecular Biology. 143 (1): 49–72. doi:10.1016/0022-2836(80)90124-2. PMID 7441761.
- Baianu IC (1980). "Structural Order and Partial Disorder in Biological systems". Bull. Math. Biol. 42 (4): 137–41. doi:10.1007/BF02462372. S2CID 189888972.
- Hosemann R, Bagchi RN (1962). Direct analysis of diffraction by matter. Amsterdam – New York: North-Holland Publishers.
- Baianu IC (1978). "X-ray scattering by partially disordered membrane systems" (PDF). Acta Crystallogr A. 34 (5): 751–53. Bibcode:1978AcCrA..34..751B. doi:10.1107/S0567739478001540.
- Wahl MC, Sundaralingam M (1997). "Crystal structures of A-DNA duplexes". Biopolymers. 44 (1): 45–63. doi:10.1002/(SICI)1097-0282(1997)44:1<45::AID-BIP4>3.0.CO;2-#. PMID 9097733.
- Lu XJ, Shakked Z, Olson WK (July 2000). "A-form conformational motifs in ligand-bound DNA structures". Journal of Molecular Biology. 300 (4): 819–40. doi:10.1006/jmbi.2000.3690. PMID 10891271.
- Rothenburg S, Koch-Nolte F, Haag F (December 2001). "DNA methylation and Z-DNA formation as mediators of quantitative differences in the expression of alleles". Immunological Reviews. 184: 286–98. doi:10.1034/j.1600-065x.2001.1840125.x. PMID 12086319. S2CID 20589136.
- Oh DB, Kim YG, Rich A (December 2002). "Z-DNA-binding proteins can act as potent effectors of gene expression in vivo". Proceedings of the National Academy of Sciences of the United States of America. 99 (26): 16666–71. Bibcode:2002PNAS...9916666O. doi:10.1073/pnas.262672699. PMC 139201. PMID 12486233.
- Palmer J (2 December 2010). "Arsenic-loving bacteria may help in hunt for alien life". BBC News. Archived from the original on 3 December 2010. Retrieved 2 December 2010.
- Bortman H (2 December 2010). "Arsenic-Eating Bacteria Opens New Possibilities for Alien Life". Space.com. Archived from the original on 4 December 2010. Retrieved 2 December 2010.
- Katsnelson A (2 December 2010). "Arsenic-eating microbe may redefine chemistry of life". Nature News. doi:10.1038/news.2010.645. Archived from the original on 12 February 2012.
- Cressey D (3 October 2012). "'Arsenic-life' Bacterium Prefers Phosphorus after all". Nature News. doi:10.1038/nature.2012.11520. S2CID 87341731.
- Greider CW, Blackburn EH (December 1985). "Identification of a specific telomere terminal transferase activity in Tetrahymena extracts". Cell. 43 (2 Pt 1): 405–13. doi:10.1016/0092-8674(85)90170-9. PMID 3907856.
- Nugent CI, Lundblad V (April 1998). "The telomerase reverse transcriptase: components and regulation". Genes & Development. 12 (8): 1073–85. doi:10.1101/gad.12.8.1073. PMID 9553037.
- Wright WE, Tesmer VM, Huffman KE, Levene SD, Shay JW (November 1997). "Normal human chromosomes have long G-rich telomeric overhangs at one end". Genes & Development. 11 (21): 2801–09. doi:10.1101/gad.11.21.2801. PMC 316649. PMID 9353250.
- Burge S, Parkinson GN, Hazel P, Todd AK, Neidle S (2006). "Quadruplex DNA: sequence, topology and structure". Nucleic Acids Research. 34 (19): 5402–15. doi:10.1093/nar/gkl655. PMC 1636468. PMID 17012276.
- Parkinson GN, Lee MP, Neidle S (June 2002). "Crystal structure of parallel quadruplexes from human telomeric DNA". Nature. 417 (6891): 876–80. Bibcode:2002Natur.417..876P. doi:10.1038/nature755. PMID 12050675. S2CID 4422211.
- Griffith JD, Comeau L, Rosenfield S, Stansel RM, Bianchi A, Moss H, de Lange T (May 1999). "Mammalian telomeres end in a large duplex loop". Cell. 97 (4): 503–14. CiteSeerX 10.1.1.335.2649. doi:10.1016/S0092-8674(00)80760-6. PMID 10338214. S2CID 721901.
- Seeman NC (November 2005). "DNA enables nanoscale control of the structure of matter". Quarterly Reviews of Biophysics. 38 (4): 363–71. doi:10.1017/S0033583505004087. PMC 3478329. PMID 16515737.
- Warren M (21 February 2019). "Four new DNA letters double life's alphabet". Nature. 566 (7745): 436. Bibcode:2019Natur.566..436W. doi:10.1038/d41586-019-00650-8. PMID 30809059.
- Hoshika S, Leal NA, Kim MJ, Kim MS, Karalkar NB, Kim HJ, et al. (22 February 2019). "Hachimoji DNA and RNA: A genetic system with eight building blocks (paywall)". Science. 363 (6429): 884–887. Bibcode:2019Sci...363..884H. doi:10.1126/science.aat0971. PMC 6413494. PMID 30792304.
- Burghardt B, Hartmann AK (February 2007). "RNA secondary structure design". Physical Review E. 75 (2): 021920. arXiv:physics/0609135. Bibcode:2007PhRvE..75b1920B. doi:10.1103/PhysRevE.75.021920. PMID 17358380. S2CID 17574854.
- Hu Q, Rosenfeld MG (2012). "Epigenetic regulation of human embryonic stem cells". Frontiers in Genetics. 3: 238. doi:10.3389/fgene.2012.00238. PMC 3488762. PMID 23133442.
- Klose RJ, Bird AP (February 2006). "Genomic DNA methylation: the mark and its mediators". Trends in Biochemical Sciences. 31 (2): 89–97. doi:10.1016/j.tibs.2005.12.008. PMID 16403636.
- Bird A (January 2002). "DNA methylation patterns and epigenetic memory". Genes & Development. 16 (1): 6–21. doi:10.1101/gad.947102. PMID 11782440.
- Walsh CP, Xu GL (2006). "Cytosine methylation and DNA repair". Current Topics in Microbiology and Immunology. 301: 283–315. doi:10.1007/3-540-31390-7_11. ISBN 3-540-29114-8. PMID 16570853.
- Kriaucionis S, Heintz N (May 2009). "The nuclear DNA base 5-hydroxymethylcytosine is present in Purkinje neurons and the brain". Science. 324 (5929): 929–30. Bibcode:2009Sci...324..929K. doi:10.1126/science.1169786. PMC 3263819. PMID 19372393.
- Ratel D, Ravanat JL, Berger F, Wion D (March 2006). "N6-methyladenine: the other methylated base of DNA". BioEssays. 28 (3): 309–15. doi:10.1002/bies.20342. PMC 2754416. PMID 16479578.
- Gommers-Ampt JH, Van Leeuwen F, de Beer AL, Vliegenthart JF, Dizdaroglu M, Kowalak JA, Crain PF, Borst P (December 1993). "beta-D-glucosyl-hydroxymethyluracil: a novel modified base present in the DNA of the parasitic protozoan T. brucei". Cell. 75 (6): 1129–36. doi:10.1016/0092-8674(93)90322-H. hdl:1874/5219. PMID 8261512. S2CID 24801094.
- Douki T, Reynaud-Angelin A, Cadet J, Sage E (August 2003). "Bipyrimidine photoproducts rather than oxidative lesions are the main type of DNA damage involved in the genotoxic effect of solar UVA radiation". Biochemistry. 42 (30): 9221–26. doi:10.1021/bi034593c. PMID 12885257.
- Cadet J, Delatour T, Douki T, Gasparutto D, Pouget JP, Ravanat JL, Sauvaigo S (March 1999). "Hydroxyl radicals and DNA base damage". Mutation Research. 424 (1–2): 9–21. doi:10.1016/S0027-5107(99)00004-4. PMID 10064846.
- Beckman KB, Ames BN (August 1997). "Oxidative decay of DNA". The Journal of Biological Chemistry. 272 (32): 19633–36. doi:10.1074/jbc.272.32.19633. PMID 9289489.
- Valerie K, Povirk LF (September 2003). "Regulation and mechanisms of mammalian double-strand break repair". Oncogene. 22 (37): 5792–812. doi:10.1038/sj.onc.1206679. PMID 12947387.
- Johnson G (28 December 2010). "Unearthing Prehistoric Tumors, and Debate". The New York Times. Archived from the original on 24 June 2017.
If we lived long enough, sooner or later we all would get cancer.
- Alberts B, Johnson A, Lewis J, et al. (2002). "The Preventable Causes of Cancer". Molecular biology of the cell (4th ed.). New York: Garland Science. ISBN 0-8153-4072-9. Archived from the original on 2 January 2016.
A certain irreducible background incidence of cancer is to be expected regardless of circumstances: mutations can never be absolutely avoided, because they are an inescapable consequence of fundamental limitations on the accuracy of DNA replication, as discussed in Chapter 5. If a human could live long enough, it is inevitable that at least one of his or her cells would eventually accumulate a set of mutations sufficient for cancer to develop.
- Bernstein H, Payne CM, Bernstein C, Garewal H, Dvorak K (2008). "Cancer and aging as consequences of un-repaired DNA damage". In Kimura H, Suzuki A (eds.). New Research on DNA Damage. New York: Nova Science Publishers. pp. 1–47. ISBN 978-1-60456-581-2. Archived from the original on 25 October 2014.
- Hoeijmakers JH (October 2009). "DNA damage, aging, and cancer". The New England Journal of Medicine. 361 (15): 1475–85. doi:10.1056/NEJMra0804615. PMID 19812404.
- Freitas AA, de Magalhães JP (2011). "A review and appraisal of the DNA damage theory of ageing". Mutation Research. 728 (1–2): 12–22. doi:10.1016/j.mrrev.2011.05.001. PMID 21600302.
- Ferguson LR, Denny WA (September 1991). "The genetic toxicology of acridines". Mutation Research. 258 (2): 123–60. doi:10.1016/0165-1110(91)90006-H. PMID 1881402.
- Stephens TD, Bunde CJ, Fillmore BJ (June 2000). "Mechanism of action in thalidomide teratogenesis". Biochemical Pharmacology. 59 (12): 1489–99. doi:10.1016/S0006-2952(99)00388-3. PMID 10799645.
- Jeffrey AM (1985). "DNA modification by chemical carcinogens". Pharmacology & Therapeutics. 28 (2): 237–72. doi:10.1016/0163-7258(85)90013-0. PMID 3936066.
- Braña MF, Cacho M, Gradillas A, de Pascual-Teresa B, Ramos A (November 2001). "Intercalators as anticancer drugs". Current Pharmaceutical Design. 7 (17): 1745–80. doi:10.2174/1381612013397113. PMID 11562309.
- Venter JC, Adams MD, Myers EW, Li PW, Mural RJ, Sutton GG, et al. (February 2001). "The sequence of the human genome". Science. 291 (5507): 1304–51. Bibcode:2001Sci...291.1304V. doi:10.1126/science.1058040. PMID 11181995.
- Thanbichler M, Wang SC, Shapiro L (October 2005). "The bacterial nucleoid: a highly organized and dynamic structure". Journal of Cellular Biochemistry. 96 (3): 506–21. doi:10.1002/jcb.20519. PMID 15988757.
- Wolfsberg TG, McEntyre J, Schuler GD (February 2001). "Guide to the draft human genome". Nature. 409 (6822): 824–26. Bibcode:2001Natur.409..824W. doi:10.1038/35057000. PMID 11236998.
- Gregory TR (January 2005). "The C-value enigma in plants and animals: a review of parallels and an appeal for partnership". Annals of Botany. 95 (1): 133–46. doi:10.1093/aob/mci009. PMC 4246714. PMID 15596463.
- Birney E, Stamatoyannopoulos JA, Dutta A, Guigó R, Gingeras TR, Margulies EH, et al. (June 2007). "Identification and analysis of functional elements in 1% of the human genome by the ENCODE pilot project". Nature. 447 (7146): 799–816. Bibcode:2007Natur.447..799B. doi:10.1038/nature05874. PMC 2212820. PMID 17571346.
- Pidoux AL, Allshire RC (March 2005). "The role of heterochromatin in centromere function". Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 360 (1455): 569–79. doi:10.1098/rstb.2004.1611. PMC 1569473. PMID 15905142.
- Harrison PM, Hegyi H, Balasubramanian S, Luscombe NM, Bertone P, Echols N, Johnson T, Gerstein M (February 2002). "Molecular fossils in the human genome: identification and analysis of the pseudogenes in chromosomes 21 and 22". Genome Research. 12 (2): 272–80. doi:10.1101/gr.207102. PMC 155275. PMID 11827946.
- Harrison PM, Gerstein M (May 2002). "Studying genomes through the aeons: protein families, pseudogenes and proteome evolution". Journal of Molecular Biology. 318 (5): 1155–74. doi:10.1016/S0022-2836(02)00109-2. PMID 12083509.
- Albà M (2001). "Replicative DNA polymerases". Genome Biology. 2 (1): REVIEWS3002. doi:10.1186/gb-2001-2-1-reviews3002. PMC 150442. PMID 11178285.
- Tani K, Nasu M (2010). "Roles of Extracellular DNA in Bacterial Ecosystems". In Kikuchi Y, Rykova EY (eds.). Extracellular Nucleic Acids. Springer. pp. 25–38. ISBN 978-3-642-12616-1.
- Vlassov VV, Laktionov PP, Rykova EY (July 2007). "Extracellular nucleic acids". BioEssays. 29 (7): 654–67. doi:10.1002/bies.20604. PMID 17563084. S2CID 32463239.
- Finkel SE, Kolter R (November 2001). "DNA as a nutrient: novel role for bacterial competence gene homologs". Journal of Bacteriology. 183 (21): 6288–93. doi:10.1128/JB.183.21.6288-6293.2001. PMC 100116. PMID 11591672.
- Mulcahy H, Charron-Mazenod L, Lewenza S (November 2008). "Extracellular DNA chelates cations and induces antibiotic resistance in Pseudomonas aeruginosa biofilms". PLOS Pathogens. 4 (11): e1000213. doi:10.1371/journal.ppat.1000213. PMC 2581603. PMID 19023416.
- Berne C, Kysela DT, Brun YV (August 2010). "A bacterial extracellular DNA inhibits settling of motile progeny cells within a biofilm". Molecular Microbiology. 77 (4): 815–29. doi:10.1111/j.1365-2958.2010.07267.x. PMC 2962764. PMID 20598083.
- Whitchurch CB, Tolker-Nielsen T, Ragas PC, Mattick JS (February 2002). "Extracellular DNA required for bacterial biofilm formation". Science. 295 (5559): 1487. doi:10.1126/science.295.5559.1487. PMID 11859186.
- Hu W, Li L, Sharma S, Wang J, McHardy I, Lux R, Yang Z, He X, Gimzewski JK, Li Y, Shi W (2012). "DNA builds and strengthens the extracellular matrix in Myxococcus xanthus biofilms by interacting with exopolysaccharides". PLOS ONE. 7 (12): e51905. Bibcode:2012PLoSO...751905H. doi:10.1371/journal.pone.0051905. PMC 3530553. PMID 23300576.
- Hui L, Bianchi DW (February 2013). "Recent advances in the prenatal interrogation of the human fetal genome". Trends in Genetics. 29 (2): 84–91. doi:10.1016/j.tig.2012.10.013. PMC 4378900. PMID 23158400.
- Foote AD, Thomsen PF, Sveegaard S, Wahlberg M, Kielgast J, Kyhn LA, et al. (2012). "Investigating the potential use of environmental DNA (eDNA) for genetic monitoring of marine mammals". PLOS ONE. 7 (8): e41781. Bibcode:2012PLoSO...741781F. doi:10.1371/journal.pone.0041781. PMC 3430683. PMID 22952587.
- "Researchers Detect Land Animals Using DNA in Nearby Water Bodies".
- Sandman K, Pereira SL, Reeve JN (December 1998). "Diversity of prokaryotic chromosomal proteins and the origin of the nucleosome". Cellular and Molecular Life Sciences. 54 (12): 1350–64. doi:10.1007/s000180050259. PMID 9893710. S2CID 21101836.
- Dame RT (May 2005). "The role of nucleoid-associated proteins in the organization and compaction of bacterial chromatin". Molecular Microbiology. 56 (4): 858–70. doi:10.1111/j.1365-2958.2005.04598.x. PMID 15853876. S2CID 26965112.
- Luger K, Mäder AW, Richmond RK, Sargent DF, Richmond TJ (September 1997). "Crystal structure of the nucleosome core particle at 2.8 A resolution". Nature. 389 (6648): 251–60. Bibcode:1997Natur.389..251L. doi:10.1038/38444. PMID 9305837. S2CID 4328827.
- Jenuwein T, Allis CD (August 2001). "Translating the histone code" (PDF). Science. 293 (5532): 1074–80. doi:10.1126/science.1063127. PMID 11498575. S2CID 1883924. Archived (PDF) from the original on 8 August 2017.
- Ito T (2003). "Nucleosome assembly and remodeling". Current Topics in Microbiology and Immunology. 274: 1–22. doi:10.1007/978-3-642-55747-7_1. ISBN 978-3-540-44208-0. PMID 12596902.
- Thomas JO (August 2001). "HMG1 and 2: architectural DNA-binding proteins". Biochemical Society Transactions. 29 (Pt 4): 395–401. doi:10.1042/BST0290395. PMID 11497996.
- Grosschedl R, Giese K, Pagel J (March 1994). "HMG domain proteins: architectural elements in the assembly of nucleoprotein structures". Trends in Genetics. 10 (3): 94–100. doi:10.1016/0168-9525(94)90232-1. PMID 8178371.
- Iftode C, Daniely Y, Borowiec JA (1999). "Replication protein A (RPA): the eukaryotic SSB". Critical Reviews in Biochemistry and Molecular Biology. 34 (3): 141–80. doi:10.1080/10409239991209255. PMID 10473346.
- Myers LC, Kornberg RD (2000). "Mediator of transcriptional regulation". Annual Review of Biochemistry. 69: 729–49. doi:10.1146/annurev.biochem.69.1.729. PMID 10966474.
- Spiegelman BM, Heinrich R (October 2004). "Biological control through regulated transcriptional coactivators". Cell. 119 (2): 157–67. doi:10.1016/j.cell.2004.09.037. PMID 15479634.
- Li Z, Van Calcar S, Qu C, Cavenee WK, Zhang MQ, Ren B (July 2003). "A global transcriptional regulatory role for c-Myc in Burkitt's lymphoma cells". Proceedings of the National Academy of Sciences of the United States of America. 100 (14): 8164–69. Bibcode:2003PNAS..100.8164L. doi:10.1073/pnas.1332764100. PMC 166200. PMID 12808131.
- Bickle TA, Krüger DH (June 1993). "Biology of DNA restriction". Microbiological Reviews. 57 (2): 434–50. doi:10.1128/MMBR.57.2.434-450.1993. PMC 372918. PMID 8336674.
- Doherty AJ, Suh SW (November 2000). "Structural and mechanistic conservation in DNA ligases". Nucleic Acids Research. 28 (21): 4051–58. doi:10.1093/nar/28.21.4051. PMC 113121. PMID 11058099.
- Schoeffler AJ, Berger JM (December 2005). "Recent advances in understanding structure-function relationships in the type II topoisomerase mechanism". Biochemical Society Transactions. 33 (Pt 6): 1465–70. doi:10.1042/BST20051465. PMID 16246147.
- Tuteja N, Tuteja R (May 2004). "Unraveling DNA helicases. Motif, structure, mechanism and function" (PDF). European Journal of Biochemistry. 271 (10): 1849–63. doi:10.1111/j.1432-1033.2004.04094.x. PMID 15128295.
- Joyce CM, Steitz TA (November 1995). "Polymerase structures and function: variations on a theme?". Journal of Bacteriology. 177 (22): 6321–29. doi:10.1128/jb.177.22.6321-6329.1995. PMC 177480. PMID 7592405.
- Hubscher U, Maga G, Spadari S (2002). "Eukaryotic DNA polymerases" (PDF). Annual Review of Biochemistry. 71: 133–63. doi:10.1146/annurev.biochem.71.090501.150041. PMID 12045093. S2CID 26171993. Archived from the original (PDF) on 26 January 2021.
- Johnson A, O'Donnell M (2005). "Cellular DNA replicases: components and dynamics at the replication fork". Annual Review of Biochemistry. 74: 283–315. doi:10.1146/annurev.biochem.73.011303.073859. PMID 15952889.
- Tarrago-Litvak L, Andréola ML, Nevinsky GA, Sarih-Cottin L, Litvak S (May 1994). "The reverse transcriptase of HIV-1: from enzymology to therapeutic intervention". FASEB Journal. 8 (8): 497–503. doi:10.1096/fasebj.8.8.7514143. PMID 7514143. S2CID 39614573.
- Martinez E (December 2002). "Multi-protein complexes in eukaryotic gene transcription". Plant Molecular Biology. 50 (6): 925–47. doi:10.1023/A:1021258713850. PMID 12516863. S2CID 24946189.
- Cremer T, Cremer C (April 2001). "Chromosome territories, nuclear architecture and gene regulation in mammalian cells". Nature Reviews Genetics. 2 (4): 292–301. doi:10.1038/35066075. PMID 11283701. S2CID 8547149.
- Pál C, Papp B, Lercher MJ (May 2006). "An integrated view of protein evolution". Nature Reviews Genetics. 7 (5): 337–48. doi:10.1038/nrg1838. PMID 16619049. S2CID 23225873.
- O'Driscoll M, Jeggo PA (January 2006). "The role of double-strand break repair – insights from human genetics". Nature Reviews Genetics. 7 (1): 45–54. doi:10.1038/nrg1746. PMID 16369571. S2CID 7779574.
- Vispé S, Defais M (October 1997). "Mammalian Rad51 protein: a RecA homologue with pleiotropic functions". Biochimie. 79 (9–10): 587–92. doi:10.1016/S0300-9084(97)82007-X. PMID 9466696.
- Neale MJ, Keeney S (July 2006). "Clarifying the mechanics of DNA strand exchange in meiotic recombination". Nature. 442 (7099): 153–58. Bibcode:2006Natur.442..153N. doi:10.1038/nature04885. PMC 5607947. PMID 16838012.
- Dickman MJ, Ingleston SM, Sedelnikova SE, Rafferty JB, Lloyd RG, Grasby JA, Hornby DP (November 2002). "The RuvABC resolvasome". European Journal of Biochemistry. 269 (22): 5492–501. doi:10.1046/j.1432-1033.2002.03250.x. PMID 12423347. S2CID 39505263.
- Joyce GF (July 2002). "The antiquity of RNA-based evolution". Nature. 418 (6894): 214–21. Bibcode:2002Natur.418..214J. doi:10.1038/418214a. PMID 12110897. S2CID 4331004.
- Orgel LE (2004). "Prebiotic chemistry and the origin of the RNA world". Critical Reviews in Biochemistry and Molecular Biology. 39 (2): 99–123. CiteSeerX 10.1.1.537.7679. doi:10.1080/10409230490460765. PMID 15217990.
- Davenport RJ (May 2001). "Ribozymes. Making copies in the RNA world". Science. 292 (5520): 1278a–1278. doi:10.1126/science.292.5520.1278a. PMID 11360970. S2CID 85976762.
- Szathmáry E (April 1992). "What is the optimum size for the genetic alphabet?". Proceedings of the National Academy of Sciences of the United States of America. 89 (7): 2614–18. Bibcode:1992PNAS...89.2614S. doi:10.1073/pnas.89.7.2614. PMC 48712. PMID 1372984.
- Lindahl T (April 1993). "Instability and decay of the primary structure of DNA". Nature. 362 (6422): 709–15. Bibcode:1993Natur.362..709L. doi:10.1038/362709a0. PMID 8469282. S2CID 4283694.
- Vreeland RH, Rosenzweig WD, Powers DW (October 2000). "Isolation of a 250 million-year-old halotolerant bacterium from a primary salt crystal". Nature. 407 (6806): 897–900. Bibcode:2000Natur.407..897V. doi:10.1038/35038060. PMID 11057666. S2CID 9879073.
- Hebsgaard MB, Phillips MJ, Willerslev E (May 2005). "Geologically ancient DNA: fact or artefact?". Trends in Microbiology. 13 (5): 212–20. doi:10.1016/j.tim.2005.03.010. PMID 15866038.
- Nickle DC, Learn GH, Rain MW, Mullins JI, Mittler JE (January 2002). "Curiously modern DNA for a "250 million-year-old" bacterium". Journal of Molecular Evolution. 54 (1): 134–37. Bibcode:2002JMolE..54..134N. doi:10.1007/s00239-001-0025-x. PMID 11734907. S2CID 24740859.
- Callahan MP, Smith KE, Cleaves HJ, Ruzicka J, Stern JC, Glavin DP, House CH, Dworkin JP (August 2011). "Carbonaceous meteorites contain a wide range of extraterrestrial nucleobases". Proceedings of the National Academy of Sciences of the United States of America. 108 (34): 13995–98. Bibcode:2011PNAS..10813995C. doi:10.1073/pnas.1106493108. PMC 3161613. PMID 21836052.
- Steigerwald J (8 August 2011). "NASA Researchers: DNA Building Blocks Can Be Made in Space". NASA. Archived from the original on 23 June 2015. Retrieved 10 August 2011.
- ScienceDaily Staff (9 August 2011). "DNA Building Blocks Can Be Made in Space, NASA Evidence Suggests". ScienceDaily. Archived from the original on 5 September 2011. Retrieved 9 August 2011.
- Marlaire R (3 March 2015). "NASA Ames Reproduces the Building Blocks of Life in Laboratory". NASA. Archived from the original on 5 March 2015. Retrieved 5 March 2015.
- Hunt, Katie (17 February 2021). "World's oldest DNA sequenced from a mammoth that lived more than a million years ago". CNN News. Retrieved 17 February 2021.
- Callaway, Ewen (17 February 2021). "Million-year-old mammoth genomes shatter record for oldest ancient DNA - Permafrost-preserved teeth, up to 1.6 million years old, identify a new kind of mammoth in Siberia". Nature. 590 (7847): 537–538. doi:10.1038/d41586-021-00436-x. PMID 33597786. Retrieved 17 February 2021.
- Goff SP, Berg P (December 1976). "Construction of hybrid viruses containing SV40 and lambda phage DNA segments and their propagation in cultured monkey cells". Cell. 9 (4 PT 2): 695–705. doi:10.1016/0092-8674(76)90133-1. PMID 189942. S2CID 41788896.
- Houdebine LM (2007). "Transgenic animal models in biomedical research". Target Discovery and Validation Reviews and Protocols. Methods in Molecular Biology. 360. pp. 163–202. doi:10.1385/1-59745-165-7:163. ISBN 978-1-59745-165-9. PMID 17172731.
- Daniell H, Dhingra A (April 2002). "Multigene engineering: dawn of an exciting new era in biotechnology". Current Opinion in Biotechnology. 13 (2): 136–41. doi:10.1016/S0958-1669(02)00297-5. PMC 3481857. PMID 11950565.
- Job D (November 2002). "Plant biotechnology in agriculture". Biochimie. 84 (11): 1105–10. doi:10.1016/S0300-9084(02)00013-5. PMID 12595138.
- Curtis C, Hereward J (29 August 2017). "From the crime scene to the courtroom: the journey of a DNA sample". The Conversation. Archived from the original on 22 October 2017. Retrieved 22 October 2017.
- Collins A, Morton NE (June 1994). "Likelihood ratios for DNA identification". Proceedings of the National Academy of Sciences of the United States of America. 91 (13): 6007–11. Bibcode:1994PNAS...91.6007C. doi:10.1073/pnas.91.13.6007. PMC 44126. PMID 8016106.
- Weir BS, Triggs CM, Starling L, Stowell LI, Walsh KA, Buckleton J (March 1997). "Interpreting DNA mixtures" (PDF). Journal of Forensic Sciences. 42 (2): 213–22. doi:10.1520/JFS14100J. PMID 9068179. S2CID 14511630.
- Jeffreys AJ, Wilson V, Thein SL (1985). "Individual-specific 'fingerprints' of human DNA". Nature. 316 (6023): 76–79. Bibcode:1985Natur.316...76J. doi:10.1038/316076a0. PMID 2989708. S2CID 4229883.
- Colin Pitchfork – first murder conviction on DNA evidence also clears the prime suspect. Forensic Science Service. Accessed 23 December 2006.
- "DNA Identification in Mass Fatality Incidents". National Institute of Justice. September 2006. Archived from the original on 12 November 2006.
- "Paternity Blood Tests That Work Early in a Pregnancy" New York Times June 20, 2012 Archived 24 June 2017 at the Wayback Machine
- Breaker RR, Joyce GF (December 1994). "A DNA enzyme that cleaves RNA". Chemistry & Biology. 1 (4): 223–29. doi:10.1016/1074-5521(94)90014-0. PMID 9383394.
- Chandra M, Sachdeva A, Silverman SK (October 2009). "DNA-catalyzed sequence-specific hydrolysis of DNA". Nature Chemical Biology. 5 (10): 718–20. doi:10.1038/nchembio.201. PMC 2746877. PMID 19684594.
- Carmi N, Shultz LA, Breaker RR (December 1996). "In vitro selection of self-cleaving DNAs". Chemistry & Biology. 3 (12): 1039–46. doi:10.1016/S1074-5521(96)90170-2. PMID 9000012.
- Torabi SF, Wu P, McGhee CE, Chen L, Hwang K, Zheng N, Cheng J, Lu Y (May 2015). "In vitro selection of a sodium-specific DNAzyme and its application in intracellular sensing". Proceedings of the National Academy of Sciences of the United States of America. 112 (19): 5903–08. Bibcode:2015PNAS..112.5903T. doi:10.1073/pnas.1420361112. PMC 4434688. PMID 25918425.
- Baldi P, Brunak S (2001). Bioinformatics: The Machine Learning Approach. MIT Press. ISBN 978-0-262-02506-5. OCLC 45951728.
- Gusfield D (15 January 1997). Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press. ISBN 978-0-521-58519-4.
- Sjölander K (January 2004). "Phylogenomic inference of protein molecular function: advances and challenges". Bioinformatics. 20 (2): 170–79. CiteSeerX 10.1.1.412.943. doi:10.1093/bioinformatics/bth021. PMID 14734307.
- Mount DM (2004). Bioinformatics: Sequence and Genome Analysis (2nd ed.). Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press. ISBN 0-87969-712-1. OCLC 55106399.
- Rothemund PW (March 2006). "Folding DNA to create nanoscale shapes and patterns" (PDF). Nature. 440 (7082): 297–302. Bibcode:2006Natur.440..297R. doi:10.1038/nature04586. PMID 16541064. S2CID 4316391.
- Andersen ES, Dong M, Nielsen MM, Jahn K, Subramani R, Mamdouh W, Golas MM, Sander B, Stark H, Oliveira CL, Pedersen JS, Birkedal V, Besenbacher F, Gothelf KV, Kjems J (May 2009). "Self-assembly of a nanoscale DNA box with a controllable lid". Nature. 459 (7243): 73–76. Bibcode:2009Natur.459...73A. doi:10.1038/nature07971. hdl:11858/00-001M-0000-0010-9362-B. PMID 19424153. S2CID 4430815.
- Ishitsuka Y, Ha T (May 2009). "DNA nanotechnology: a nanomachine goes live". Nature Nanotechnology. 4 (5): 281–82. Bibcode:2009NatNa...4..281I. doi:10.1038/nnano.2009.101. PMID 19421208.
- Aldaye FA, Palmer AL, Sleiman HF (September 2008). "Assembling materials with DNA as the guide". Science. 321 (5897): 1795–99. Bibcode:2008Sci...321.1795A. doi:10.1126/science.1154533. PMID 18818351. S2CID 2755388.
- Wray GA (2002). "Dating branches on the tree of life using DNA". Genome Biology. 3 (1): REVIEWS0001. doi:10.1186/gb-2001-3-1-reviews0001. PMC 150454. PMID 11806830.
- Panda D, Molla KA, Baig MJ, Swain A, Behera D, Dash M (May 2018). "DNA as a digital information storage device: hope or hype?". 3 Biotech. 8 (5): 239. doi:10.1007/s13205-018-1246-7. PMC 5935598. PMID 29744271.
- Akram F, Haq IU, Ali H, Laghari AT (October 2018). "Trends to store digital data in DNA: an overview". Molecular Biology Reports. 45 (5): 1479–1490. doi:10.1007/s11033-018-4280-y. PMID 30073589. S2CID 51905843.
- Miescher F (1871). "Ueber die chemische Zusammensetzung der Eiterzellen" [On the chemical composition of pus cells]. Medicinisch-chemische Untersuchungen (in German). 4: 441–60.
Ich habe mich daher später mit meinen Versuchen an die ganzen Kerne gehalten, die Trennung der Körper, die ich einstweilen ohne weiteres Präjudiz als lösliches und unlösliches Nuclein bezeichnen will, einem günstigeren Material überlassend. (Therefore, in my experiments I subsequently limited myself to the whole nucleus, leaving to a more favorable material the separation of the substances, that for the present, without further prejudice, I will designate as soluble and insoluble nuclear material ("Nuclein").)
- Dahm R (January 2008). "Discovering DNA: Friedrich Miescher and the early years of nucleic acid research". Human Genetics. 122 (6): 565–81. doi:10.1007/s00439-007-0433-0. PMID 17901982. S2CID 915930.
- Kossel A (1879). "Ueber Nucleïn der Hefe" [On nuclein in yeast]. Zeitschrift für physiologische Chemie (in German). 3: 284–91.
- Kossel A (1880). "Ueber Nucleïn der Hefe II" [On nuclein in yeast, Part 2]. Zeitschrift für physiologische Chemie (in German). 4: 290–95.
- Kossel A (1881). "Ueber die Verbreitung des Hypoxanthins im Thier- und Pflanzenreich" [On the distribution of hypoxanthins in the animal and plant kingdoms]. Zeitschrift für physiologische Chemie (in German). 5: 267–71.
- Kossel A (1881). Trübne KJ (ed.). Untersuchungen über die Nucleine und ihre Spaltungsprodukte [Investigations into nuclein and its cleavage products] (in German). Strassburg, Germany. p. 19.
- Kossel A (1882). "Ueber Xanthin und Hypoxanthin" [On xanthin and hypoxanthin]. Zeitschrift für physiologische Chemie. 6: 422–31.
- Albrect Kossel (1883) "Zur Chemie des Zellkerns" Archived 17 November 2017 at the Wayback Machine (On the chemistry of the cell nucleus), Zeitschrift für physiologische Chemie, 7 : 7–22.
- Kossel A (1886). "Weitere Beiträge zur Chemie des Zellkerns" [Further contributions to the chemistry of the cell nucleus]. Zeitschrift für Physiologische Chemie (in German). 10: 248–64.
On p. 264, Kossel remarked presciently: Der Erforschung der quantitativen Verhältnisse der vier stickstoffreichen Basen, der Abhängigkeit ihrer Menge von den physiologischen Zuständen der Zelle, verspricht wichtige Aufschlüsse über die elementaren physiologisch-chemischen Vorgänge. (The study of the quantitative relations of the four nitrogenous bases—[and] of the dependence of their quantity on the physiological states of the cell—promises important insights into the fundamental physiological-chemical processes.)
- Jones ME (September 1953). "Albrecht Kossel, a biographical sketch". The Yale Journal of Biology and Medicine. 26 (1): 80–97. PMC 2599350. PMID 13103145.
- Levene PA, Jacobs WA (1909). "Über Inosinsäure". Berichte der Deutschen Chemischen Gesellschaft (in German). 42: 1198–203. doi:10.1002/cber.190904201196.
- Levene PA, Jacobs WA (1909). "Über die Hefe-Nucleinsäure". Berichte der Deutschen Chemischen Gesellschaft (in German). 42 (2): 2474–78. doi:10.1002/cber.190904202148.
- Levene P (1919). "The structure of yeast nucleic acid". J Biol Chem. 40 (2): 415–24. doi:10.1016/S0021-9258(18)87254-4.
- Cohen JS, Portugal FH (1974). "The search for the chemical structure of DNA" (PDF). Connecticut Medicine. 38 (10): 551–52, 554–57. PMID 4609088.
- Koltsov proposed that a cell's genetic information was encoded in a long chain of amino acids. See:
- Кольцов, Н. К. (12 December 1927). Физико-химические основы морфологии [The physical-chemical basis of morphology] (Speech). 3rd All-Union Meeting of Zoologist, Anatomists, and Histologists (in Russian). Leningrad, U.S.S.R.
- Reprinted in: Кольцов, Н. К. (1928). "Физико-химические основы морфологии" [The physical-chemical basis of morphology]. Успехи экспериментальной биологии (Advances in Experimental Biology) series B (in Russian). 7 (1): ?.
- Reprinted in German as: Koltzoff NK (1928). "Physikalisch-chemische Grundlagen der Morphologie" [The physical-chemical basis of morphology]. Biologisches Zentralblatt (in German). 48 (6): 345–69.
- In 1934, Koltsov contended that the proteins that contain a cell's genetic information replicate. See: Koltzoff N (October 1934). "The structure of the chromosomes in the salivary glands of Drosophila". Science. 80 (2075): 312–13. Bibcode:1934Sci....80..312K. doi:10.1126/science.80.2075.312. PMID 17769043.
From page 313: "I think that the size of the chromosomes in the salivary glands [of Drosophila] is determined through the multiplication of genonemes. By this term I designate the axial thread of the chromosome, in which the geneticists locate the linear combination of genes; … In the normal chromosome there is usually only one genoneme; before cell-division this genoneme has become divided into two strands."
- Soyfer VN (September 2001). "The consequences of political dictatorship for Russian science". Nature Reviews Genetics. 2 (9): 723–29. doi:10.1038/35088598. PMID 11533721. S2CID 46277758.
- Griffith F (January 1928). "The Significance of Pneumococcal Types". The Journal of Hygiene. 27 (2): 113–59. doi:10.1017/S0022172400031879. PMC 2167760. PMID 20474956.
- Lorenz MG, Wackernagel W (September 1994). "Bacterial gene transfer by natural genetic transformation in the environment". Microbiological Reviews. 58 (3): 563–602. doi:10.1128/MMBR.58.3.563-602.1994. PMC 372978. PMID 7968924.
- Brachet J (1933). "Recherches sur la synthese de l'acide thymonucleique pendant le developpement de l'oeuf d'Oursin". Archives de Biologie (in Italian). 44: 519–76.
- Burian R (1994). "Jean Brachet's Cytochemical Embryology: Connections with the Renovation of Biology in France?" (PDF). In Debru C, Gayon J, Picard JF (eds.). Les sciences biologiques et médicales en France 1920–1950. Cahiers pour I'histoire de la recherche. 2. Paris: CNRS Editions. pp. 207–20.
- Astbury WT, Bell FO (1938). "Some recent developments in the X-ray study of proteins and related structures" (PDF). Cold Spring Harbor Symposia on Quantitative Biology. 6: 109–21. doi:10.1101/sqb.1938.006.01.013. Archived from the original (PDF) on 14 July 2014.
- Astbury WT (1947). "X-ray studies of nucleic acids". Symposia of the Society for Experimental Biology (1): 66–76. PMID 20257017. Archived from the original on 5 July 2014.
- Avery OT, Macleod CM, McCarty M (February 1944). "Studies on the Chemical Nature of the Substance Inducing Transformation of Pneumococcal Types: Induction of Transformation by a Desoxyribonucleic Acid Fraction Isolated from Pneumococcus Type III". The Journal of Experimental Medicine. 79 (2): 137–58. doi:10.1084/jem.79.2.137. PMC 2135445. PMID 19871359.
- Hershey AD, Chase M (May 1952). "Independent functions of viral protein and nucleic acid in growth of bacteriophage". The Journal of General Physiology. 36 (1): 39–56. doi:10.1085/jgp.36.1.39. PMC 2147348. PMID 12981234.
- The B-DNA X-ray pattern on the right of this linked image Archived 25 May 2012 at archive.today
- Schwartz J (2008). In pursuit of the gene : from Darwin to DNA. Cambridge, Mass.: Harvard University Press.
- Pauling L, Corey RB (February 1953). "A Proposed Structure For The Nucleic Acids". Proceedings of the National Academy of Sciences of the United States of America. 39 (2): 84–97. Bibcode:1953PNAS...39...84P. doi:10.1073/pnas.39.2.84. PMC 1063734. PMID 16578429.
- Regis E (2009). What Is Life?: investigating the nature of life in the age of synthetic biology. Oxford: Oxford University Press. p. 52. ISBN 978-0-19-538341-6.
- "Double Helix of DNA: 50 Years". Nature Archives. Archived from the original on 5 April 2015.
- "Original X-ray diffraction image". Oregon State Library. Archived from the original on 30 January 2009. Retrieved 6 February 2011.
- "The Nobel Prize in Physiology or Medicine 1962". Nobelprize.org.
- Maddox B (January 2003). "The double helix and the 'wronged heroine'" (PDF). Nature. 421 (6921): 407–08. Bibcode:2003Natur.421..407M. doi:10.1038/nature01399. PMID 12540909. S2CID 4428347. Archived (PDF) from the original on 17 October 2016.
- Crick F (1955). A Note for the RNA Tie Club (PDF) (Speech). Cambridge, England. Archived from the original (PDF) on 1 October 2008.
- Meselson M, Stahl FW (July 1958). "The Replication of DNA in Escherichia Coli". Proceedings of the National Academy of Sciences of the United States of America. 44 (7): 671–82. Bibcode:1958PNAS...44..671M. doi:10.1073/pnas.44.7.671. PMC 528642. PMID 16590258.
- "The Nobel Prize in Physiology or Medicine 1968". Nobelprize.org.
- Pray L (2008). "Discovery of DNA structure and function: Watson and Crick". Nature Education. 1 (1): 100.
- Berry A, Watson J (2003). DNA: the secret of life. New York: Alfred A. Knopf. ISBN 0-375-41546-7.
- Calladine CR, Drew HR, Luisi BF, Travers AA (2003). Understanding DNA: the molecule & how it works. Amsterdam: Elsevier Academic Press. ISBN 0-12-155089-3.
- Carina D, Clayton J (2003). 50 years of DNA. Basingstoke: Palgrave Macmillan. ISBN 1-4039-1479-6.
- Judson HF (1979). The Eighth Day of Creation: Makers of the Revolution in Biology (2nd ed.). Cold Spring Harbor Laboratory Press. ISBN 0-671-22540-5.
- Olby RC (1994). The path to the double helix: the discovery of DNA. New York: Dover Publications. ISBN 0-486-68117-3. First published in October 1974 by MacMillan, with foreword by Francis Crick; the definitive DNA textbook, revised in 1994 with a nine-page postscript.
- Olby R (January 2003). "Quiet debut for the double helix". Nature. 421 (6921): 402–05. Bibcode:2003Natur.421..402O. doi:10.1038/nature01397. PMID 12540907.
- Olby RC (2009). Francis Crick: A Biography. Plainview, N.Y: Cold Spring Harbor Laboratory Press. ISBN 978-0-87969-798-3.
- Micklas D (2003). DNA Science: A First Course. Cold Spring Harbor Press. ISBN 978-0-87969-636-8.
- Ridley M (2006). Francis Crick: discoverer of the genetic code. Ashland, OH: Eminent Lives, Atlas Books. ISBN 0-06-082333-X.
- Rosenfeld I (2010). DNA: A Graphic Guide to the Molecule that Shook the World. Columbia University Press. ISBN 978-0-231-14271-7.
- Schultz M, Cannon Z (2009). The Stuff of Life: A Graphic Guide to Genetics and DNA. Hill and Wang. ISBN 978-0-8090-8947-5.
- Stent GS, Watson J (1980). The Double Helix: A Personal Account of the Discovery of the Structure of DNA. New York: Norton. ISBN 0-393-95075-1.
- Watson J (2004). DNA: The Secret of Life. Random House. ISBN 978-0-09-945184-6.
- Wilkins M (2003). The third man of the double helix the autobiography of Maurice Wilkins. Cambridge, England: University Press. ISBN 0-19-860665-6.
19 | Describe and define bookkeeping and accounting
Explain the general purposes and functions of accounting
Explain the differences between management and financial accounting
Describe the main elements of financial accounting information – assets, liabilities, revenue and expenses
Identify the main financial statements and their purposes
Apply the essential numerical skills required for bookkeeping and accounting
Explain the relationship between the accounting equation and double-entry bookkeeping
Record transactions in the appropriate ledger accounts using the double-entry bookkeeping system
Balance off ledger accounts at the end of an accounting period
Produce a trial balance, balance sheet and a profit and loss account
Discover how money flows in personal and business environments and develop the skills to manage your finances with this online accounting and bookkeeping course from the Open University.
You’ll master common terms, basic maths and gain the ability to put your knowledge into practice. After this course you’ll be able to perfectly balance your books and understand how concepts of profit and loss lead to revenue or debt.
Develop ideas through conversation
This course is not facilitated. Learners are encouraged to support one another, share personal experiences, and see new perspectives.
The reasons and objectives of management and financial accounting, including stewardship, control and accountability
Key terminology including income and expenses, assets and liabilities, profit and loss statements and the balance sheet
This accounting course is for anyone wanting an introduction to bookkeeping and financial accounting. It might be of particular interest to small business owners, people who are self-employed, or those wanting to better manage their own finances. You don't need any previous experience. | https://www.mooc.cn/course/21179.html | 21
27 | The process of predicting future events based on historical (past) data is technically called forecasting. Although forecasts are critically important, they are never as accurate as managers would like.
2. Evaluating the forecast accuracy:
Although forecasts are critically important, they are never perfectly accurate, so we have to measure how accurate the forecast values are. Mean Absolute Deviation (MAD), Mean Absolute Percentage Error (MAPE), and Mean Square Error (MSE) are used to test the accuracy of the forecast values, and they are calculated using the following relations, where the error in period t is the actual value yt minus the forecast value Ft:
MAD = (1/n) × Σ |yt – Ft|
MSE = (1/n) × Σ (yt – Ft)²
MAPE = (100/n) × Σ |yt – Ft| / yt
where
n = Number of forecast periods
MSE = Mean Square Error
MAD = Mean Absolute Deviation
MAPE = Mean Absolute Percentage Error.
A forecasting model with smaller values of MAD, MSE and MAPE is better than a model with larger values, i.e. the smaller the MAD, MSE and MAPE, the more accurate the forecast model. The MSE is used to measure the accuracy of forecast values more frequently than the others.
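As an illustration, here is a minimal Python sketch (not part of the original notes) that computes MAD, MSE and MAPE for a small set of made-up actual and forecast values; the data and variable names are assumptions for demonstration only.

    # Hedged example: compute MAD, MSE and MAPE for hypothetical data.
    actual   = [120, 130, 125, 140, 150, 145]   # hypothetical observed values y_t
    forecast = [118, 128, 130, 138, 147, 150]   # hypothetical forecast values F_t

    n = len(actual)
    errors = [y - f for y, f in zip(actual, forecast)]

    mad = sum(abs(e) for e in errors) / n                              # Mean Absolute Deviation
    mse = sum(e ** 2 for e in errors) / n                              # Mean Square Error
    mape = 100 * sum(abs(e) / y for e, y in zip(errors, actual)) / n   # Mean Absolute Percentage Error

    print(f"MAD = {mad:.2f}, MSE = {mse:.2f}, MAPE = {mape:.2f}%")

Whichever candidate model produces the smaller values of these measures on the same data would be preferred.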
Note: in forecasting, the estimated or predicted value is also denoted by Ft, i.e. Ft is the forecast value for period t.
Note: If the question separates the warm-up and forecasting samples, then the following points should be considered while making the forecast.
Warm-up and forecasting sample:
Before using a forecasting model to forecast future events, first divide the given time series data into two parts. The first part of the data is used to fit the model and is called the warm-up sample. The second part is used to test the accuracy of the model and is called the forecasting sample. There are no statistical rules for dividing the data into warm-up and forecasting samples, but as far as possible, keep at least six data points, or two complete seasons of seasonal data, in the warm-up sample. If there are fewer data points, it is not necessary to separate the time series into warm-up and forecasting samples.
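A minimal sketch of such a split in Python, assuming a short hypothetical series; the six-point warm-up length simply follows the rule of thumb above.

    # Hedged example: split a time series into warm-up and forecasting samples.
    series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]   # hypothetical data

    warmup_size = 6                      # keep at least six points in the warm-up sample
    warmup = series[:warmup_size]        # used to fit the model
    holdout = series[warmup_size:]       # used to test forecast accuracy

    print("warm-up sample:", warmup)
    print("forecasting sample:", holdout)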
3. Forecasting models:
The following mathematical models are used to forecast the future value.
i) Naïve Model:
It is the simplest model of forecasting. In this model, the forecast value for any period (t+1) is the actual value of the previous period (t). The general mathematical model is
Ft+1 = yt
where
Ft+1 = forecast value at time (t+1)
yt = observed data (actual data) at time 't'
By this naïve model, we can forecast the future value only for one period ahead.
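For instance, a tiny Python sketch of the naïve forecast (hypothetical numbers, not from the notes):

    # Hedged example: naive forecast — the next period's forecast is this period's actual value.
    history = [200, 215, 209, 230]      # hypothetical observed values y1..y4
    naive_forecast = history[-1]        # F5 = y4
    print("Naive forecast for period 5:", naive_forecast)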
ii) Moving Average Method or Simple Moving Average Method:
In this method, we can use 2-period moving average or 3-period moving average or 4-period moving average or 5-period moving average, or any convenient period moving average method.
Here the forecast value is the mean of the last n data points. For example:
• For 3-period moving average:
The forecast value of the fourth period is the mean of the previous three periods' observed values, i.e. F4 = (y1 + y2 + y3) / 3
The forecast value of the fifth period is the mean of the previous three periods' observed values, i.e. F5 = (y2 + y3 + y4) / 3
• For 5-period moving average method:
The forecast value of the sixth period is the mean of the previous five periods' observed values, i.e. F6 = (y1 + y2 + y3 + y4 + y5) / 5
The forecast value of the seventh period is the mean of the previous five periods' observed values, i.e. F7 = (y2 + y3 + y4 + y5 + y6) / 5
In a similar way, we can also use any convenient period moving average method.
By this moving average method, we can forecast the future value only for one period ahead.
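A short Python sketch of the simple moving-average forecast, again with made-up numbers; the window sizes are only examples:

    # Hedged example: k-period simple moving average forecast for the next period.
    def moving_average_forecast(history, k):
        # Forecast the next period as the mean of the last k observations.
        if len(history) < k:
            raise ValueError("need at least k observations")
        return sum(history[-k:]) / k

    data = [42, 40, 43, 41, 39, 45, 47]   # hypothetical observed values
    print("3-period MA forecast:", moving_average_forecast(data, 3))
    print("5-period MA forecast:", moving_average_forecast(data, 5))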
iii) Trend fitting model:
This model has already been discussed under time series analysis; it is also known as the regression model or linear model.
In forecasting, when the question specifies warm-up and forecasting samples, only the warm-up sample is used to fit the linear model and the forecasting sample is used to test the accuracy of the model. When the question does not mention warm-up and forecasting samples, use all of the data to fit the model.
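As a sketch of trend fitting, the example below fits a straight line y = a + b·t to a hypothetical warm-up sample by ordinary least squares and projects it forward; the data are invented and only the Python standard library is used.

    # Hedged example: fit a linear trend y = a + b*t by least squares and forecast ahead.
    warmup = [23, 25, 28, 27, 31, 33]            # hypothetical warm-up sample
    t_vals = list(range(1, len(warmup) + 1))     # time index t = 1, 2, ..., n

    n = len(warmup)
    mean_t = sum(t_vals) / n
    mean_y = sum(warmup) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(t_vals, warmup)) / \
        sum((t - mean_t) ** 2 for t in t_vals)   # slope
    a = mean_y - b * mean_t                      # intercept

    for t in range(n + 1, n + 4):                # forecast the next three periods
        print(f"Forecast for period {t}: {a + b * t:.2f}")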
iv) Simple exponential smoothing method:
In this method, the new forecast is equal to the old forecast plus a fraction of the error. The fraction α (the Greek letter alpha) is called the smoothing parameter and its value lies between 0 and 1. In mathematical form, the model is
Ft+1 = Ft + α (yt – Ft)
where
Ft+1 = forecast value at time (t+1)
Ft = forecast value (estimated value) at time 't'
yt = observed value at time 't'
α = smoothing parameter, whose value lies between 0 and 1.
Note: To start this method, the forecast for the first period is taken as the mean of the warm-up sample; alternatively, we may take the first period's actual value as the forecast for the first period.
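The following Python sketch applies the smoothing formula above to a hypothetical series, seeding the first forecast with the first actual value (one of the two starting conventions noted above); alpha = 0.3 is an arbitrary illustrative choice.

    # Hedged example: simple exponential smoothing, F(t+1) = F(t) + alpha * (y(t) - F(t)).
    def exponential_smoothing(series, alpha=0.3):
        forecasts = [series[0]]                  # seed: F1 = y1
        for y in series[:-1]:                    # each new forecast uses the previous actual value
            f_prev = forecasts[-1]
            forecasts.append(f_prev + alpha * (y - f_prev))
        return forecasts                         # forecasts[t-1] is F(t), aligned with the series

    data = [50, 52, 47, 51, 49, 55]              # hypothetical observed values
    print(exponential_smoothing(data, alpha=0.3))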
| https://bcisnotes.com/fourthsemester/forecasting/ | 21
36 | The Federal Reserve of the United States government is one of the most important financial institutions in the world. It is the central bank of the US and determines much of the country's economic policy. If you're wondering how, exactly, the Fed decides on that policy, then you're in the right place.
Today we'll take a closer look at the central bank and explain how and why it directs the economy and acts as an arbiter of financial planning for the entire country.
The Federal Reserve was created by Congress in 1913 and has three main goals: to promote maximum employment, to help keep prices stable, and to moderate long-term interest rates. The board of governors that directs the Fed is made up of seven members, each appointed by the president. Meanwhile, the twelve regional Federal Reserve Banks handle the oversight of banks across the country. Member banks in the US are bound by law to hold stock in the Federal Reserve Bank of their region.
The Federal Open Market Committee (FOMC) is the body responsible for the majority of the economic policy enacted by the Fed. It is composed of the board of governors and the presidents of the regional Federal Reserve Banks, although only five of the twelve Reserve Bank presidents vote at any one time.
The Federal Reserve itself doesn't receive funding from the US government and is instead funded by the earnings of the Federal Reserve Banks. Decisions made by the Fed don't need to be approved by Congress, the president, or any other government agency, although the Fed does report to Congress. In a sense, the Fed is an independent institution that works within the government.
Relationship to the Treasury
The Federal Reserve is unusual in that it is one of the few central banks in the world that doesn't print its own currency. The United States Treasury is responsible for producing the nation's currency, a function carried out under the money powers that the Constitution grants to Congress.
The Treasury has a checking account with the Federal Reserve, which acts as the main account through which the government pays its bills. The Treasury, in turn, creates the physical bills and coins that comprise US currency and sells this money to the Federal Reserve.
The Federal Reserve banks themselves are also unusual in that they are legally in a gray area, operating as something between a private company and a public institution. This is part of why the American central bank can seem so strange to other countries, as it is unique among the world’s financial systems.
So, while the Federal Reserve is part of the government and helps shape economic policy, it is independent enough to be considered its own entity, separate from the government that relies on its services. | https://theredreel.com/the-federal-reserve-directs-the-economy-how-do-they-decide-on-policy/ | 21
43 | The central problems of an economy comprise the basic difficulties faced by every economy.
The Content covered in this article:
What are the Central Problems of an Economy:
An economy is a system by which people of an area earn their living. Thus, Every economy whether rich or poor, developed or under-developed must face some central problems. These are:
- What to produce?
- How to produce?
- For whom to produce?
What to Produce:
This problem has the following dimensions:
- What goods are to be produced?
- In what quantity goods are to be produced?
What goods are to be produced:
In this problem, the economy decides the type of goods to be produced. Broadly, the goods can be classified as:
- Capital Goods
- Consumer Goods
Production of both capital and consumer goods is essential for the economy. Capital Goods are goods that help in further production and future growth. For instance, plant and machinery. Consumer Goods are the goods needed for consumption.
If the limited resources available in the economy are used mainly for the production of consumer goods, the present generation will enjoy a better living standard, but the lack of capital goods will reduce future growth. On the other hand, if the limited resources are largely used for the production of capital goods, future growth will be high, but the lack of consumer goods will lower the standard of living of the present generation. Hence, this is called the "problem of choice" or the "problem of allocation of limited resources" to different uses.
In what quantity goods are to be produced:
In this, an economy has to decide how many goods are to be produced, whether consumer or capital goods.
As we know, resources are limited in any economy. If the resources are used for producing more consumer goods, fewer capital goods can be produced. Likewise, if the resources are used for producing more capital goods, fewer consumer goods can be produced.
Here, it is important to note that the quantity of consumer goods given up is the cost of producing more capital goods, and likewise the quantity of capital goods given up is the cost of producing more consumer goods. In economics, this loss is called the opportunity cost: the value of the next best alternative forgone when resources are shifted from one use to another. For example, if shifting resources to produce 100 more machines means producing 500 fewer shirts, the opportunity cost of those machines is the 500 shirts forgone.
Therefore, in the problem of "what to produce", an economy has to decide the type and the quantity of goods to be produced.
How to produce:
“How to produce” refers to the methods or technique of production. Broadly, the techniques of production can be classified as:
- Labour-intensive technique
- Capital-intensive technique
Here, the labour-intensive technique implies more use of labour than capital in production, whereas the capital-intensive technique implies more use of capital (i.e. machines) than labour.
The capital-intensive technique provides efficiency, resulting in more growth. On the other hand, the labour-intensive technique promotes employment in the economy. Hence, the economy has to make a choice between these two techniques, which becomes a problem: the labour-intensive technique helps reduce unemployment, while the capital-intensive technique accelerates GDP growth.
Therefore, in “How to produce” an economy decides the methods or technique of production.
For whom to produce:
It refers to the target consumers in the economy. As we are aware, resources are limited, so an economy cannot produce goods for all sections of society. Broadly, every economy has two sections in society:
- Rich section
- Poor Section
Generally, to promote social equality, more goods are produced for poor people. This reduces the rich-poor gap in society by providing a better standard of living to poor people. But there is a hidden cost: by producing goods for poor people, the profits of producers remain low. Low profits result in low investment and, in turn, low GDP growth, so the economy may remain backward for a long time to come. Thus, there is a problem of choosing between social equality and GDP growth.
| https://tutorstips.com/central-problems-of-an-economy/ | 21
22 | What are hearing tests?
Hearing tests are used to assess your ability to hear different sounds and to determine if there are any problems.
Why are hearing tests needed?
Hearing tests are carried out for two main reasons:
- as a routine part of a baby’s or young child’s developmental checks
- to check the hearing of someone who is experiencing hearing problems or has a hearing impairment
It is important that hearing tests are carried out so that the right support and treatment can be provided.
Read more about why hearing tests are needed.
Hearing tests are carried out at regular intervals during childhood, starting with the newborn hearing screening programme (NHSP) within a few weeks of birth.
Your child's hearing may also be checked during a general health review when they are a few years old and before they start school for the first time.
If at any point you are worried about your or your child's hearing, you can ask your doctor for a hearing test.
Read more about when hearing tests are needed.
What happens during a hearing test?
Although your doctor or practice nurse can examine your ears, you will usually be referred to a specialist for a hearing test.
A number of different tests are used to check how well the ears are functioning and their ability to detect different levels of sound.
Common hearing tests include:
- automated otoacoustic emissions (AOAE) tests – a computer attached to an earpiece plays clicking noises and measures the response from the ear
- automated auditory brainstem response (AABR) tests – sensors are placed on the head and neck to check the response of the nerves to sound played through headphones
- pure tone audiometry tests – sounds of different volumes and frequencies are played and a button is pressed when they are heard
- bone conduction tests – a vibrating sensor is placed behind the ear to test how well sound travels through the bones in the ear
The tests used generally differ between children and adults, but they are all completely painless.
The results of some of these tests are recorded on a graph called an audiogram, so that the type of hearing loss can be identified.
Read more about how hearing tests are carried out and hearing and vision tests for children.
Your hearing may be affected if sounds don't reach the inner ear efficiently. This is known as conductive hearing loss. Conductive hearing loss can be caused by problems such as a blockage in your ear canal (such as from ear wax) or in the middle ear (for example, glue ear). An infection of your outer ear (otitis externa) or middle ear (otitis media) may also be responsible. Hearing loss of this type is often temporary and reversible.
If sounds reach the inner ear but are still not heard, the fault lies in the inner ear or, rarely, in the hearing nerve. This is called sensori-neural hearing loss. Inner ear hearing loss may occur for a number of reasons, most commonly as a result of age-related change. Inner ear hearing loss is nearly always permanent.
Hearing tests are used to determine the type of hearing loss that you have.
When should hearing tests be performed?
Hearing tests are carried out regularly, particularly during childhood, to identify any problems as soon as possible.
In the past, many children born with a hearing impairment were not diagnosed until they were 18 months or older. However, identifying hearing loss late can have a negative impact on a child's language development, social skills and self-confidence. If hearing problems are diagnosed early, appropriate support can be provided for the child and their family.
It is also important to identify hearing loss in adults early, as treatment is more likely to be effective the earlier problems are diagnosed.
Later childhood tests
There will also be further opportunities to check your child’s hearing as they get older. For example:
- a child may have their hearing checked as part of their general review when they are about two-and-a-half years old
- all children have a hearing test when they are between four and five years old before they start school
- your doctor can arrange for your child to have a hearing test at any age if you feel that their hearing is not right (see below)
The age at which routine tests or assessments are carried out may vary between different areas. Your doctor or health visitor should be able to advise you.
Reporting problems to your doctor
If you think your child may have a hearing problem, take them to see your doctor as soon as possible. Hearing tests can be used at any time to help diagnose or rule out other health conditions. In some cases, hearing loss may be the cause of delayed speech and language development.
Many children who experience hearing problems turn out to have a very common and temporary condition called glue ear, in which mucus blocks the ear.
Less commonly, other explanations for a child apparently having hearing difficulties include behavioural problems such as attention deficit hyperactivity disorder (ADHD).
Adult hearing tests
Adults can also request a hearing test from their doctor if they are concerned about their hearing.
Hearing loss in old age is a common and usually gradual process. It often begins with difficulty hearing other people clearly, particularly when there is a lot of background noise. At first you may not realise that you have a hearing impairment. Other members of your family may be the first ones to notice that you have a problem.
However, there are other reasons why adults might lose their hearing, such as ear infections or prolonged exposure to excessive noise.
You should visit your doctor if you experience hearing loss in one or both ears, or if you have:
- tinnitus – ringing or buzzing in your ears
- vertigo – dizziness or loss of balance
- severe ear pain that lasts for more than 24 hours
- discharge – fluid or blood coming out of the ear
You may also need to have a hearing test if you have a head injury, because it could damage your inner ear.
Older people with permanent hearing loss may benefit from having a hearing aid. If you have a hearing aid fitted, you will receive advice and support from your local audiology department, including advice about changing the battery, repairs and upgrades.
You are more likely to benefit from a hearing aid if your hearing loss is diagnosed early, so you should ask your doctor to arrange a test for you if you are at all concerned about your hearing.
How are hearing tests performed?
A hearing test is usually carried out after your ears have been examined and you have been referred to a specialist.
Your doctor or practice nurse will first ask about any symptoms you may be experiencing, such as:
- pain or discharge (fluid)
- tinnitus – noise in one or both ears
- vertigo (dizziness)
- hearing loss
- previous, relevant medical problems
Your ear will be examined using an instrument called an auriscope, also known as an otoscope. An auriscope is a small hand-held torch with a magnifying glass which allows the doctor to see the eardrum and the passageway that leads to it from the outer ear. It can be used to look for:
- discharge – fluid coming out of the ear
- a bulging eardrum – indicating that there is infected fluid in the middle ear
- a retracted eardrum – indicating uninfected fluid in the middle ear (glue ear)
- perforated eardrum – a hole in the eardrum, with or without signs of infection
- ear wax or foreign bodies that might be blocking the ear
Your doctor may also carry out simple tests using their voice to help determine the extent of your hearing loss. If there are any concerns, you or your child may be referred to an ear, nose and throat (ENT) specialist for further tests.
Hearing tests in children
A range of different techniques are used to detect hearing problems. Some hearing tests are only used for children, including:
- automated otoacoustic emissions (AOAE) tests – a computer attached to a small earpiece plays quiet clicking noises and measures the response from your child's ear
- automated auditory brainstem response (AABR) tests – sensors are placed on your child's head and neck to check the response of their nerves to sound played through headphones
- play audiometry tests – sounds of different volumes and frequencies are played to your child and they carry out a simple task when they hear them
Read more about how hearing and vision tests for children are carried out.
However, some tests, such as pure tone audiometry, speech perception and tympanometry (see below) can be used to test adults and well as children.
Hearing tests in adults
There are a number of different ways to test adult hearing. Some of these are briefly described below.
Pure tone audiometry
Pure tone audiometry (PTA) tests the hearing of both ears. During PTA, a machine called an audiometer is used to produce sounds at various volumes and frequencies (pitches). You listen to the sounds through headphones and respond when you hear them by pressing a button.
Speech perception test
The speech perception test, also sometimes known as a speech discrimination test or speech audiometry, involves testing your ability to hear words without using any visual information. The words may be played through headphones or a speaker, or spoken by the tester.
Sometimes, you are asked to listen to words while there is a controlled level of background noise.
Tympanometry
The eardrum should allow as much sound as possible to pass into the middle ear. If sound is reflected back from the eardrum, hearing will be impaired.
During tympanometry, a small tube is placed at the entrance of your ear and air gently blown down it into the ear. The test can be used to confirm whether the ear is blocked, most commonly by fluid.
Whispered voice test
The whispered voice test is a very simple hearing test. It involves the tester blocking one of your ears and testing your hearing by whispering words at varying volumes. You will be asked to repeat the words out loud as you hear them.
Tuning fork test
A tuning fork produces sound waves at a fixed pitch when it is gently tapped and can be used to test different aspects of your hearing.
The tester will tap the tuning fork on their elbow or knee to make it vibrate, before holding it at different places around your head.
This test can help determine if you have conductive hearing loss, which is hearing loss caused by sounds not being able to pass freely into the inner ear, or sensori-neural hearing loss where the inner ear or hearing nerve is not working properly.
Bone conduction test
A bone conduction test is often carried out as part of a routine pure tone audiometry (PTA) test in adults.
Bone conduction involves placing a vibrating probe against the mastoid bone behind the ear. It tests how well sounds transmitted through the bone are heard.
Bone conduction is a more sophisticated version of the tuning fork test, and when used together with PTA, it can help determine whether hearing loss comes from the outer and middle ear, the inner ear, or both. | https://www.livehealthily.com/hearing-problems/hearing-tests | 21 |
25 | History Term Paper
Jack Conway
Mr. Hilgendorf
February 25, 2013
Word Count: 3234
Reconstruction: Rebuilding America
The United States was founded on the belief that every man has "certain inalienable Rights." Not until ninety years later, however, when slavery was abolished, did the United States actually offer these "Rights" to all of its citizens. The 19th century was a turbulent time of stress and change for America. One of the most controversial dilemmas was the issue of slavery. Slavery was considered by many to be morally wrong, and it undermined America's most valued beliefs.
Despite this inconsistency, slavery was still widely supported and permitted out of economic necessity in the South. Slavery divided the nation in half. The economy of the South was primarily agricultural production on plantations. This form of economy made slavery vital to the state of the South. In the North, The economy was primarily industrial, which eliminated the dependency on slavery much earlier. Because of the vastly different economic bases of these two regions, their culture and views of the world begin to shift apart.
On top of economic dissimilarities, conflict between the North and the South grew because of cultural and political differences. After the first openly anti-slave president, Abraham Lincoln, was elected, the South eventually seceded from the Union launching the American Civil War. The South fought to become its own nation while the North fought to bring the nation back. Eventually, because of a significantly larger population, more supplies, and superior logistics, the North won and the South was forced back into the Union. Both sides were hurt badly by the war, losing a substantial number of people and resources.
The South was left in a state of total destruction ranging from lawlessness to austere military regimes, forcing it into economic hardship. The transition from slavery to free labor was far from smooth. The goal of Reconstruction was to restore the southern economy, government, and to give Blacks equal rights under the law. While reconstruction may not have been as successful as many would have hoped, the question remains: in hindsight, based on the economic and cultural conditions of the 19th century, could Reconstruction have been handled more successfully?
Reconstruction did very little for the people of the South. The economy was still in poor shape, racism and violence dramatically increased, and the standard of living for Blacks, who were legally free, was not any better than before. Even though Reconstruction pragmatically failed, given the circumstances of the time, there was no feasible way that it could have been done significantly better. The actual course of Reconstruction was complex and far from easy. After the South was forced back into the Union it had no political power.
All of the slaves were now free as a result of the Emancipation Proclamation. Former Confederates could no longer vote or run for political office. The victorious North then had to decide: Under what terms would the South rejoin the Union? Would plantations stay with their original owners or be divided up among southerners? What would the new role of Blacks be in this new society? How much power and what rights will the freed Blacks have? Lincoln’s plan was to give full amnesty and restoration of all rights “except as to slaves. This plan meant that former Confederates should be given all of their former belongings and rights except for their former slaves, who were now considered free. Lincoln felt that that the best way to deal with the former Confederates was to befriend them and thus eliminate hostilities: ”Am I not destroying my enemies when I make friends of them? ” Lincoln’s plan was considered to be too lenient by the Republican Congress. The eventual compromise was a “ten percent plan” that would allow all southern rights to be restored only after ten percent of the southern population swore an oath to the loyalty of the Union.
By the time each former Confederate state had passed the “ten percent” quota, the Thirteenth, Fourteenth, and Fifteenth Amendments had already passed in a Congress without southern representation. Clearly the North and federal government still held most of the power over the South. The most recalcitrant Confederate states underwent radical reconstruction enforced by a military regime. After Abraham Lincoln’s assassination, Andrew Johnson, his Vice President, replaced him. Johnson1, a southerner, shared Lincoln’s ideas on leniency when it came to reconstructing the South. He wanted minimal emands. At first Radical Republicans were unwilling to spread national power and felt that in order to properly reconstruct the South they had to maintain their authority. However, the centralization of power did not last long as violence grew in southern states and as the desire to preserve the federal system’s pre-war balance weighed heavily on the minds of leading Republicans. Republican Senator James W. Grimes once said in a letter, “We are gradually surrendering all rights of the states,” illustrating that the Union intended to transfer power back to the southern states.
Despite the turmoil caused during Reconstruction, there were some substantial accomplishments. The Thirteenth Amendment was the first of the “Reconstruction Amendments. ” This Amendment made slavery illegal in every part of the United States. The next was the Fourteenth Amendment that made Blacks citizens and prohibited any state from interfering with the “inalienable rights” of citizens. Lastly and possibly the most important accomplishment was the Fifteenth Amendment which gave Blacks, and any male citizen of the United States the right to vote.
Without the above three amendments to the U. S. Constitution, Blacks might still be slaves today and considered legally inferior to whites. Another large benefit of Reconstruction was public education was made available to Blacks for the first time in the South. Black access to education, even if it was underfunded and inferior to that of the whites, was still a huge step forward. Blacks had a tenacity to learn because they were deprived of that privilege for so long.
In 1850, the literacy rate among Blacks ranged from ten to twenty percent, but after 1890, when the public education system included Blacks, their literacy rate jumped to over eighty percent. For period of time after the war ended, Blacks could vote and former Confederates could not. Therefore, Blacks gained some political power in many of the southern states that had both large black and confederate populations. The southern economy began to industrialize taking advantage of local coal, oil, cheap labor, and steel although the industry in the South never was as productive or powerful as it was in the North.
As the “New South” began to develop and industrialize, it began to better train and take care of its newly freed black workers to prevent them from Unionizing. The South provided workers with schools, hospitals, recreational facilities, housing, and offered scholarships for Blacks to attend Booker T. Washington’s Tuskegee College. Even though the rationale for providing these benefits to Blacks was to prevent Unionization, these steps were significant achievements toward Reconstruction. From a purely legal standpoint, Reconstruction accomplished most of its goals; however, it was not without significant faults.
Even though Blacks obtained their rights under the law, the southern government was rebuilt, and the southern economy was redirected, most of those changes were short lived. The swift and radical Reconstruction efforts occurred during that short period in time when Blacks voted in the absence of white Confederates. In the end white Confederates regained power. By 1901 Blacks were left in the same basic position as before Reconstruction. According to Wendell Phillips, a lawyer and abolitionist, the Emancipation Proclamation and Reconstruction Amendments only “free[d] the slaves and ignore[d] the negro. When it came to actually improving the quality of life for freed Blacks, almost nothing was achieved. Even with their once insatiable desire to be free and educated, Blacks became fearful and submissive to whites after years of oppression, preventing them from pursuing their goals, and becoming independent of whites. Because of a surplus of workers and few jobs, Blacks had power under the law, but no real power due to insidious racism, and limited enforcement of the law. Blacks were often blocked from voting by terror and white supremacy groups.
The emergence of industrialization in the South, while economically beneficial, created a gap between the worker and the elite and caused former southern ideas of paternalism towards Blacks to slowly deteriorate. With a loss of compassion, working condition often became extremely harsh, mimicking those in the North. These conditions in factories, also known as “wage slavery,” were often compared to the conditions under slavery and were almost as horrendous. Another large part of the transition from free to paid labor was share cropping.
Large plantation owners rented out small tracts of land to several workers who would use the land for producing crops. These laborers were then given a percent of the crop that they grew. This type of farming is barely more beneficial than slavery. Workers were hardly given enough food to feed themselves after working a full day. Not only were Blacks still stuck in poverty with living conditions that rivaled those of slavery, but also they were now subjected to an exponential growth of race-related violence. After the Emancipation Proclamation, the war effort became more about freeing slaves than anything else.
Whites in both the North and South developed a deep hatred for Blacks that escalated into irrational acts of violence and prejudice. Before the war actually ended, many whites were thinking: “If we are got to be killed up for negroes than we kill everyone [negro] in this town. ” After the North won and the Reconstruction Amendments were passed, securing black rights and citizenship, many southern whites did everything they could to maintain social superiority. Radical political groups in former Confederate states reverted to complete guerilla warfare forming terrorist groups like the Klu Klux Klan.
These groups were meant to frighten Blacks from becoming independent and to make sure that they did not progress. These groups barred them from voting and restricted their freedom with threats of violence. As the following quote from the Maryland Convention Debates illustrates, the state of Blacks was of little concern to the majority of the government leaders including Andrew Johnson, who was more focused on revenge than on the welfare of the newly freed slaves: “There has been no expression… of regard for the negro…. ” This negligence left Blacks to fend for themselves.
Violence against Blacks also occurred in the North. With a surplus of workers and immigrants from Europe, many people in North also developed hatred toward Blacks. As the following quote illustrates, the fear that Blacks would immigrate to the North and steal scarce jobs instilled a deep sense of loathing and caused serious outbursts of violence against Blacks even in cities like New York. “The African race… were literally hunted like wild beasts. ” This kept most Blacks in life threatening fear of white violence and prevented them from advancing their social and economic status.
Thus, even by the end of Reconstruction, freed slaves were still dependent on whites for their well being and had little to no means of advancing or defending themselves. They were now slaves of fear and dependency. White hostility toward Blacks was not the only thing that stunted black advancement. The radical Democratic southern government also traumatized the black community. Towards the end of Reconstruction the Democratic party worked its way back into government power. Most of these politicians were anti black and enforced rules that undermined the legal and political gain of Blacks.
They instituted things like “Black Codes” that regulated black migration and restricted jobs. They also helped set up “separate but equal laws” in the case of Plessey vs Ferguson, essentially making Blacks second class citizens with access to inferior education and public services. Another large failure of Reconstruction was inadequate investment in the rebuilding of the southern economy. While the South did see some industrial growth, and it did restore many of its plantations, the economy was not even close to reaching its former glory and power.
Billions of dollars in slaves, confederate money, and ruined property were wiped out without any financial compensation from the North. The “Retreat” of the Republicans from Reconstruction in 1869 left the South in ruin and the freed slaves jobless. After the “Retreat” had little concern with the South and what happened to it. The overwhelming majority of federal funding was given to the North while less than ten percent was given to the South for Reconstruction. Other ways that the North took advantage of the South occurred in the compromise of 1877.
This controversial “compromise,” also known as “The Great Betrayal” required the South to acknowledge Hayes as the new president in return for economic help and railroad construction. The North did not even follow through with either of its promises. Most Republicans seemed more focused on revenge and politics than the actual course of Reconstruction or the well being of the freed slaves. Even Andrew Johnson exclaimed, “Damn the Negros, I am fighting those traitorous aristocrats, their masters. He was clearly more concerned with avenging the Union than with the welfare of Blacks or the Reconstruction of the South. Clearly from the beginning of Reconstruction, there was great ambivalence on how to proceed in rebuilding the South and repairing the country. At the end of the Civil War, two black leaders had emerged as advocates of reconstruction; however, their ideas and proposed methods were diametrically opposed. While both plans were good in theory, they were also rooted in the unique life experiences of each black leader. These two prominent black spokesmen were W. E. B Dubois and Booker T.
Washington. Dubois was a Radical for his time believing that Blacks should reach for the same status as whites right away. He believed that Blacks should attend liberal art colleges and that ten percent of this population should aspire to become professionals: teachers, lawyers, doctors, etc. Dubois referred to this ten percent as the “Talented Tenth. ” His thinking was that by securing power and prestige in society, Blacks will be able to better their situation using their own will and authority. For example, before the Thirteenth Amendment was passed, he argued the following to Booker T.
Washington: “The power of the ballot we need in self defense else, what shall save us from a second slavery. ” In contrast, Booker T. Washington believed that freed Blacks should start at the bottom with minimal rights and work their way up over generations. Booker thought that Blacks should give up most of their political power and rights for now and seek out more obtainable goals in technical careers like farming. He believed that with each generation of gradual change, Blacks would become more accepted by the whites and rise up in society.
And that becoming more economically independent was the key as this quote demonstrates “At the bottom of education, at the bottom of politics, even at the bottom of religion, there must be for our race economic independence. ” Booker was well liked by both Blacks and whites and had many influential supporters, including Fredrick Douglas who agreed with his stance on Reconstruction as this quotation illustrates, “What shall we do with the negro… nothing. ” As will be seen below, both Washington and Dubois were products of their up bringing and life experiences.
Dubois was born into a financially stable family in the northern United States. He was well educated is renowned as the first Black student to ever attend Harvard. Booker T. Washington, on the other hand, was born a slave in the South and worked his way up putting himself through school. These two vastly different life experiences are reflected in the respective views of Dubois and Washington on avenues to Reconstruction. As it turned out, the actual course of Reconstruction resembled more of Washington’s plan than that of Dubois. Blacks obtained their freedom and legal rights, but not much more than that.
Most Blacks pursued technical skills, but with increased industrialization in the South, those skills became rapidly outdated, leaving them jobless. In my opinion, the plan of Dubois, while it may be seem preferential in hindsight, would probably have failed as well. With Dubois’s plan not only a small percentage of Blacks would have been such prestigious jobs in 19th century cultural circumstances. Furthermore, in attempting to do so a much larger percentage would gave only enraged the whites, who were holding most of the authority and power.
This strategy would only have intensified the already horrendous violence inflicted against Blacks across the nation. In theory, a couple of strategies might have increased the success of Reconstruction. Eliminating the “Ten Percent” plan and making it harder for the South to rejoin the Union, would have given the North a longer period of time to secure black rights and plan for legal protections against violence. Also, more generous and sustained financial support for repairing the southern economy might have gone a long way towards increasing the economic independence of Blacks.
However, after the Civil War, both sides were exhausted financially, physically, and emotionally. This fatigue obviously caused a lack of energy for Reconstruction. This lack of energy was especially true for Northerners, who were not directly affected by Reconstruction efforts in the South and had little to gain from it. The strategy of prolonging the “Ten Percent” plan, while allowing the North to lay more ground rules in the South to help control violence and prohibit things like the “Black Codes,” would not have changed the racist culture and would only have intensified hostilities towards Blacks in the South.
In conclusion, given the dominant and deeply imbedded culture of racism in 19th century America, any efforts toward a swift and immediate Reconstruction of the South faced guaranteed resistance. In that cultural and political context, and in the wake of the previously unimaginable devastation and destruction caused by the Civil War, it is surprising and impressive that Reconstruction was as successful as it actually was. Works Cited African American Quotes. N. p. , n. d. Web. 25 Feb. 2013. <http://africanamericanquotes. org>. Brainy Quote. N. p. , 2001. Web. 23 Feb. 2013. lt;http://www. brainyquote. com>. 1877 Compromise Aborted Reconstruction. N. p. , 1997. Web. 25 Feb. 2013. <http://www. news-reporter. com>. Failures of Reconstruction. N. p. , n. d. Web. 25 Feb. 2013. <http://www. wwnorton. com>. Foner, Eric. Reconstruction Americas Unfinished Revolution. New York: Louisiana Sate, 1984. Print. Franklin, John Hope. Reconstruction after the Civil War. Chicago: U of Chicago, 1994. Print. Fredrickson, George. Big Enough to Be Inconsistent. Boston: Harvard University, 2008. Print. History Engine. N. p. , 1995. Web. 16 Feb. 2013. lt;http://historyengine. richmond. edu>. Litwack, Leon F. Been in the Storm so Long the Aftermath of Slavery. New York: Alfred A. Knopf, 1980. Print. National Archives. N. p. , 4 June 1995. Web. 24 Feb. 2013. <http://www. archives. gov>. PBS. N. p. , 1995. Web. 12 Feb. 2013. <http://www. pbs. org>. Perman, Michael, ed. Major Problems In The Civil War And Reconstruction. Toronto: D. C. Heath and Country, 1991. Print. Randall, J. G, and David Donald. The Civil War and Reconstruction. New York: D. C. Heath and company, 1937. Print. Stampp, Kenneth. The Era of Reconstruction.
New York: Vintage, 1965. Print. Takaki, Ronald. Iron Cages. New York: Alfred A. Knopf, 1979. Print. Think Exit. N. p. , 1999. Web. 20 Feb. 2013. <http://thinkexit. com>. Trefousse, Hans L. Andrew Johnson A Biography. New York: W. w. Norton, 1989. Print. Uohio. N. p. , 1995. Web. 25 Feb. 2013. <http://www. academic. csuonio. edu>. U. S. History. Curtis Publishing, 1995. Web. 10 Feb. 2013. <http://www. ushistory. org>. Zinn, Howard. A People’s History. New York: HarperCollins, 1980. Print. ——————————————– [ 1 ]. www. archives. gov 2 ]. A People’s History by Howard Zinn p 172 [ 3 ]. Reconstruction, Eric Foner, p43 [ 4 ]. Major Problems, Michael Perman, Leon F. Liwack, p386 [ 5 ]. Reconstruction, Eric Foner, p37 [ 6 ]. Reconstruction, Eric Foner, Abraham Lincoln, p35 [ 7 ]. www. brainyquotes. com Abraham Lincoln [ 8 ]. Major Problems, Michael Perman, p210 [ 9 ]. Reconstruction, Eric Foner, p58 [ 10 ]. 1 Major Problems, Michael Perman, p 355 [ 11 ]. Major Problems, Michael Perman, James W. Grimes, p415 [ 12 ]. academic. csuohio. edu [ 13 ]. Reconstruction Civil War, John Hope Franklin, p174 [ 14 ].
Iron Cages, Ronald Takaki, p 198 [ 15 ]. A Peoples History, Howard Zinn, p198 [ 16 ]. Reconstruction, Eric Foner, Wendell Phillips, p35 [ 17 ]. Been In The Storm So Long, Leon F. Litwack, p222 [ 18 ]. Major Problems, Michael Perman, p387 [ 19 ]. Iron Cages, Ronald Takaki, p194 [ 20 ]. Reconstruction, Eric Foner, p 173 [ 21 ]. A Peoples History, Eric Zinn, p192 [ 22 ]. Iron Cages, Ronald Takaki, p200 [ 23 ]. Reconstruction, Eric Foner, p43 [ 24 ]. Reconstruction, Eric Foner, Richard P Fuke, Maryland Convention Debates p41 [ 25 ]. Reconstruction, Eric Foner, p33, New York Times, March 7, 1864 [ 26 ].
Reconstruction, Eric Foner, p423 [ 27 ]. A Peoples History: Howard Zinn, p205 [ 28 ]. Major Problems, Michael Perman, M. Les Benedict, p 415 [ 29 ]. A Peoples History, Eric Zinn, p206 [ 30 ]. www. news-reporter. com [ 31 ]. A Peoples History, Eric Zinn, p206 [ 32 ]. Reconstruction Civil War, John Hope Franklin, p 170 [ 33 ]. Reconstruction, Eric Foner, Andrew Johnson, p48 [ 34 ]. www. pbs. org [ 35 ]. www. thinkexit. com, Frederick Douglass [ 36 ]. Africanamericanquotes. org, Booker T. Washington [ 37 ]. A Peoples History, Eric Zinn, p208 [ 38 ]. Reconstruction, Eric Foner, Frederick Douglass, p67
| https://graduateway.com/could-reconstruction-have-been-more-successful/ | 21
22 | It is not enough for a buyer to want or desire an item. He or she must show the ability to pay and then the willingness to pay. So, here it is important to distinguish between demand and quantity demanded. Demand refers to how much of a product or service is desired by buyers. The quantity demanded, in its turn, is the amount of a product that people are willing to buy at a certain price.
The law of demand states that the higher the price of a product, the less of it people will demand; that is, the quantity demanded varies inversely with price, all other factors remaining equal. Factors other than a good's price which affect the amount consumers are willing to buy are called the non-price determinants of demand. The law of demand thus expresses the relationship between prices and the quantity of goods and services that would be purchased at each and every price: the higher the price of a product, the lower the quantity demanded.
So what factors other than price alter a consumer's desire, willingness and ability to pay for different products? They include consumers' income and tastes, the prices and availability of related products such as substitutes and complements (complementary goods), and the item's usefulness.
Substitutes are goods that satisfy similar needs and which are normally consumed in place of each other. As the price of one substitute declines, demand for the other substitute will decrease. Butter and margarine are close substitutes. If the price of butter goes up, then people will tend to substitute margarine for butter.
Complementary goods are those that are normally consumed together (e.g., cars and tires). An increase in the price of a product will lead to a decrease in demand for its complement, while a decrease in the price of a product will increase demand for its complement.
Turning to quantity demanded, there are two ways in which the amount of a particular product that people buy can change. First, according to the law of demand, a change in price leads to a movement along the original demand curve and results in a change in the quantity demanded; that is, more will be purchased, but only at a lower price. Second, when one of the non-price factors changes (e.g., a change in income) there will be a change in demand. This change causes a shift of the demand curve either outward or inward in response to a change in a condition other than the good's price. It means that more or less will be purchased at the same price.
All of the non-price determinants (changes in the size of the market, income for the average consumer, population size, the prices and availability of related goods, consumer preferences) are directly related to consumers. In other words, at any given price, consumers will be willing and able to purchase either more or less.
Consider the effect of a change in consumer preferences, or desire, for a particular product. On the one hand, if a product like cut jeans becomes the latest fashion fad, demand at any given price will increase and the demand curve shifts out. On the other hand, if there is a decline in the size of the market or a product becomes unfashionable, then the demand curve shifts in. Thus, the only thing that can change the quantity demanded is a change in the market price, all other things remaining the same, while a change in demand results from a change in any of the non-price determinants, the good's price remaining equal.
Economists often look at things graphically. A demand curve shows an inverse relationship between the price and the quantity demanded. The demand curve represents the quantities of a product or service which consumers are willing and able to buy at various prices, all non-price factors being equal. The demand curve slopes downward from left to right based on the law of demand. Or to put it another way, a demand curve shows that the quantity demanded is greater at a lower price and lower at a higher price.
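To make the inverse relationship concrete, here is a minimal Python sketch of a hypothetical linear demand curve; the intercept and slope values are invented for illustration and do not come from any real market data.

```python
# Hypothetical linear demand curve: quantity demanded falls as price rises.
def quantity_demanded(price, intercept=100.0, slope=-4.0):
    """Quantity demanded at a given price (never negative); illustrative numbers only."""
    return max(intercept + slope * price, 0.0)

for price in [5, 10, 15, 20]:
    print(f"price = {price:>2} -> quantity demanded = {quantity_demanded(price):.0f}")
# Higher prices give lower quantities: a movement along the demand curve.
```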
Increased demand can be represented on the graph as the curve being shifted to the right, because at each price a greater quantity is demanded. An example of this would be more people suddenly wanting more of a particular product. On the other hand, if demand decreases, the opposite happens. Decreased demand can be represented on the graph as the curve being shifted to the left, because at each price the quantity demanded is less. It means that fewer people want to buy this product.
The difference between a change in demand and a change in the quantity demanded is subtle but important. If sales of ice cream rise in summer at an unchanged price, it is because consumer demand has truly increased: clearly it is hot, and the whole demand curve has shifted out. In this case the business can most likely raise prices without suffering a cut in sales. This is a change in demand. In winter the business suffers a fall in sales at the same price, because demand has shifted back in. The only way to increase sales is then to reduce the price, and the extra ice cream sold as a result of the price cut reflects a movement along the lower winter demand curve rather than any recovery in underlying demand. This is a change in the quantity demanded.
Since any transaction involves both buyers and sellers, demand is only one aspect of decisions about prices and the amounts of goods traded, supply is the other. So, supply is one of the two key determinants of price and it describes the behavior of sellers.
In economics, supply represents the amounts of items that suppliers are willing and able to offer for sale at different prices at a particular time and place, all non-price determinants being equal. The quantity supplied refers to the amount of a certain product producers are willing to supply at a certain price. A change in the price of the product will cause a change in the quantity supplied.
The law of supply states that the quantity of a commodity supplied varies directly with its price, all other factors that may determine supply remaining the same. The law of supply expresses the relationship between prices and the quantity of goods and services that sellers would offer for sale at each and every price. In other words, the higher the price of a product, the higher the quantity supplied. As the price of a commodity increases relative to price of all other goods, business enterprises switch resources and production from other goods to production of this commodity, increasing the quantity supplied.
Price is an important determinant of the quantity supplied. The law of supply states that the amount offered for sale rises as the price rises. The quantity of a certain product that producers are willing to offer for sale increases with its price primarily because the higher price is needed to cover the increased costs of producing the extra output.
Thus, according to the law of supply a change in price leads to a movement along the original supply curve and results in a change in the quantity supplied. On the one hand, an upward movement along the curve represents an increase in the quantity supplied as the price is raised. On the other hand, a downward movement along the curve shows a decrease in the quantity supplied as a result of a price reduction.
When one of the factors other than a product’s price changes (e.g., a change in technology) there will be a change in supply. Economists use the term “supply” to refer to the original supply curve. An increase in supply is reflected by a shift of the supply curve to the right. It means that at the same price, sellers are willing to supply more than they were willing to supply before. A decrease in supply is represented by a shift of the original supply curve to the left. It means that at any given price, producers are willing to supply less than they were willing to supply before.
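A rough sketch of the same distinction in code: changing the price moves us along a hypothetical supply curve, while changing a non-price factor (modelled here simply as a different intercept) shifts the whole curve. The numbers are assumptions chosen only for the example.

```python
# Hypothetical linear supply curve: quantity supplied rises with price.
def quantity_supplied(price, intercept=-10.0, slope=6.0):
    return max(intercept + slope * price, 0.0)

# Movement along the curve: only the price changes.
print(quantity_supplied(5), quantity_supplied(8))      # 20.0 38.0

# Shift of the curve: a non-price change (say, cheaper inputs) raises the
# intercept, so more is supplied at the same price of 5.
print(quantity_supplied(5, intercept=2.0))             # 32.0
```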
However, there are things other than price which affect the amounts of goods and services suppliers are able to bring into the market. These things are called the non-price determinants of supply.
As has been mentioned, a change in the quantity supplied is caused only by a change in the price of the product. A change in supply is caused by a change in the non-price determinants of supply. Based on a new supply schedule, the supply curve moves inward or outward: prices stay the same, but the quantities supplied at each price change.
Non-price determinants of supply are:
Changes in the cost of production. Production costs include labor costs and the other costs of doing business that enter the production process. The cost of production is probably one of the most important influences on the production process. An increase in the cost of any input brings about lower output, which means that the supply curve will shift inward. Regardless of the price that a firm can charge for its product, price must exceed costs to make a profit. Thus, the supply decision is a decision made in response to changes in the cost of production.
Changes in technology. Changes in technology usually result in improved productivity. Improved technology decreases production costs and therefore increases supply.
Changes in the price of resources needed to produce goods and services. If the price of a resource used to produce the product increases, this will increase the production costs and the producer will no longer be willing to offer the same quantity at the same price. He will want to charge a higher price to cover the higher costs. As a result the supply curve will shift inward.
Changes in the expectations of future prices. Changes in producers' expectations about the future price can cause a change in the current (existing) supply of products. If producers anticipate a price rise in the future, they may prefer to store their products today and sell them later. As a result, the current supply of a particular product will decrease. In this case the supply curve will shift to the left. It is necessary to keep in mind that supply is not simply the quantity available for sale.
Changes in the profit opportunities. If a business firm produces more than one product, a change in the price of one product can change the supply of another product. For example, automobile manufacturers can produce both small and large cars. If the price of small cars rises, the producers will produce more small cars to earn higher profits. They will shift the resources of the plant from the production of large cars to the production of small ones. Therefore, the supply of small cars will increase and a supply curve will shift outward. So, profit opportunities encourage producers to produce those goods that have high prices.
Changes in the number of suppliers in the market. Potential producers are producers who could make a product but do not do so because its price is relatively low. If the price of a product rises, potential suppliers will switch production over to that product to make more profit. If more producers enter a market, the supply will increase, shifting the supply curve to the right.
In order to understand better the theory of supply and demand it is necessary to know how much buyers and sellers respond to price changes. This responsiveness is called elasticity.
Elasticity varies among products because some products are more essential to the consumer than others. A good or service is considered to be highly elastic if a slight change in price leads to a sharp change in the quantity demanded. A price increase for a product or service that isn't considered a necessity will discourage more consumers from buying it. On the other hand, an inelastic good or service is one in which changes in price bring about only modest changes in the quantity demanded, if any at all. Products that are necessities are more insensitive to price changes because consumers continue buying them despite a price rise. This responsiveness is known as the price elasticity of demand.
In economics, the price elasticity of demand is an elasticity that measures the nature and degree of the relationship between changes in the quantity demanded of a commodity and changes in its price.
One typical application of the concept of elasticity is to consider what happens to consumer demand for a product when prices increase. As the price of a product rises, consumers will usually demand less of that product, perhaps by consuming less, substituting another product for it, and so on. The greater the extent to which demand falls as price rises, the greater the price elasticity of demand is.
Demand is called elastic if a small change in price has a relatively large effect on the quantity demanded.
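Price elasticity of demand is commonly computed as the percentage change in quantity demanded divided by the percentage change in price. A small sketch with invented figures:

```python
def price_elasticity(q_old, q_new, p_old, p_new):
    """Point-to-point price elasticity of demand (illustrative, not the midpoint method)."""
    pct_quantity = (q_new - q_old) / q_old
    pct_price = (p_new - p_old) / p_old
    return pct_quantity / pct_price

# Hypothetical: price rises from 4 to 5 (+25%), quantity falls from 120 to 84 (-30%).
e = price_elasticity(q_old=120, q_new=84, p_old=4, p_new=5)
print(f"elasticity = {e:.2f}")   # -1.20: quantity responds more than proportionally, so demand is elastic
```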
The number and quality of substitutes for a product are the basic influence on price elasticity of demand. If the prices of substitutes remain the same, a rise in the product’s price will discourage consumers from buying this product. On the other hand, if there is a price cut in the product, consumers will substitute other items for this product. Thus, the demand for this product tends to be elastic. In general, demand is elastic for non-essential commodities (visits to theatres or concerts, holidays, parties, etc.)
However, there are some goods that consumers cannot consume less of and cannot find substitutes for even if prices rise. Goods and services that are necessities, are relatively inexpensive, or are difficult to find substitutes for are said to have inelastic demand. To put it another way, a change in price results in a relatively small effect on the quantity demanded.
The elasticity of demand also determines the effect of a price change on sellers' total revenue, which is the amount paid by the buyers and received by the sellers of products. When the price elasticity of demand for a product is elastic, the percentage change in quantity is greater than the percentage change in price. Hence, when the price is raised, producers' total revenue falls, and when the price is decreased, total revenue rises. When the price elasticity of demand for a product is inelastic, the percentage change in quantity is smaller than the percentage change in price. Therefore, when the price is raised, producers' total revenue rises, and when the good's price falls, total revenue decreases.
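The revenue effect can be checked numerically. In the hypothetical figures below, the same 10% price rise lowers total revenue when demand is elastic and raises it when demand is inelastic:

```python
def revenue(price, quantity):
    return price * quantity

# Elastic demand (hypothetical): a 10% price rise cuts quantity by 20%.
print(revenue(10, 100), "->", revenue(11, 80))   # 1000 -> 880: revenue falls

# Inelastic demand (hypothetical): a 10% price rise cuts quantity by only 5%.
print(revenue(10, 100), "->", revenue(11, 95))   # 1000 -> 1045: revenue rises
```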
The price elasticity of supply, in its turn, is the degree of proportionality with which the amount of a commodity offered for sale changes in response to a given change in the going price. In other words elasticity of supply is a measure of how much the quantity supplied of a particular product responds to a change in the price of that product.
Elasticity of supply works similar to elasticity of demand. If a change in price results in a large change in the quantity supplied, supply is considered elastic. On the other hand, if a great change in price brings about a small change in the quantity supplied, supply is called inelastic.
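The same percentage-change calculation can be reused for supply; in this invented example a 10% price rise calls forth 25% more output, so supply is elastic:

```python
def supply_elasticity(q_old, q_new, p_old, p_new):
    return ((q_new - q_old) / q_old) / ((p_new - p_old) / p_old)

# Hypothetical: price rises from 50 to 55 (+10%); quantity supplied rises from 200 to 250 (+25%).
e = supply_elasticity(200, 250, 50, 55)
print(f"{e:.2f}:", "elastic" if e > 1 else "inelastic")   # 2.50: elastic
```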
Here are the determinants of price elasticity of supply:
the ability of producers to change the amount of goods they produce
time period needed to alter the output.
Elasticity of supply is different in the short run and the long run. The quantity of a product supplied in the short run differs from the amount produced, as manufacturers have stocks of finished products as well as raw materials which they have to build up or reduce. In the long run quantity supplied and quantity produced are equal but it takes time to adjust supply to current demand and going prices. For example, supply of many goods can be increased over time by allocating alternative resources, investing in an expansion of production capacity, or developing competitive products that can substitute for hot items. Hence, supply is more elastic in the long run than in the short run.
| https://www.ukessays.com/essays/economics/the-market-forces-of-supply-and-demand-economics-essay.php | 21
23 | Black Thursday is a term used to identify 24 October 1929, the day when the stock market in the United States plummeted, setting off what is known as the Wall Street Crash of 1929. On this date, many fortunes were lost, leading to a widespread panic that resulted in many people attempting to pull their assets out of banks and other lending institutions. This action in turn resulted in the closing of many banks, which only served to increase the panic, and led to the Great Depression of the 1930s.
Prior to Black Thursday, citizens in the United States had enjoyed a period of great prosperity. The nation emerged from World War I in an excellent economic condition, which led to the creation and expansion of a number of business enterprises. People who had never considered investing in the past began to purchase stocks and bonds, some of which had dubious backing. At first, this was not a problem, as older ventures were funded with the proceeds from newer ones, allowing a number of people to amass considerable assets, at least on paper.
With the economic crisis that began to develop in the United States and elsewhere during the latter part of the 1920s, some concerns about the backing of some stocks and bonds began to appear. With relatively little in the way of government regulation of banking and investment marketing, these practices were not considered illegal, just risky. However, as economic issues continued to proliferate, some of the weaker options began to falter, losing money for their investors. The end result was the collapse of the stock market in the United States on Black Thursday, and the resulting economic woes that had an impact on other investment markets around the world.
For some, Black Thursday was the day that everything they had worked for over the years was lost. People not only lost bank balances and stock portfolios, but in many cases also lost homes and farms when they were unable to keep up the mortgage payments on their properties. While many people found ways to cope and even to begin to build a more solid financial foundation, others were left homeless, drifting across the country in a search for work, food, and shelter. A number of suicides were attributed to the collapse of the stock market on Black Thursday.
As a response to the circumstances that led up to Black Thursday, the governments of the United States and other nations began to implement more stringent requirements for investors, as well as regulations on how various investment markets could function. These safeguards have helped to prevent subsequent shifts in the world economy from creating the same degree of destruction that was witnessed back in 1929. Over time, economists have studied the events that led to the stock market collapse and sought to apply the lessons learned to contemporary situations, occasionally recommending additional forms of regulation in order to maintain as much economic stability as possible, even under adverse circumstances. | https://www.wisegeek.net/in-finance-what-is-black-thursday.htm | 21 |
This article is about modern eastern Poland. For the region that was annexed by the Soviet Union in 1945, see Kresy.
Eastern Poland is a macroregion in Poland comprising Lublin, Podkarpackie, Podlaskie, Świętokrzyskie and Warmian-Masurian voivodeships.
The make-up of the distinct macroregion is based not only on geographical criteria but also on economic ones: in 2005, these five voivodeships had the lowest GDP per capita in the enlarged European Union. On this basis, the macroregion is subject to special additional support from European funds under the Eastern Poland Economic Promotion Programme over 2007–2013.
In 2012–2013, the macroregion was the subject of an advertising campaign, Why didn't you invest in Eastern Poland?, which was to raise awareness of and increase investment in the region.
In December 2005 the European Council decided to grant Poland an additional 882 million euros from the European Union budget (107 euros per inhabitant of each of the Eastern Poland voivodeships, recognised as the regions with the lowest GDP per capita on the basis of Eurostat data from 2002) under the European Regional Development Fund (ERDF).
| https://wikimili.com/en/Eastern_Poland | 21
15 | The Three Factors of Production
One of the central characteristics of this course is its focus on land as a distinctive factor of production, which must be considered separately from the other two factors, capital and labor. This is a point that modern-day economics de-emphasizes, or even denies outright. Why is that? Could it be that land was an important economic factor, way back when — but today's social complexity and advanced technology have freed us from dependence on nature?
LAND: The entire material universe exclusive of people and their products. Everything physical (other than human beings) which is not the result of human effort is within the economic definition of land. This concept thus includes not merely the dry surface of the earth, but all natural materials, forces and opportunities. The trees in a virgin forest are land; in a cultivated forest they are wealth.
LABOR: All human exertion in the production of wealth and services. Mental toil is labor as well as muscular effort. All who participate in production by their mental and physical effort are laborers in the economic sense. Thus entrepreneurs as well as blue-collar workers are included.
CAPITAL: Wealth used in the process of production, which includes wealth in the course of exchange. Capital is a subset of wealth (see definition below). Any item of wealth could be used as capital; it could be sold or used in production. This is implied in our definition of production, when we note that production is not completed until wealth reaches the final consumer. If an item of wealth is to be used as capital, its owner foregoes consuming it for that time. It's worth noting that capital is a secondary factor of production. Only the two primary factors, labor and land, are absolutely necessary. We know that wealth can be created without the use of capital, because capital is wealth. Wealth had to be created before people could choose to use some of it as capital.
Distinguishing the three factors of production is crucial to our analysis. Our most important objective in political economy is to understand the distribution of wealth in society. In order to do that, we need consistent, mutually exclusive definitions of the factors of production. Labor is only human exertion; capital is only physical products of human labor; land is only things not created by human labor. They are not convertible into each other. (For example: something can be built on land, but if the building is destroyed, the value of the bare land remains.) | http://www.businessjournals.org/blog/the-three-factors-of-production-12563.html | 21 |
24 | - Proto-Indo-European language (PIE) had a series of phonemes beyond those reconstructed with the comparative method.
- These phonemes, according to the most-accepted variant of the theory, were "laryngeal" consonants of an indeterminate place of articulation towards the back of the mouth.
The theory aims to:
- Produce greater regularity in the reconstruction of PIE phonology than is possible with the comparative method alone.
- Extend the general occurrence of the Indo-European ablaut to syllables with reconstructed vowel phonemes other than *e or *o.
In its earlier form (see below), the theory postulated two sounds in PIE. Combined with a reconstructed *e or *o, the sounds produce vowel phonemes that would not otherwise be predicted by the rules of ablaut. The theory received considerable support after the decipherment of Hittite, which revealed it to be an Indo-European language.
Many Hittite words were shown to be derived from PIE, with a phoneme represented as ḫ corresponding to one of the hypothetical PIE sounds. Subsequent scholarship has established a set of rules by which an ever-increasing number of reflexes in daughter languages may be derived from PIE roots. The number of explanations thus achieved and the simplicity of the postulated system have both led to widespread acceptance of the theory.
In its most widely accepted version, the theory posits three phonemes in PIE: h₁, h₂ and h₃ (see below). Daughter languages other than Anatolian did not inherit the laryngeals themselves, but only the sounds derived from their merger with PIE short vowels and their subsequent loss.
The phonemes are now recognised as consonants, related to articulation in the general area of the larynx, where a consonantal gesture may affect vowel quality. They are regularly known as laryngeals, but the actual place of articulation of each consonant remains a matter of debate (see below).
The laryngeals get their name because they were believed by Hermann Möller and Albert Cuny to have had a pharyngeal, epiglottal, or glottal place of articulation, involving a constriction near the larynx. While this is still possible, many linguists now think of "laryngeals", or some of them, as having been velar or uvular.
The evidence for their existence is mostly indirect, as will be shown below, but the theory serves as an elegant explanation for a number of properties of the PIE vowel system that made no sense until the theory, such as the "independent" schwas (as in *pəter- 'father'). Also, the hypothesis that PIE schwa *ə was actually a consonant, not a vowel, provides an elegant explanation for some apparent exceptions to Brugmann's law in Indic languages.
The beginnings of the theory were proposed by Ferdinand de Saussure in 1879, in an article chiefly devoted to something else altogether (demonstrating that *a and *o were separate phonemes in PIE).
In the course of his analysis, Saussure proposed that what had then been reconstructed as long vowels *ā and *ō, alternating with *ǝ, was actually an ordinary type of PIE ablaut. That is, it was an alternation between e-grade and zero grade like in "regular" ablaut (further explanations below), but followed by a previously unidentified element. This "element" accounted for both the changed vowel color and the lengthening (short *e becoming long *ā or *ō).
So, rather than reconstructing *ā, *ō and *ǝ as others had done before, Saussure proposed something like *eA alternating with *A and *eO with *O, where A and O represented the unidentified elements. Saussure called them simply coefficients sonantiques, which was the term for what are now in English more usually called resonants; that is, the six elements present in PIE which can be either consonants (nonsyllabic) or vowels (syllabic) depending on the sounds they are adjacent to: *y w r l m n.
These views were accepted by a few scholars, in particular Hermann Möller, who added important elements to the theory. Saussure's observations, however, did not achieve any general currency, as they were still too abstract and had little direct evidence to back them up.
This changed when Hittite was discovered and deciphered in the early 20th century. Hittite phonology included two sounds written with symbols from the Akkadian syllabary conventionally transcribed as ḫ, as in te-iḫ-ḫi "I put, am putting". This consonant did not appear to be clearly related to any of the consonants then reconstructed for PIE, and various unsatisfactory proposals were made to explain this consonant in terms of the PIE consonant system as it had then been reconstructed.
It remained for Jerzy Kuryłowicz (ə indoeuropéen et ḫ hittite, 1927; Études indoeuropéennes I, 1935) to propose that these sounds lined up with Saussure's conjectures. He suggested that the unknown consonant of Hittite was in fact a direct reflex of the coefficients sonantiques that Saussure had proposed.
Their appearance explained some other matters as well; they explained, for example, why verb roots containing only a consonant and a vowel always have long vowels. For example, in *dō- "give", the new consonants allowed linguists to decompose this further into *deh₃. This not only accounted for the patterns of alternation more economically than before (by requiring fewer types of ablaut), but also brought the structure of these roots into line with the basic PIE pattern which required roots to begin and end with a consonant.
The lateness of the discovery of these sounds by Indo-Europeanists is largely because Hittite and the other Anatolian languages are the only Indo-European languages where at least some of them are attested directly and consistently as consonantal sounds. Otherwise, their presence is to be inferred mostly through the effects they have on neighboring sounds, and on patterns of alternation that they participate in. When a laryngeal is attested directly, it is usually as a special type of vowel and not as a consonant, best exemplified in Greek where syllabic laryngeals (when they appeared next to only consonants) developed as such: *h₁ > e, *h₂ > a, and *h₃ > o.
Varieties of laryngeals
There are many variations of the laryngeal theory. Some scholars, such as Oswald Szemerényi, reconstruct just one laryngeal. Some follow Jaan Puhvel's reconstruction of eight or more (in his contribution to Evidence for Laryngeals, ed. Werner Winter).
Basic Laryngeal Set
Most scholars work with a basic three:
- *h₁, the "neutral" laryngeal
- *h₂, the "a-coloring" laryngeal
- *h₃, the "o-coloring" laryngeal
Some scholars suggest the existence of a fourth consonant, *h₄, which differs from *h₂ in not being reflected as Anatolian ḫ but being reflected, to the exclusion of all other laryngeals, as Albanian h when word-initial before an originally stressed vowel.
E.g. PIE *h₄órǵʰiyeh₂ "testicle" yields Albanian herdhe "testicle" but Hittite arki- "testicle" whereas PIE *h₂ŕ̥tkos "bear" yields Alb. ari "bear" but Hittite hart(ag)ga- (=/hartka-/) "cultic official, bear-person".
- *h₁ Doublet
Another such theory, but much less generally accepted, is Winfred P. Lehmann's view, on the basis of inconsistent reflexes in Hittite, that *h₁ was actually two separate sounds. (He assumed that one was a glottal stop and the other a glottal fricative.)
Direct Evidence for Laryngeals
Some direct evidence for laryngeal consonants comes from Anatolian: PIE *a is a fairly rare sound, and in an uncommonly large number of good etymologies it is word-initial. Thus PIE (traditional) *anti "in front of and facing" > Greek antí "against"; Latin ante "in front of, before"; Sanskrit ánti "near; in the presence of". But in Hittite there is a noun ḫants "front, face", with various derivatives (ḫantezzi "first", and so on), pointing to a PIE root-noun *h₂ent- "face" (of which *h₂enti would be the locative singular). (It does not necessarily follow that all reconstructed forms with initial *a should automatically be rewritten *h₂e.)
Similarly, the traditional PIE reconstruction for 'sheep' is *owi- (a y-stem, not an i-stem) whence Sanskrit ávi-, Latin ovis, Greek ὄϊς. But Luwian has ḫawi-, indicating instead a reconstruction *h₃ewis.
Considerable debate still surrounds the pronunciation of the laryngeals and various arguments have been given to pinpoint their exact place of articulation. Firstly the effect these sounds have had on adjacent phonemes is well documented. The evidence from Hittite and Uralic is sufficient to conclude that these sounds were "guttural" or pronounced rather back in the vocal tract. The same evidence is also consistent with the assumption that they were fricative sounds (as opposed to approximants or stops), an assumption which is strongly supported by the behaviour of laryngeals in consonant clusters.
It has been suggested by Beekes (1995) that *h₁ is a glottal stop [ʔ]. However, Winfred P. Lehmann instead theorized, based on inconsistent reflexes in Hittite, that there were two *h₁ sounds: a glottal stop [ʔ] and an h sound [h] as in English hat.
Jens Elmegård Rasmussen (1983) suggested a consonantal realization for *h₁ as the voiceless glottal fricative [h] with a syllabic allophone [ə] (mid central unrounded vowel). This is supported by the closeness of [ə] to [e] (with which it coalesces in Greek), its failure (unlike *h₂ and *h₃) to create an auxiliary vowel in Greek and Tocharian when it occurs between a semivowel and a consonant, and the typological likelihood of an [h] given the presence of aspirated consonants in PIE.
In 2004, Alwin Kloekhorst argued that the Hieroglyphic Luwian sign no. 19 (𔐓, conventionally transcribed á) stood for /ʔa/ (distinct from /a/, sign no. 450: 𔗷 a) and represents the reflex of */h₁/; this would support the hypothesis that */h₁/, or at least some cases of it, was [ʔ]. Later, Kloekhorst (2006) claimed that also Hittite preserves PIE *h₁ as a glottal stop [ʔ], visible in words like Hittite e-eš-zi 'he is' < PIE *h₁és-ti, where an extra initial vowel sign is used (so-called plene spelling). This hypothesis has met with serious criticism (e.g. Rieken (2010), Melchert (2010) and Weeden (2011). Recently, however, Simon (2010) has supported Kloekhorst's thesis by suggesting that plene spelling in Cuneiform Luwian can be explained in a similar way. Additionally, Simon's 2013 article revises the Hieroglyphic Luwian evidence and concludes that "although some details of Kloekhorst's arguments could not be maintained, his theory can be confirmed."
An occasionally advanced idea that the laryngeals were dorsal fricatives corresponding directly to the three traditionally reconstructed series of dorsal stops ("palatal", velar, and labiovelar) suggests a further possibility, a palatal fricative [ç].
From what is known of such phonetic conditioning in contemporary languages, notably Semitic languages, *h₂ (the "a-colouring" laryngeal) could have been a pharyngeal fricative such as [ħ] and [ʕ]. Pharyngeal consonants (like the Arabic letter ح (ħ) as in Muħammad) often cause a-coloring in the Semitic languages. Uvular fricatives, however, may also colour vowels, thus [χ] is also a noteworthy candidate. Weiss (2016) suggests that this was the case in Proto-Indo-European proper, and that a shift from uvular into pharyngeal [ħ] may have been a common innovation of the non-Anatolian languages (before the consonant's eventual loss). Rasmussen (1983) suggested a consonantal realization for *h₂ as a voiceless velar fricative [x], with a syllabic allophone [ɐ], i.e. a near-open central vowel.
Likewise it is generally assumed that *h₃ was rounded (labialized) due to its o-coloring effects. It is often taken to have been voiced based on the perfect form *pi-bh₃- from the root *peh₃ "drink". Rasmussen has chosen a consonantal realization for *h₃ as a voiced labialized velar fricative [ɣʷ], with a syllabic allophone [ɵ], i.e. a close-mid central rounded vowel. Kümmel instead suggests [ʁ].
Support for theory from daughter languages
The hypothetical existence of laryngeals in PIE finds support in the body of daughter language cognates which can be most efficiently explained through simple rules of development.
Direct reflexes of laryngeals
Reflexes of h₂ in Anatolian (PIE root, meaning, Anatolian reflex, and cognates):
- *peh₂-(s)- 'protect': Hittite paḫḫs-; cognates Sanskrit pā́ti, Latin pascere (pastus), Greek patéomai
- *dʰewh₂- 'breath/smoke': Hittite tuḫḫāi-; cognates Sanskrit dhūmá-, Latin fūmus, Greek thūmos
- *h₂ent- 'front': Hittite ḫant-; cognates Sanskrit ánti, Latin ante, Greek antí
- *h₂erǵ- 'white/silver': Hittite ḫarki-; cognates Sanskrit árjuna, Latin argentum, Greek árguron, Tocharian A ārki
- *h₂owi- 'sheep': Luwian hawi-; cognates Sanskrit ávi-, Latin ovis, Greek ó(w)is
- *péh₂wr̥ 'fire': Hittite paḫḫur, Luwian pāḫur; cognates English fire, Tocharian B puwar, Greek pûr
- *h₂wéh₁n̥t- 'wind': Hittite ḫūwant-; cognates English wind, Tocharian A want, Latin ventus, Greek aént-, Sanskrit vāt-
- *h₂stér- 'star': Hittite ḫasterz; cognates English star, Sanskrit stā́, Latin stella, Greek astḗr
- *h₂ŕ̥tḱo- 'bear': Hittite ḫartaggaš; cognates Sanskrit ṛ́kṣa, Latin ursus, Greek árktos
- *h₂ewh₂os 'grandfather': Hittite ḫuḫḫa-, Luwian ḫuḫa-, Lycian χuge-; cognates Gothic awo, Latin avus, Armenian haw
- *h₁ésh₂r̥ 'blood': Hittite ēšḫar, Luwian āšḫar; cognates Greek éar, Latin sanguīs, Armenian aryun, Latvian asinis, Tocharian A ysār
Some Hittitologists have also proposed that "h₃" was preserved in Hittite as "ḫ", although only word initially and after a resonant. Kortlandt holds that "h₃" was preserved before all vowels except "*o". Similarly, Kloekhorst believes they were lost before resonants as well.
Reflexes of h₃ in Anatolian (PIE root, meaning, Anatolian reflex, and cognates):
- *welh₃- 'to hit': Hittite walḫ-; cognates Latin vellō, Greek ealōn
- *h₃esth₁ 'bone': Hittite ḫaštāi; cognates Latin os, Greek ostéon, Sanskrit ásthi
- *h₃erbʰ- 'to change status': Hittite ḫarp-; cognates Latin orbus, Greek orphanós
- *h₃eron- 'eagle': Hittite ḫara(n)-; cognates Gothic ara, Greek ὄρνῑς
- *h₃pus- 'to have sex': Hittite ḫapuš-; cognate Greek opuíō
Reconstructed instances of *kw in Proto-Germanic have been explained as reflexes of PIE *h₃w (and possibly *h₂w), a process known as Cowgill's law. The proposal has been challenged but is defended by Don Ringe.
In the Albanian language, a minority view proposes that some instances of word-initial h continue a laryngeal consonant.
PIE *h₂erǵʰi- 'testicles': Albanian herdhe; cognate Greek orkhis
In Western Iranian
Martin Kümmel has proposed that some initial [x] and [h] in contemporary Western Iranian languages, commonly thought to be prothetic, are instead direct survivals of *h₂, lost in epigraphic Old Persian but retained in "marginal dialects" ancestral among others to Modern Persian.
- sic, with *h₁ (Kümmel's "h", versus "χ" = *h₂).
Proposed indirect reflexes
In all other daughter languages, comparison of the cognates can support only hypothetical intermediary sounds derived from PIE combinations of vowels and laryngeals. Some indirect reflexes are required to support the examples above where the existence of laryngeals is uncontested.
PIE sequence > intermediary sound > reflexes:
- eh₂ > ā: reflexes ā, a, ahh
- uh₂ > u: reflexes ū, uhh
- h₂e > a: reflexes a, ā
- h₂o > o: reflexes o, a
The proposals in this table account only for attested forms in daughter languages. Extensive scholarship has produced a large body of cognates which may be identified as reflexes of a small set of hypothetical intermediary sounds, including those in the table above. Individual sets of cognates are explicable by other hypotheses, but the sheer bulk of data and the elegance of the laryngeal explanation have led to widespread acceptance in principle.
Vowel coloration and lengthening
In the proposed Anatolian-language reflexes above, only some of the vowel sounds reflect PIE *e. In the daughter languages in general, many vowel sounds are not obvious reflexes. The theory explains this as the result of
- 1 H-coloration. PIE *e is 'coloured' (i.e. its sound-value is changed) before or after h₂ and h₃, but not when next to h₁.
- Laryngeal precedes: h₁e > h₁e; h₂e > h₂a; h₃e > h₃o
- Laryngeal follows: eh₁ > eh₁; eh₂ > ah₂; eh₃ > oh₃
- 2 H-loss. Any of the three laryngeals (symbolised here as H) is lost before a short vowel. Laryngeals are also lost before another consonant (symbolised here as C), with consequent lengthening of the preceding vowel.
- Before a vowel: He > e; Ha > a; Ho > o; Hi > i; Hu > u
- Before a consonant: eHC > ēC; aHC > āC; oHC > ōC; iHC > īC; uHC > ūC
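As a toy illustration of how the two rules interact, the following Python sketch applies H-coloration and then H-loss to a few reconstructed strings. The ASCII notation (h1/h2/h3 for the laryngeals, ':' marking vowel length) and the simplified rule set are assumptions made for the example, not a full model of PIE phonology.

```python
import re

# H-coloration: *e is recoloured next to h2 (to a) or h3 (to o); h1 does not colour.
COLOUR = {"h2": ("e", "a"), "h3": ("e", "o")}

def colour(form: str) -> str:
    for lar, (old, new) in COLOUR.items():
        form = form.replace(lar + old, lar + new)   # laryngeal precedes the vowel
        form = form.replace(old + lar, new + lar)   # laryngeal follows the vowel
    return form

def drop_laryngeals(form: str) -> str:
    # H-loss: vowel + H before a consonant gives a long vowel (written ':');
    # any remaining laryngeal is simply deleted.
    form = re.sub(r"([aeiou])h[123](?=[^aeiou])", r"\1:", form)
    return re.sub(r"h[123]", "", form)

for pie in ["peh2s", "gwih2wos", "deh3r", "h2ent"]:
    print(pie, ">", drop_laryngeals(colour(pie)))
# peh2s > pa:s, gwih2wos > gwi:wos, deh3r > do:r, h2ent > ant
```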
The results of H-coloration and H-loss are recognised in daughter-language reflexes such as those in the table below
After vowels (PIE change, example root, and Latin, Sanskrit, Greek, Hittite reflexes):
- *iH > ī: *gʷih₂-wós: Latin vīvus, Sanskrit jīva, Greek bíos
- *uH > ū: *dʰweh₂-: Latin fūmus, Sanskrit dhūma, Greek thūmós, Hittite tuwaḫḫaš
- *oH > ō: *sóh₂wl̥: Latin sōl, Sanskrit sū́rya, Greek hḗlios
- *eh₁ > ē: *séh₁-mn̥: Latin sēmen, Greek hêma
- *eh₂ > ā: *peh₂-(s)-: Latin pāscere (pastus), Sanskrit pā́ti, Greek patéomai, Hittite paḫḫas
- *eh₃ > ō: *deh₃-r/n: Latin dōnum, Sanskrit dāna, Greek dôron
Before vowels:
- *Hi > i: *h₁íteros: Latin iterum, Sanskrit ítara
- *Hu > u: *pélh₁us: Latin plūs, Sanskrit purú-, Greek polús
- *Ho > o: *h₂owi-: Latin ovis, Sanskrit ávi, Greek ó(w)is, Luw. ḫawa
- *h₁e > e: *h₁ésti: Latin est, Sanskrit ásti, Greek ésti, Hittite ēšzi
- *h₂e > a: *h₂ent-
- *h₃e > o: *h₃érbh-: Latin orbus, Sanskrit arbhas, Greek orphanós, Hittite ḫarp-
Greek triple reflex vs schwa
In three phonological contexts, Greek reflexes display a regular vowel pattern that is absent from the supposed cognates in other daughter languages. Before the development of laryngeal theory, scholars compared Greek, Latin and Sanskrit (then considered the earliest daughter languages) and concluded the existence in these contexts of a schwa (ə) vowel in PIE, the so-called schwa indogermanicum. The contexts are: 1. between consonants (short vowel); 2. word initial before a consonant (short vowel); 3. combined with a liquid or nasal consonant [r, l, m, n] (long vowel).
- 1 Between consonants
- Latin displays a and Sanskrit i, whereas Greek displays e, a or o
- 2 Word initial before a consonant
- Greek alone displays e, a or o
- 3 Combined with a liquid or nasal
- Latin displays a liquid/nasal consonant followed by ā; Sanskrit displays either īr/ūr or the vowel ā alone; Greek displays a liquid/nasal consonant followed by ē, ā (in dialects such as Doric) or ō
Laryngeal theory provides a more elegant general description than reconstructed schwa by assuming that the Greek vowels are derived through vowel colouring and H-loss from PIE h₁, h₂, h₃, constituting a so-called triple reflex.
- *h₁: Greek *CHC > e, *HC- > e, *r̥H > rē, *l̥H > lē, *m̥H > mē, *n̥H > nē; Latin *CHC > a, *HC- > lost, *r̥H > rā, *l̥H > lā, *m̥H > mā, *n̥H > nā; Sanskrit *CHC > i, *HC- > lost, *r̥H > īr/ūr, *l̥H > īr/ūr, *m̥H > ā, *n̥H > ā
- *h₂: Greek *CHC > a, *HC- > a, *r̥H > rā, *l̥H > lā, *m̥H > mā, *n̥H > nā; Latin and Sanskrit show the same reflexes as for *h₁
- *h₃: Greek *CHC > o, *HC- > o, *r̥H > rō, *l̥H > lō, *m̥H > mō, *n̥H > nō; Latin and Sanskrit show the same reflexes as for *h₁
- 1 Between consonants
- An explanation is provided for the existence of three vowel reflexes in Greek corresponding to single reflexes in Latin and in Sanskrit
- 2 Word initial
- The assumption of *HC- in PIE yields an explanation for a dichotomy, exhibited below, between cognates in the Anatolian, Greek and Armenian languages, which show reflexes with initial a, and cognates in the remaining daughters, which lack that syllable. The theory assumes initial *h₂e in the PIE root, which has been lost in most of the daughter languages.
- *h₂ster- 'star': Hittite hasterza, Greek astḗr, Armenian astí, Latin stella, Sanskrit tár-
- *h₂wes 'live, spend time': Hittite huis- 'live', Greek á(w)esa 'I spent a night', Sanskrit vásati 'spend the night', English was
- 3 Combined with a liquid or nasal
- These presumed sonorant reflexes are completely distinct from those deemed to have developed from single phonemes.
- *r̥: Greek ra, ar; Latin or; Sanskrit r̥
- *l̥: Greek la, al; Latin ul; Sanskrit r̥
- *m̥: Greek a; Latin em; Sanskrit a
- *n̥: Greek a; Latin en; Sanskrit a
The phonology of the sonorant examples in the previous table can only be explained by the presence of adjacent phonemes in PIE. Assuming the phonemes to be a following h₁, h₂ or h₃ allows the same rules of vowel coloration and H-loss to apply to both PIE *e and PIE sonorants.
Support from Greek ablaut
The hypothetical values for sounds with laryngeals after H-coloration and H-loss (such as seen above in the triple reflex) draw much of their support from the regularisation they allow in ablaut patterns, specifically the uncontested patterns found in Greek.
Ablaut in the root
In the following list, each root shape is paired with an undisputed Greek cognate built on the e-grade of the root, together with its meaning. The four sonorants and the two semi-vowels are represented as individual letters, other consonants as C and the vowel or its absence as (V).
- C(V)C: πέτεσθαι 'fly'
- C(V)iC: λείπειν 'leave'
- C(V)uC: φεύγειν 'flee'
- C(V)r: δέρκομαι 'see clearly'
- C(V)l: πέλομαι 'become'
- C(V)m: τέμω 'cut'
- C(V)n: γένος
The reconstructed PIE e-grade and zero-grade of the above roots may be arranged as follows:
- C(V)C: e-grade *pet, zero grade *pt
- C(V)iC: *leikʷ, *likʷ
- C(V)uC: *bʰeug, *bʰug
- C(V)r: *derk, *drk
- C(V)l: *kʷel, *kʷl
- C(V)m: *tem, *tm
- C(V)n: *gen, *gn
An extension of the table to PIE roots ending in presumed laryngeals allows many Greek cognates to follow a regular ablaut pattern.
- C(V)h₁, *dʰeh₁ / *dʰh₁ 'put': I: ē in τίθημι (títhēmi); II: e in θετός (thetós)
- C(V)h₂, *steh₂ / *sth₂ 'stand': I: ā in Doric ἳστᾱμι (hístāmi); II: a in στατός (statós)
- C(V)h₃, *deh₃ / *dh₃ 'give': I: ō in δίδωμι (dídōmi); II: o in δοτός (dotós)
Ablaut in the suffix
The first row of the following table shows how uncontested cognates relate to reconstructed PIE stems with e-grade or zero-grade roots, followed by e-grade or zero-grade of the suffix –w-. The remaining rows show how the ablaut pattern of other cognates is preserved if the stems are presumed to include the suffixes h₁, h₂, h₃.
- *gen+w- / *gn+ew- / *gn+w- 'knee': I Hittite genu; II Gothic kniu; III γνύξ (gnuks)
- *gen+h₁- / *gn+eh₁- / *gn+h₁- 'become': I γενετήρ (genetḗr); II γνήσιος (gnḗsios); III γίγνομαι (gígnomai)
- *tel+h₂- / *tl+eh₂- / *tl+h₂- 'lift, bear': I τελαμών (telamṓn); II ἔτλᾱν (étlān); III τάλας (tálas)
- *ter+h₃- / *tr+eh₃- / *tr+h₃- 'bore, wound': II τιτρώσκω (titrṓskō); III ἔτορον (étoron)
In the preceding sections, forms in the daughter languages were explained as reflexes of laryngeals in PIE stems. Since these stems are judged to have contained only one vowel, the explanations involved H-loss either when a vowel preceded or when a vowel followed. However, the possibility of H-loss between two vowels is present when a stem combines with an inflexional suffix.
It has been proposed that PIE H-loss resulted in hiatus, which in turn was contracted to a vowel sound distinct from other long vowels by being disyllabic or of extra length.
Early Indo-Iranian disyllables
A number of long vowels in Avestan were pronounced as two syllables, and some examples also exist in early Sanskrit, particularly in the Rig Veda. These can be explained as reflexes of contraction following a hiatus caused by the loss of intervocalic H in PIE.
Proto-Germanic trimoric o
The reconstructed phonology of Proto-Germanic (P-Gmc), the presumed ancestor of the Germanic languages, includes a long *ō phoneme, which is in turn the reflex of PIE ā. As outlined above Laryngeal theory has identified instances of PIE ā as reflexes of earlier *h₂e, *eh₂ or *aH before a consonant.
However, a distinct long P-Gmc *ō phoneme has been recognised with a different set of reflexes in daughter Germanic languages. The vowel length has been calculated by observing the effect of the shortening of final vowels in Gothic.
|length||P-Gmc||Gothic|
|one mora||*a, *i, *u||∅, ∅, u|
|two morae||*ē, *ī, *ō, *ū||a, i?, a, u?|
|three morae||*ê, *ô||ē, ō|
Reflexes of trimoric or overlong *ô are found in the final syllable of nouns or verbs, and are thus associated with inflectional endings. Thus four P-Gmc sounds are proposed, shown here with their Gothic reflexes:
|P-Gmc||Gothic reflex|
|bimoric oral *ō||-a|
|bimoric nasal *ō̜||-a|
|trimoric oral *ô||-ō|
|trimoric nasal *ǫ̂||-ō|
A somewhat different contrast is observed in endings with final *z:
|P-Gmc||Gothic reflex|
|bimoric *ōz||-ōs|
|trimoric *ôz||-ōs|
Laryngeal theory derives trimoric *ô from PIE sequences in two ways:
- by H-loss *oHo > *oo > *ô;
- by H-coloration and H-loss *eh₂e > *ae > *â > *ô.
|Trimoric ending||PIE||Reflexes||P-Gmc||Reflexes|
|all stems||*-oHom||Sanskrit -ām [often disyllabic in Rig Veda]; Greek -ῶν (ô̜:n)||*-ǫ̂||Gothic -ō; Old English -a|
| ||*-eh₂es||Sanskrit -ās||*-ôz||Gothic -ōs; Old English -a|
|Bimoric ending||PIE||Reflexes||P-Gmc||Reflexes|
|thematic verbs, 1st person singular||*-oh₂||Latin -ō||*-ō||Gothic -a; Old English -u|
| ||*-eh₂||Sanskrit -ā||*-ō||Gothic -a; Old English -u|
| ||*-eh₂m||Sanskrit -ām||*-ō̜||Gothic -a; Old English -e|
| ||*-eh₂ns||Sanskrit -ās; Latin *-ans > -ās||*-ōz||Gothic -ōs; Old English -e|
(Trimoric *ô is also reconstructed as word-final in contexts that are not explained by laryngeal theory.)
Balto-Slavic long vowel accent
The reconstructed phonology of the Balto-Slavic languages posits two distinct long vowels in almost exact correspondence to bimoric and trimoric vowels in Proto-Germanic. The Balto-Slavic vowels are distinguished not by length but by intonation; long vowels with circumflex accent correspond to P-Gmc trimoric vowels. A significant proportion of long vowels with acute accent (also described as with acute register) correspond to P-Gmc bimoric vowels. These correspondences have led to the suggestion that the split between them occurred in the last common ancestor of the two daughters.
It has been suggested that acute intonation was associated with glottalisation, a suggestion supported by glottalised reflexes in Latvian. This could lend support to a theory that laryngeal consonants developed into glottal stops before their disappearance in Balto-Slavic and Proto-Germanic.
H-loss adjacent to other sounds
After syllabic resonants
PIE resonants (sonorants) *r̥, *l̥, *m̥, *n̥ are predicted to become the consonantal allophones *r, *l, *m, *n when immediately followed by a vowel. Using R to symbolise any resonant (sonorant) and V for any vowel, *R̥V > *RV. Instances in the daughter languages of a vocalic resonant immediately followed by a vowel (*R̥V) are explained as reflexes of PIE *R̥HV, with a laryngeal between the resonant and the vowel giving rise to a vocalic allophone. This original vocalic quality was preserved following H-loss.
Next to semi-vowels
(see Holtzmann's law)
Laryngeal theory has been used to explain the occurrence of a reconstructed sound change known as Holtzmann's law or sharpening (German Verschärfung) in North Germanic and East Germanic languages. Existing theory explains that PIE semivowels *y and *w were doubled to P-Gmc *-yy- and *-ww-, and that these in turn became -ddj- and -ggw- respectively in Gothic and -ggj- and -ggw- in early North Germanic languages. However, existing theory had difficulty in predicting which instances of PIE semivowels led to sharpening and which failed to do so. The new explanation proposes that words exhibiting sharpening are derived from PIE words with laryngeals.
|PIE||early P-Gmc||later P-Gmc||Reflexes|
|*drewh₂yo||*trewwjaz||with sharpening: *triwwjaz||Gothic triggws; Old Norse tryggr|
| || ||without sharpening: *triuwjaz||Old English trēowe; Old High German gitriuwi|
Many of these techniques rely on the laryngeal being preceded by a vowel, and so they are not readily applicable to word-initial laryngeals except in Greek and Armenian. However, languages occasionally have compounds in which a medial vowel is unexpectedly lengthened or otherwise shows the effect of a following laryngeal. This shows that the second element originally began with a laryngeal, and that this laryngeal still existed at the time the compound was formed.
Laryngeals in the Uralic languages
Further evidence of the laryngeals has been found in Uralic languages. While Proto-Uralic and PIE have not been demonstrated to be genetically related, some word correspondences between Uralic and Indo-European have been identified as likely borrowings from very early Indo-European dialects to early Uralic dialects. One example is the widespread word family including on the Uralic side e.g. Hungarian méz, Finnish and Estonian mesi, met(e)-, Mari мӱ /my/, Komi ма /ma/ 'honey', suggesting Proto-Uralic *meti; and on the Indo-European side, English mead, Greek methu 'wine', German Met 'honey wine', Slavic medъ and Sanskrit mádhu 'honey' etc.
There are several criteria for dating such borrowings, the most reliable ones coming from historical phonology. For example, Finnic porsas, Erzya пурцос /purt͡sos/, Mokša пурьхц /pur̥ʲt͡s/ 'piglet' presuppose a common proto-form *porćas at an earlier stage of development. This is etymologized as a loanword from PIE *porḱ-, which gives Latin porcus 'hog', Slavic porsę 'pig', OE fearh (> Engl. farrow 'young pig'), Lithuanian par̃šas 'piglet, castrated boar'. Here the borrowing must predate the depalatalisation seen in the centum languages, as well as the later development of the palatal into Baltic *š (reflected as Finnic h in borrowings) or medial Iranian *c (reflected as Finnic t). If the PIE distinction between palatovelars and plain velars is instead reconstructed as one of velars and uvulars, then in place of the former condition a lower limit can also be set for the loan: it must postdate the satemization of *ḱ into a palatalized stop or affricate.
Work particularly associated with the scholar Jorma Koivulehto has identified a number of additions to the list of Finnic loanwords from an Indo-European source or sources; their particular interest lies in the apparent correlation of PIE laryngeals with three post-alveolar phonemes (or their later reflexes) in the Finnic forms. If correct, this would point to a great antiquity for the borrowings, since no attested Indo-European language neighbouring Uralic has consonants as reflexes of laryngeals, and it would bolster the idea that the laryngeals were phonetically distinctly consonantal.
However, Koivulehto's theories are not universally accepted and have been sharply criticized (e.g. by the Finno-Ugricist Eugene Helimski), because many of the reconstructions involve far-fetched hypotheses and the chronology is not in good agreement with the history of Bronze Age and Iron Age migrations in Eastern Europe established by archeologists and historians.
Three Uralic phonemes have been posited to reflect PIE laryngeals. In post-vocalic position, both of the post-alveolar fricatives that ever existed in Uralic are represented: first a possibly velar one, theoretically reconstructed much like the PIE laryngeals (conventionally marked *x), in the very oldest borrowings, and second a grooved one (*š as in shoe, becoming modern Finnic h) in some younger ones. The velar plosive k is the third reflex and the only one found word-initially. In intervocalic position the reflex k is probably younger than either of the two former ones. The fact that Finno-Ugric may have plosive reflexes for PIE laryngeals is to be expected under well-documented Finnic phonological behaviour and does not mean much for tracing the phonetic value of the PIE laryngeals (cf. Finnish kansa 'people' < PGmc *xansā 'company, troupe, party, crowd' (cf. German Hanse), Finnish kärsiä 'suffer, endure' < PGmc *xarđia- 'endure' (cf. E. hard), Finnish pyrkiä < PGmc. *wurk(i)ja- 'work, work for' etc.).
The correspondences do not differentiate between h₁, h₂ and h₃. Thus
- PIE laryngeals correspond to the PU laryngeal *x in wordstems like:
- Finnish na-inen 'woman' / naa-ras 'female' < PU *näxi-/*naxi- < PIE *[gʷnah₂-] = */gʷneh₂-/ > Sanskrit gnā́ 'goddess', OIr. mná (gen. of ben), ~ Greek gunē 'woman' (cognate to Engl. queen)
- Finnish sou-ta- ~ Samic *sukë- 'to row' < PU *suxi- < PIE *sewh-
- Finnish tuo- 'bring' ~ Samic *tuokë- ~ Tundra Nenets tāś 'give' < PU *toxi- < PIE *[doh₃-] = */deh₃-/ > Greek didōmi, Lat. dō-, Old Lith. dúomi 'give', Hittite dā 'take'
- Note the consonantal reflex /k/ in Samic.
- PIE laryngeals correspond to Finnic *h, whose normal origin is a Pre-Finnic fricative *š in wordstems like:
- Finnish rohto 'medical plant, green herb' < PreFi *rošto < PreG *groH-tu- > Gmc. *grōþu 'green growth' > Swedish grodd 'germ (shoot)'
- Old Finnish inhi-(m-inen) 'human being' < PreFi *inši- 'descendant' < PIE *ǵnh₁-(i)e/o- > Sanskrit jā́- 'born, offspring, descendant', Gmc. *kunja- 'generation, lineage, kin'
- PIE laryngeals correspond to Pre-Finnic *k in wordstems like:
- Finnish kesä 'summer' < PFS *kesä < PIE *h₁es-en- (*h₁os-en-/-er-) > Balto-Slavic *eseni- 'autumn', Gothic asans 'summer'
- Finnish kaski 'burnt-over clearing' < Proto-Finnic *kaski < PIE/PreG *[h₂a(h₁)zg-] = */h₂e(h₁)sg-/ > Gmc. *askōn 'ashes'
- Finnish koke- 'to perceive, sense' < PreFi *koki- < PIE *[h₃okw-ie/o] = */h₃ekw-ie/o/ > Greek opsomai 'look, observe' (cognate to Lat. oculus 'eye')
- Finnish kulke- 'to go, walk, wander' ~ Hungarian halad- 'to go, walk, proceed' < PFU *kulki- < PIE *kʷelH-e/o- > Greek pelomai '(originally) to be moving', Sanskrit cárati 'goes, walks, wanders (about)', cognate Lat. colere 'to till, cultivate, inhabit'
- Finnish teke- 'do, make' ~ Hungarian tëv-, të-, tesz- 'to do, make, put, place' < PFU *teki- < PIE *dʰeh₁ > Greek títhēmi, Sanskrit dádhāti 'put, place', but 'do, make' in the western IE languages, e.g. the Germanic forms do, German tun, etc., and Latin faciō (though OE dón could still mean "put" into Early Modern English, and the verb still can in Dutch and colloquial German).
This list is not exhaustive, especially when one also considers a number of etymologies with laryngeal reflexes in Finno-Ugric languages other than Finnish. For most cases no other plausible etymology exists. While some single etymologies may be challenged, the case for this oldest stratum itself seems conclusive from the Uralic point of view, and corresponds well with all that is known about the dating of the other most ancient borrowings and about contacts with Indo-European populations. Yet acceptance of this evidence is far from unanimous among Indo-European linguists; some even regard the hypothesis as controversial (see above).
PIE laryngeals and Proto-Semitic
Several linguists have posited a relationship between PIE and Semitic, beginning almost immediately after the discovery of Hittite. Among these was Hermann Möller, though a few, such as Richard Lepsius in 1836, had argued for such a relationship long before the 20th century. The postulated correspondences between the IE laryngeals and the laryngeal consonants of Semitic have been taken as further evidence of the laryngeals' existence. Given here are a few lexical comparisons between the two proto-languages.
- Semitic ʼ-b-y 'to want, desire' ~ PIE *[hyebʰ-] 'to fuck'
- Semitic ʼ-m-m/y ~ PIE *[h₁em-] 'to take'
- Semitic ʼin-a 'in', 'on', 'by' ~ PIE *[h₁en-] > Sanskrit ni, ~ Greek enōpḗ
- Semitic ʼanāku ~ PIE *h₁eǵ(hom)- 'I'
- Semitic ʻ-d-w 'to pass (over), move, run' ~ PIE *[weh₂dʰ-] 'to pass through'
- Semitic ʻ-l-y 'to rise, grow, go up, be high' ~ PIE *[h₂el-] 'to grow, nourish'
- Semitic ʻ-k-w: Arabic ʻakā 'to rise, be big' ~ PIE *[h₂ewg-] 'to grow, nourish'
- Semitic ʻl 'next, in addition' ~ PIE *[h₂el-] 'in'
- Semitic: Arabic ʻanan 'side', ʻan 'from, for; upon; in' ~ PIE *[h₂en h₂e/u-] 'on'
Explanation of ablaut and other vowel changes
A feature of Proto-Indo-European morpheme structure was a system of vowel alternations termed ablaut ("alternate sound") by early German scholars and still generally known by that term (except in French, where the term apophonie is preferred). Several different such patterns have been discerned, but the commonest one, by a wide margin, is e/o/∅ alternation found in a majority of roots, in many verb and noun stems, and even in some affixes (the genitive singular ending, for example, is attested as *-es, *-os, and *-s). The different states are called ablaut grades; e-grade and o-grade are together "full grades", and the total absence of any vowel is "zero grade".
Thus the root *sed- "to sit (down)" (roots are traditionally cited in the e-grade, if they have one) has three different shapes: *sed-, *sod-, and *sd-. This kind of patterning is found throughout the PIE root inventory and is transparent:
- *sed-: in Latin sedeō "am sitting", Old English sittan "to sit" < *set-ja- (with umlaut) < *sed-; Greek hédrā "seat, chair" < *sed- (Greek systematically turns word-initial prevocalic s to h, i.e. rough breathing).
- *sod-: in Latin solium "throne" (in Latin l sporadically replaces d between vowels, said by Roman grammarians to be a Sabine trait) = Old Irish suideⁿ /suðʲe/ "a sitting" (all details regular from PIE *sod-yo-m); Gothic satjan = Old English settan "to set" (causative) < *sat-ja- (umlaut again) < PIE *sod-eye-. PIE *se-sod-e "sat" (perfect) > Sanskrit sa-sād-a per Brugmann's law.
- *sd-: in compounds, as *ni- "down" + *sd- = *nisdos "nest": English nest < Proto-Germanic *nistaz, Latin nīdus < *nizdos (all regular developments); Slavic gnězdo < *g-ně-sd-os. The 3pl (third person plural) of the perfect would have been *se-sd-ṛ whence Indo-Iranian *sazdṛ, which gives (by regular developments) Sanskrit sedur /seːdur/.
Roots *dō and *stā
In addition to the commonplace roots of consonant + vowel + consonant structure, there are also well-attested roots like *dhē- "put, place" and *dō- "give" (mentioned above): these end in a vowel, which is always long in the categories where roots like *sed- have full grades; and in those forms where zero grade would be expected, if before an affix beginning with a consonant, we find a short vowel, reconstructed as *ə, or schwa (more formally, schwa primum indogermanicum). An "independent schwa", like the one in PIE *pǝter- "father", can be identified by the distinctive cross-language correspondences of this vowel that are different from the other five short vowels. (Before an affix beginning with a vowel, there is no trace of a vowel in the root, as shown below.)
Whatever caused a short vowel to disappear entirely in roots like *sed-/*sod-/*sd-, it was a reasonable inference that a long vowel under the same conditions would not quite disappear, but would leave a sort of residue. This residue is reflected as i in Indic while dropping in Iranian; it gives variously e, a, o in Greek; it mostly falls together with the reflexes of PIE *a in the other languages (always bearing in mind that short vowels in non-initial syllables undergo various developments in Italic, Celtic, and Germanic):
- *dō- "give": in Latin dōnum "gift" = Old Irish dán /daːn/ and Sanskrit dâna- (â = ā with tonic accent); Greek dí-dō-mi (reduplicated present) "I give" = Sanskrit dádāmi; Slavic damъ 'I give'. But in the participles, Greek dotós "given" = Sanskrit ditá-, Latin datus all < *də-tó-.
- *stā- "stand": in Greek hístēmi (reduplicated present, regular from *si-stā-), Sanskrit a-sthā-t aorist "stood", Latin testāmentum "testimony" < *ter-stā- < *tri-stā- ("third party" or the like), Slavic sta-ti 'to stand'. But Sanskrit sthitá-"stood", Greek stásis "a standing", Latin supine infinitive statum "to stand".
Conventional wisdom lined up roots of the *sed- and *dō- types as follows:
|Full Grades||Weak Grades||Meaning|
|sed-, sod-||sd-||"sit"|
|dō-||də-||"give"|
|stā-||stə-||"stand"|
But there are other patterns of "normal" roots, such as those ending with one of the six resonants (*y w r l m n), a class of sounds whose peculiarity in Proto-Indo-European is that they are both syllabic (vowels, in effect) and consonants, depending on what sounds are adjacent:
Root *bher-/bhor-/bhṛ- ~ bhr
- *bher-: in Latin ferō = Greek phérō, Avestan barā, Sanskrit bharāmi, Old Irish biur, Old Norse ber, Old English bere all "I carry"; Slavic berǫ 'I take'; Latin ferculum "bier, litter" < *bher-tlo- "implement for carrying".
- *bhor-: in Gothic and Scandinavian barn "child" (= English dial. bairn), Greek phoréō "I wear [clothes]" (frequentative formation, *"carry around"); Sanskrit bhâra- "burden" (*bhor-o- via Brugmann's law); Slavic vyborъ 'choice'.
- *bhṛ- before consonants: Sanskrit bhṛ-tí- "a carrying"; Gothic gabaurþs /gaˈbɔrθs/, Old English ġebyrd /jəˈbyɹd/, Old High German geburt all "birth" < *gaburdi- < *bhṛ-tí; Slavic bьrati 'to take'.
- *bhr- before vowels: Ved bibhrati 3pl. "they carry" < *bhi-bhr-ṇti; Greek di-phrós "chariot footboard big enough for two men" < *dwi-bhr-o-.
Saussure's insight was to align the long-vowel roots like *dō-, *stā- with roots like *bher-, rather than with roots of the *sed- sort. That is, he treated "schwa" not as a residue of a long vowel but, like the *r of *bher-/*bhor-/*bhṛ-, as an element that was present in the root in all grades, but which in full-grade forms coalesced with an ordinary e/o root vowel to make a long vowel, with "coloring" (changed phonetics) of the e-grade into the bargain; the mystery element was seen by itself only in zero-grade forms:
|Full Grades||Zero Grade||Meaning|
|bher-, bhor-||bhṛ- / bhr-||"carry"|
|deX, doX-||dẊ- / dX-||"give"|
(Ẋ = syllabic form of the mystery element)
Saussure treated only two of these elements, corresponding to our *h₂ and *h₃. Later it was noticed that the explanatory power of the theory, as well as its elegance, were enhanced if a third element were added, our *h₁, which has the same lengthening and syllabifying properties as the other two but has no effect on the color of adjacent vowels. Saussure offered no suggestion as to the phonetics of these elements; his term for them, "coefficients sonantiques", was not however a fudge, but merely the term in general use for glides, nasals, and liquids (i.e., the PIE resonants) as in roots like *bher-.
As mentioned above, in forms like *dwi-bhr-o- (etymon of Greek diphrós, above), the new "coefficients sonantiques" (unlike the six resonants) have no reflexes at all in any daughter language. Thus the compound *mṇs-dheH- "to 'fix thought', be devout, become rapt" forms a noun *mṇs-dhH-o- seen in Proto-Indo-Iranian *mazdha- whence Sanskrit medhá- /mēdha/ "sacrificial rite, holiness" (regular development as in sedur < *sazdur, above), Avestan mazda- "name (originally an epithet) of the greatest deity".
There is another kind of unproblematic root, in which obstruents flank a resonant. In the zero grade, unlike the case with roots of the *bher- type, the resonant is therefore always syllabic (being always between two consonants). An example would be *bhendh- "tie, bind":
- *bhendh-: in Germanic forms like Old English bindan "to tie, bind", Gothic bindan; Lithuanian beñdras "chum", Greek peĩsma "rope, cable" /pêːsma/ < *phenth-sma < *bhendh-smṇ.
- *bhondh-: in Sanskrit bandhá- "bond, fastening" (*bhondh-o-; Grassmann's law) = Old Icelandic bant, Old English bænd; Gothic band "he tied" < *(bhe)bhondh-e.
- *bhṇdh-: in Sanskrit baddhá- < *bhṇdh-tó- (Bartholomae's law), Old English gebunden, Gothic bundan; German Bund "league". (English bind and bound show the effects of secondary (Middle English) vowel lengthening; the original length is preserved in bundle.)
This is all straightforward and such roots fit directly into the overall patterns. Less so are certain roots that seem sometimes to go like the *bher- type, and sometimes to be unlike anything else, with (for example) long syllabics in the zero grades while at times pointing to a two-vowel root structure. These roots are variously called "heavy bases", "dis(s)yllabic roots", and "seṭ roots" (the last a term from Pāṇini's grammar, explained below).
Root *ǵen, *ǵon, *ǵṇn-/*ǵṇ̄
For example, the root "be born, arise" is given in the usual etymological dictionaries as follows:
- (A) *ǵen-, *ǵon-, *ǵṇn-
- (B) *ǵenə-, *ǵonə-, *ǵṇ̄-
The (A) forms occur when the root is followed by an affix beginning with a vowel; the (B) forms when the affix begins with a consonant. As mentioned, the full-grade (A) forms look just like the *bher- type, but the zero grades always and only have reflexes of syllabic resonants, just like the *bhendh- type; and unlike any other type, there is a second root vowel (always and only *ə) following the second consonant:
- (A) PIE *ǵenos- neut s-stem "race, clan" > Greek (Homeric) génos, -eos, Sanskrit jánas-, Avestan zanō, Latin genus, -eris.
- (B) Greek gené-tēs "begetter, father"; géne-sis < *ǵenə-ti- "origin"; Sanskrit jáni-man- "birth, lineage", jáni-tar- "progenitor, father", Latin genitus "begotten" < genatos.
- (A) Sanskrit janayati "beget" = Old English cennan /kennan/ < *ǵon-eye- (causative); Sanskrit jána- "race" (o-grade o-stem) = Greek gónos, -ou "offspring".
- (B) Sanskrit jajāna 3sg. "was born" < *ǵe-ǵon-e.
- (A) Gothic kuni "clan, family" = OE cynn /künn/, English kin; Rigvedic jajanúr 3pl.perfect < *ǵe-ǵṇn- (a relic; the regular Sanskrit form in paradigms like this is jajñur, a remodeling).
- (B) Sanskrit jātá- "born" = Latin nātus (Old Latin gnātus, and cf. forms like cognātus "related by birth", Greek kasí-gnētos "brother"); Greek gnḗsios "belonging to the race". (The ē in these Greek forms can be shown to be original, not Attic-Ionic developments from Proto-Greek *ā.)
On the term "seṭ". The Pāṇinian term "seṭ" (that is, sa-i-ṭ) is literally "with an /i/". This refers to the fact that roots so designated, like jan- "be born", have an /i/ between the root and the suffix, as we've seen in Sanskrit jánitar-, jániman-, janitva (a gerund). Cf. such formations built to "aniṭ" ("without an /i/") roots, such as han- "slay": hántar- "slayer", hanman- "a slaying", hantva (gerund). In Pāṇini's analysis, this /i/ is a linking vowel, not properly a part of either the root or the suffix. It is simply that some roots are, in effect, on the list of roots that (as we would put it) "take an -i-".
But historians have the advantage here: the peculiarities of alternation, the "presence of /i/", and the fact that the only vowel allowed in second place in a root happens to be *ə, are all neatly explained once *ǵenə- and the like were understood to be properly *ǵenH-. That is, the patterns of alternation, from the point of view of Indo-European, were simply those of *bhendh-, with the additional detail that *H, unlike obstruents (stops and *s) would become a syllable between two consonants, hence the *ǵenə- shape in the Type (B) formations, above.
The startling reflexes of these roots in zero grade before a consonant (in this case, Sanskrit ā, Greek nē, Latin nā, Lithuanian ìn) are explained by the lengthening of the (originally perfectly ordinary) syllabic resonant before the lost laryngeal, while the same laryngeal protects the syllabic status of the preceding resonant even before an affix beginning with a vowel: the archaic Vedic form jajanur cited above is structurally quite the same (*ǵe-ǵṇh₁-ṛ) as a form like *da-dṛś-ur "they saw" < *de-dṛḱ-ṛ.
Incidentally, redesigning the root as *ǵenH- has another consequence. Several of the Sanskrit forms cited above come from what look like o-grade root vowels in open syllables, but fail to lengthen to -ā- per Brugmann's law. All becomes clear when it is understood that in such forms as *ǵonH- before a vowel, the *o is not in fact in an open syllable. And in turn that means that a form like jajāna "was born", which apparently does show the action of Brugmann's law, is actually a false witness: in the Sanskrit perfect tense, the whole class of seṭ roots, en masse, acquired the shape of the aniṭ 3sing. forms. (See Brugmann's law for further discussion.)
There are also roots ending in a stop followed by a laryngeal, as *pleth₂-/*pḷth₂- "spread, flatten", from which Sanskrit pṛthú- "broad" masc. (= Avestan pərəθu-), pṛthivī- fem., Greek platús (zero grade); Skt. prathimán- "wideness" (full grade), Greek platamṓn "flat stone". The laryngeal explains (a) the change of *t to *th in Proto-Indo-Iranian, (b) the correspondence between Greek -a-, Sanskrit -i- and no vowel in Avestan (Avestan pərəθwī "broad" fem. in two syllables vs Sanskrit pṛthivī- in three).
- Caution has to be used in interpreting data from Indic in particular. Sanskrit remained in use as a poetic, scientific, and classical language for many centuries, and the multitude of inherited patterns of alternation of obscure motivation (such as the division into seṭ and aniṭ roots) provided models for coining new forms on the "wrong" patterns. There are many forms like tṛṣita- "thirsty" and tániman- "slenderness", that is, seṭ formations to unequivocally aniṭ roots; and conversely aniṭ forms like píparti "fills", pṛta- "filled", to securely seṭ roots (cf. the "real" past participle, pūrṇá-). Sanskrit preserves the effects of laryngeal phonology with wonderful clarity, but looks upon the historical linguist with a threatening eye: for even in Vedic Sanskrit, the evidence has to be weighed carefully with due concern for the antiquity of the forms and the overall texture of the data. (It is no help that Proto-Indo-European itself had roots which varied somewhat in their makeup, as *ǵhew- and *ǵhewd-, both "pour"; and some of these "root extensions" as they're called, for want of any more analytical term, are, unluckily, laryngeals.)
Stray laryngeals can be found in isolated or seemingly isolated forms; here the three-way Greek reflexes of syllabic *h₁, *h₂, *h₃ are particularly helpful, as seen below. (Comments on the forms follow.)
- *h₁ in Greek ánemos "wind" (cf. Latin animus "breath, spirit; mind", Vedic aniti "breathes") < *anə- "breathe; blow" (now *h₂enh₁-). Perhaps also Greek híeros "mighty, super-human; divine; holy", cf. Sanskrit iṣirá- "vigorous, energetic".
- *h₂ in Greek patḗr "father" = Sanskrit pitár-, Old English fæder, Gothic fadar, Latin pater. Also *meǵh₂ "big" neut. > Greek méga, Sanskrit máhi.
- *h₃ in Greek árotron "plow" = Welsh aradr, Old Norse arðr, Lithuanian árklas.
The Greek forms ánemos and árotron are particularly valuable because the verb roots in question are extinct in Greek as verbs. This means that there is no possibility of some sort of analogical interference, as for example happened in the case of Latin arātrum "plow", whose shape has been distorted by the verb arāre "to plow" (the exact cognate to the Greek form would have been *aretrum). It used to be standard to explain the root vowels of Greek thetós, statós, dotós "put, stood, given" as analogical. Most scholars nowadays probably take them as original, but in the case of "wind" and "plow", the argument can't even come up.
Regarding Greek híeros, the pseudo-participle affix *-ro- is added directly to the verb root, so *ish₁-ro- > *isero- > *ihero- > híeros (with regular throwback of the aspiration to the beginning of the word), and Sanskrit iṣirá-. There seems to be no question of the existence of a root *eysH- "vigorously move/cause to move". If the thing began with a laryngeal, and most scholars would agree that it did, it would have to be *h₁-, specifically; and that's a problem. A root of the shape *h₁eysh₁- is not possible. Indo-European had no roots of the type *mem-, *tet-, *dhredh-, i.e., with two copies of the same consonant. But Greek attests an earlier (and rather more widely attested) form of the same meaning, híaros. If we reconstruct *h₁eysh₂-, all of our problems are solved in one stroke. The explanation for the híeros/híaros business has long been discussed, without much result; laryngeal theory now provides the opportunity for an explanation which did not exist before, namely metathesis of the two laryngeals. It is still only a guess, but it is a much simpler and more elegant guess than the guesses available before.
The syllabic *h₂ in *ph₂ter- "father" might not really be isolated. Certain evidence shows that the kinship affix seen in "mother, father" etc. might actually have been *-h₂ter- instead of *-ter-. The laryngeal syllabified after a consonant (thus Greek patḗr, Latin pater, Sanskrit pitár-; Greek thugátēr, Sanskrit duhitár- "daughter") but lengthened a preceding vowel (thus, say, Latin māter "mother", frāter "brother") — even when the "vowel" in question was a syllabic resonant, as in Sanskrit yātaras "husbands' wives" < *yṆt- < *yṇ-h₂ter-.
Laryngeals in morphology
Like any other consonant, laryngeals feature in the endings of verbs and nouns and in derivational morphology, the only difference being the greater difficulty of telling what's going on. Indo-Iranian, for example, can retain forms that pretty clearly reflect a laryngeal, but there is no way of knowing which one.
The following is a rundown of laryngeals in Proto-Indo-European morphology.
- *h₁ is seen in the instrumental ending (probably originally indifferent to number, like English expressions of the type by hand and on foot). In Sanskrit, feminine i- and u-stems have instrumentals in -ī, -ū, respectively. In the Rigveda, there are a few old a-stems (PIE o-stems) with an instrumental in -ā; but even in that oldest text the usual ending is -enā, from the n-stems.
- Greek has some adverbs in -ē, but more important are the Mycenaean forms like e-re-pa-te "with ivory" (i.e. elephantē? -ě?)
- The marker of the neuter dual was *-iH, as in Sanskrit bharatī "two carrying ones (neut.)", nāmanī "two names", yuge "two yokes" (< yuga-i? *yuga-ī?). Greek to the rescue: the Homeric form ósse "the (two) eyes" is manifestly from *h₃ekʷ-ih₁ (formerly *okʷ-ī) via fully regular sound laws (intermediately *okʷye).
- *-eh₁- derives stative verb senses from eventive roots: PIE *sed- "sit (down)": *sed-eh₁- "be in a sitting position" (> Proto-Italic *sed-ē-ye-mos "we are sitting" > Latin sedēmus). It is clearly attested in Celtic, Italic, Germanic (the Class IV weak verbs), and Baltic/Slavic, with some traces in Indo-Iranian (In Avestan the affix seems to form past-habitual stems).
- It seems likely, though it is less certain, that this same *-h₁ underlies the nominative-accusative dual in o-stems: Sanskrit vṛkā, Greek lúkō "two wolves". (The alternative ending -āu in Sanskrit cuts a small figure in the Rigveda, but eventually becomes the standard form of the o-stem dual.)
- *-h₁s- derives desiderative stems as in Sanskrit jighāṃsati "desires to slay" < *gʷhi-gʷhṇ-h₁s-e-ti- (root *gʷhen-, Sanskrit han- "slay"). This is the source of Greek future tense formations and (with the addition of a thematic suffix *-ye/o-) the Indo-Iranian one as well: bhariṣyati "will carry" < *bher-h₁s-ye-ti.
- *-yeh₁-/*-ih₁- is the optative suffix for root verb inflections, e.g. Latin (old) siet "may he be", sīmus "may we be", Sanskrit syāt "may he be", and so on.
- *h₂ is seen as the marker of the neuter plural: *-h₂ in the consonant stems, *-eh₂ in the vowel stems. Much leveling and remodeling is seen in the daughter languages that preserve any ending at all, thus Latin has generalized *-ā throughout the noun system (later regularly shortened to -a), Greek generalized -ǎ < *-h₂.
- The categories "masculine/feminine" plainly did not exist in the most original form of Proto-Indo-European, and there are very few noun types which are formally different in the two genders. The formal differences are mostly to be seen in adjectives (and not all of them) and pronouns. Both types of derived feminine stems feature *h₂: a type that is patently derived from the o-stem nominals; and an ablauting type showing alternations between *-yeh₂- and *-ih₂-. Both are peculiar in having no actual marker for the nominative singular, and at least as far as the *-eh₂- type, two things seem clear: it is based on the o-stems, and the nom.sg. is probably in origin a neuter plural. (An archaic trait of Indo-European morpho-syntax is that plural neuter nouns construe with singular verbs, and quite possibly *yugeh₂ was not so much "yokes" in our sense, but "yokage; a harnessing-up".) Once that much is thought of, however, it is not easy to pin down the details of the "ā-stems" in the Indo-European languages outside of Anatolia, and such an analysis sheds no light at all on the *-yeh₂-/*-ih₂- stems, which (like the *eh₂-stems) form feminine adjective stems and derived nouns (e.g. Sanskrit devī- "goddess" from deva- "god") but unlike the "ā-stems" have no foundation in any neuter category.
- *-eh₂- seems to have formed factitive verbs, as in *new-eh₂- "to renew, make new again", as seen in Latin novāre, Greek neáō and Hittite ne-wa-aḫ-ḫa-an-t- (participle) all "renew" but all three with the pregnant sense of "plow anew; return fallow land to cultivation".
- *-h₂- marked the 1st person singular, with a somewhat confusing distribution: in the thematic active (the familiar -ō ending of Greek and Latin, and Indo-Iranian -ā(mi)), and also in the perfect tense (not really a tense in PIE): *-h₂e as in Greek oîda "I know" < *woyd-h₂e. It is the basis of the Hittite ending -ḫḫi, as in da-aḫ-ḫi "I take" < *-ḫa-i (original *-ḫa embellished with the primary tense marker with subsequent smoothing of the diphthong).
- *-eh₃ may be tentatively identified in a "directive case". No such case is found in Indo-European noun paradigms, but such a construct accounts for a curious collection of Hittite forms like ne-pi-ša "(in)to the sky", ták-na-a "to, into the ground", a-ru-na "to the sea". These are sometimes explained as o-stem datives in -a < *-ōy, an ending clearly attested in Greek and Indo-Iranian, among others, but there are serious problems with such a view, and the forms are highly coherent, functionally. And there are also appropriate adverbs in Greek and Latin (elements lost in productive paradigms sometimes survive in stray forms, like the old instrumental case of the definite article in English expressions like the more the merrier): Greek ánō "upwards", kátō "downwards", Latin quō "whither?", eō "to that place"; and perhaps even the Indic preposition/preverb â "to(ward)" which has no satisfactory competing etymology. (These forms must be distinguished from the similar-looking ones formed to the ablative in *-ōd and with a distinctive "fromness" sense: Greek ópō "whence, from where".)
Throughout its history, the laryngeal theory in its various forms has been subject to extensive criticism and revision.
The original argument of Saussure was not accepted by any of the Neogrammarians, the school, primarily based at the University of Leipzig, then reigning at the cutting-edge of Indo-European linguistics. Several of them attacked the Mémoire savagely. Osthoff's criticism was particularly virulent, often descending into personal invective.
For the first half-century of its existence, the laryngeal theory was widely seen as ‘an eccentric fancy of outsiders’. In Germany it was totally rejected. Among its early proponents were Hermann Möller, who extended Saussure's system with a third, non-colouring laryngeal, Albert Cuny, Holger Pedersen and Karl Oštir. The fact that these scholars were engaged in highly speculative long-range linguistic comparison further contributed to its isolation.
Although the founding fathers were able to provide some indirect evidence of a lost consonantal element (for example, the origin of the Indo-Iranian voiceless aspirates in *CH sequences and the ablaut pattern of the so-called heavy bases, *CeRə- ~ *CR̥̄- in the traditional formulation), the direct evidence so crucial for the Neogrammarian thinking was lacking. Saussure's structural considerations were foreign to the leading contemporary linguists.
After Kuryłowicz's convincing demonstration that the Hittite language preserved at least some of Saussure's coefficients sonantiques, the focus of the debate shifted. It was still unclear how many laryngeals were to be posited to account for the new facts and exactly what effects they had. Kuryłowicz, after a while, settled on four laryngeals, an approach further accepted by Sapir, Sturtevant, and through them much of American linguistics. The three-laryngeal system was defended, among others, by Walter Couvreur and Émile Benveniste. Many individual proposals were made, which assumed up to ten laryngeals (André Martinet). While some scholars, like Heinz Kronasser and Giuliano Bonfante, attempted to disregard Anatolian evidence altogether, the 'minimal' serious proposal (with roots in Pedersen's early ideas) was put forward by Hans Hendriksen, Louis Hammerich and later Ladislav Zgusta, who assumed a single /H/ phoneme without vowel-colouring effects.
By the 2000s, however, a widespread, though not unanimous, agreement had been reached in the field on reconstructing Möller's three laryngeals. One of the last major critics of this approach was Oswald Szemerényi, who subscribed to a theory similar to Zgusta's (Szemerényi 1996).
- Zair, N., The Reflexes of the Proto-Indo-European Laryngeals in Celtic (Brill, 2012), pp. 3-4.
- Mallory, J. P.; Adams, Douglas Q. (2006). The Oxford Introduction to Proto-Indo-European and the Proto-Indo-European World. Oxford University Press. p. 55. ISBN 978-0-19-929668-2.
- Encyclopedia of Indo-European culture By J. P. Mallory, Douglas Q. Adams Edition: illustrated Published by Taylor & Francis, 1997 ISBN 1-884964-98-2, ISBN 978-1-884964-98-5 pp. 9-10, 13-14, 55.
- Rasmussen (1999), p. 77
- Rasmussen (1999), p. 71
- Rasmussen (1999), p. 76
- Kloekhorst, Alwin (2004). "The Preservation of *h₁ in Hieroglyphic Luwian. Two Separate a-Signs". Historische Sprachforschung. 117: 26–49.
- Kloekhorst, Alwin (2006). "Initial Laryngeals in Anatolian". Historische Sprachforschung. 119: 77–108.
- Rieken, Elisabeth (2010). "Review of A. Kloekhorst, Etymological Dictionary of the Hittite Inherited Lexicon". Kratylos. 55: 125–33. doi:10.29091/KRATYLOS/2010/1/17.
- Melchert, Craig (2010). "Spelling of Initial /a-/ in Hieroglyphic Luwian". In Singer, Itamar (ed.). Ipamati kistamati pari tumatimis. Tel Aviv University: Institute of Archaeology. pp. 147–58.
- Weeden, Mark (2011). "Spelling, phonology and etymology in Hittite historical linguistics" (PDF). Bulletin of the School of Oriental and African Studies. 74: 59–76. doi:10.1017/s0041977x10000716.
- Simon, Zsolt (2010). "Das Problem der phonetischen Interpretation der anlautenden scriptio plena im Keilschriftluwischen". Babel und Bibel. 4: 249–65.
- Simon, Zsolt (2013). "Once again on the Hieroglyphic Luwian sign *19 〈á〉". Indogermanische Forschungen. 118 (2013): 1–22, page 17. doi:10.1515/indo.2013.118.2013.1.
- Watson, Janet C. E. (2002). The Phonology and Morphology of Arabic. Oxford Univ. Press. p. 46. ISBN 9780199257591. Retrieved 2012-03-18.
- Weiss, Michael (2016). "The Proto-Indo-European Laryngeals and the Name of Cilicia in the Iron Age". In Byrd, Andrew Miles; DeLisi, Jessica; Wenthe, Mark (eds.). Tavet Tata Satyam: Studies in honor of Jared H. Klein on the Occasion of His Seventieth Birthday. Ann Arbor: Beech Stave Press. pp. 331–340.
- Kümmel, Martin (November 2012). "On historical phonology, typology, and reconstruction" (PDF). Enlil.ff.cuni.cz. Institute of Comparative Linguistics, Charles University, Prague. p. 4. Retrieved 17 June 2019.
- Clackson p. 56.
- Clackson p. 58.
- Ringe pp. 68–70
- Kümmel, Martin (2016). "Is ancient old and modern new? Fallacies of attestation and reconstruction (with special focus on Indo-Iranian)". Proceedings of the 27th Annual UCLA Indo-European Conference. Bremen: Hempen.
- Ramat p. 41.
- Clackson p. 57.
- Clackson p. 58
- Palmer pp. 216–218
- Palmer pp. 219–220
- Ringe pp. 73–74
- Ringe pp. 74–75
- http://inslav.ru/images/stories/books/BSI1988-1996(1997).pdf (in Russian)
- Whitehead, Benedicte Nielsen (2012). The Sound of Indo-european: Phonetics, Phonemics and Morphophonemics. ISBN 9788763538381.
- De Mauro, Tullio (1972). "Notes bibliographiques et critiques sur F. de Saussure". Cours de linguistique générale. By de Saussure, Ferdinand. Paris: Payot. pp. 327–328. ISBN 2-22-850070-4.
- Szemerényi 1996, p. 123.
- Szemerényi 1996, p. 134.
- Cuny, Albert (1912). "Notes phonétique historique. Indo-européen et sémitique". Révue de phonétique. 2.
- Kuryłowicz, Jerzy (1927). "ə indo-européen et ḫ hittite". In Taszycki, Witold; Doroszewski, Witold (eds.). Symbolae grammaticae in honorem Ioannis Rozwadowski. Kraków: Gebethner & Wolff.
- Kuryłowicz, Jerzy (1935). "Sur les éléments consonantiques disparus en indoeuropéen". Études indoeuropéens. Kraków: Gebethner & Wolff.
- Meier-Brügger, Michael (2003). Indo-European Linguistics. Berlin/New York: De Gruyter. p. 107. ISBN 3-11-017433-2.
- Lehrman, Alexander (2002). "Indo-Hittite laryngeals in Anatolian and Indo-European". In Shevoroshkin, Vitaly; Sidwell, Paul (eds.). Anatolian languages. Canberra: Association for the History of Language. ISBN 0-95-772514-0.
- Voyles, Joseph; Barrack, Charles (2015). On Laryngealism. A Coursebook in the History of a Science. München: Lincom. ISBN 978-3-86-288651-7.
- Beekes, Robert S. P. (1969). The Development of Proto-Indo-European Laryngeals in Greek (Thesis). The Hague: Mouton.
- Beekes, Robert S. P. (1995). Comparative Indo-European Linguistics: An Introduction. Amsterdam: John Benjamins. ISBN 1-55619-504-4.
- Clackson, James (2007). Indo-European Linguistics: An Introduction. Cambridge: Cambridge University Press. ISBN 978-0-521-65367-1.
- Feuillet, Jack (2016). "Quelques réflexions sur la reconstruction du système phonologique indo-européen". Historische Sprachforschung. 129: 39–65. doi:10.13109/hisp.2016.129.1.39.
- Koivulehto, Jorma (1991). Uralische Evidenz für die Laryngaltheorie, Veröffentlichungen der Komission für Linguistik und Kommunikationsforschung nr. 24. Wien: Österreichische Akademie der Wissenschaften. ISBN 3-7001-1794-9.
- Koivulehto, Jorma (2001). "The earliest contacts between Indo-European and Uralic speakers in the light of lexical loans". In C. Carpelan; A. Parpola; P. Koskikallio (eds.). The earliest contacts between Uralic and Indo-European: Linguistic and Archeological Considerations. Helsinki: Mémoires de la societé Finno-Ougrienne 242. pp. 235–263. ISBN 952-5150-59-3.
- Lehmann, Winfred P. (1993). Theoretical Bases of Indo-European Linguistics, see pp. 107-110. London: Routledge.
- Lindeman, Frederik Otto (1970). Einführung in die Laryngaltheorie. Berlin: Walter de Gruyter & Co.
- Lindeman, Frederik Otto (1997). Introduction to the Laryngeal theory. Innsbruck: Institut für Sprachwissenschaft der Universität Innsbruck.
- Möller, Hermann (1970). Vergleichendes indogermanisch-semitisches Wörterbuch. Göttingen: Vandenhoek & Ruprecht.
- Palmer, F.R. (1995). The Greek Language. London: Bristol Classical Press. ISBN 1-85399-466-9.
- Ramat, Anna Gicalone & Paolo (1998). The Indo-European Languages. Abingdon & New York: Routledge. ISBN 978-0-415-41263-6.
- Rasmussen, Jens Elmegård (1999). "Determining Proto-Phonetics by Circumstantial Evidence: The Case of the Indo-European laryngeals". Selected Papers on Indo-European Linguistics. Copenhagen: Museum Tusculanum Press. pp. 67–81. ISBN 87-7289-529-2.
- Ringe, Don (2006). From Proto-Indo-European to Proto-Germanic (A Linguistic History of English Volume 1). New York: Oxford University Press. ISBN 978-0-19-955229-0.
- Rix, Helmut (1976). Historische Grammatik der Griechischen: Laut- und Formenlehre. Darmstadt: Wissenschaftliche Buchgesellschaft.
- Saussure, Ferdinand de (1879). Memoire sur le systeme primitif des voyelles dans les langues indo-europeennes. Leipzig: Vieweg.
- Szemerényi, Oswald (1996). Introduction to Indo-European Linguistics. Oxford: Clarendon Press.
- Sihler, Andrew (1996). New Comparative Grammar of Greek and Latin. Oxford: Oxford University Press.
- Winter, Werner, ed. (1965). Evidence for Laryngeals (2nd. ed.). The Hague: Mouton.
- "Proto-Indo-European phonology (Nonstandard and Theoretical)". Retrieved 11 November 2005.
- Kortlandt, Frederik (2001): Initial laryngeals in Anatolian (pdf)
- Lexicon of Early Indo-European Loanwords Preserved in Finnish | https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Laryngeal_theory | 21 |
15 | Greenland ice sheet
|Greenland ice sheet|
|Coordinates||76°42′N 41°12′W / 76.7°N 41.2°W|
|Area||1,710,000 km2 (660,000 sq mi)|
|Length||2,400 km (1,500 mi)|
|Width||1,100 km (680 mi)|
|Thickness||2,000–3,000 m (6,600–9,800 ft)|
The Greenland ice sheet is the second largest ice body in the world, after the Antarctic Ice Sheet. The ice sheet is almost 2,400 kilometres (1,500 mi) long in a north-south direction, and its greatest width is 1,100 kilometres (680 mi) at a latitude of 77°N, near its northern margin. The mean altitude of the ice is 2,135 metres (7,005 ft). The thickness is generally more than 2 km (1.2 mi) and over 3 km (1.9 mi) at its thickest point. It is not the only ice mass of Greenland – isolated glaciers and small ice caps cover between 76,000 and 100,000 square kilometres (29,000 and 39,000 sq mi) around the periphery. If the entire 2,850,000 cubic kilometres (684,000 cu mi) of ice were to melt, it would lead to a global sea level rise of 7.2 m (24 ft). The Greenland Ice Sheet is sometimes referred to by the term inland ice, or its Danish equivalent, indlandsis. It is also sometimes referred to as an ice cap.
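The quoted 7.2 m figure can be approximately reproduced by dividing the ice sheet's water equivalent by the area of the world ocean. The ocean area and densities in the sketch below are standard reference values assumed for the calculation, not figures from this article, and isostatic and ocean-area changes are ignored.

```python
# Back-of-the-envelope check of the ~7.2 m sea-level figure quoted above.
# Ocean area and densities are assumed reference values.
ICE_VOLUME_KM3 = 2_850_000      # Greenland ice sheet volume (from the text)
ICE_DENSITY = 917               # kg/m3
WATER_DENSITY = 1000            # kg/m3
OCEAN_AREA_KM2 = 361_000_000    # approximate area of the world ocean

water_equivalent_km3 = ICE_VOLUME_KM3 * ICE_DENSITY / WATER_DENSITY
rise_m = water_equivalent_km3 / OCEAN_AREA_KM2 * 1000   # km -> m
print(f"{rise_m:.1f} m")        # ~7.2 m
```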
The ice in the current ice sheet is as old as 110,000 years. The presence of ice-rafted sediments in deep-sea cores recovered off northeast Greenland, in the Fram Strait, and south of Greenland indicates the more or less continuous presence of either an ice sheet or ice sheets covering significant parts of Greenland for the last 18 million years. From about 11 million years ago to 10 million years ago, the Greenland Ice Sheet was greatly reduced in size. The Greenland Ice Sheet formed in the middle Miocene by coalescence of ice caps and glaciers. There was an intensification of glaciation during the Late Pliocene.
The weight of the ice has depressed the central area of Greenland; the bedrock surface is near sea level over most of the interior of Greenland, but mountains occur around the periphery, confining the sheet along its margins. If the ice disappeared, Greenland would most probably appear as an archipelago, at least until isostasy lifted the land surface above sea level once again. The ice surface reaches its greatest altitude on two north-south elongated domes, or ridges. The southern dome reaches almost 3,000 metres (10,000 ft) at latitudes 63°–65°N; the northern dome reaches about 3,290 metres (10,800 ft) at about latitude 72°N. The crests of both domes are displaced east of the centre line of Greenland. The unconfined ice sheet does not reach the sea along a broad front anywhere in Greenland, so that no large ice shelves occur. The ice margin just reaches the sea, however, in a region of irregular topography in the area of Melville Bay southeast of Thule. Large outlet glaciers, which are restricted tongues of the ice sheet, move through bordering valleys around the periphery of Greenland to calve off into the ocean, producing the numerous icebergs that sometimes occur in North Atlantic shipping lanes. The best known of these outlet glaciers is Jakobshavn Glacier (Greenlandic: Sermeq Kujalleq), which, at its terminus, flows at speeds of 20 to 22 metres or 66 to 72 feet per day.
On the ice sheet, temperatures are generally substantially lower than elsewhere in Greenland. The lowest mean annual temperatures, about −31 °C (−24 °F), occur on the north-central part of the north dome, and temperatures at the crest of the south dome are about −20 °C (−4 °F). During winter, the ice sheet takes on a clear blue/green color. During summer, the top layer of ice melts, leaving pockets of air in the ice that make it look white.
The ice sheet as a record of past climates
The ice sheet, consisting of layers of compressed snow accumulated over more than 100,000 years, contains in its ice the most valuable record of past climates available today. In the past decades, scientists have drilled ice cores up to 4 kilometres (2.5 mi) deep. Using those ice cores, scientists have obtained information on (proxies for) temperature, ocean volume, precipitation, chemistry and gas composition of the lower atmosphere, volcanic eruptions, solar variability, sea-surface productivity, desert extent and forest fires. This variety of climatic proxies is greater than in any other natural recorder of climate, such as tree rings or sediment layers.
The melting ice sheet
Many scientists who study the ice melt in Greenland consider that a temperature rise of two or three degrees Celsius would result in a complete melting of Greenland's ice. Positioned in the Arctic, the Greenland ice sheet is especially vulnerable to climate change. The Arctic climate is now believed to be rapidly warming, and much larger Arctic shrinkage is projected. The Greenland Ice Sheet has experienced record melting in recent years, the largest since detailed records have been kept, and if this is sustained it is likely to contribute substantially to sea level rise as well as to possible changes in ocean circulation in the future. The area of the sheet that experiences melting has been argued to have increased by about 16% between 1979 (when measurements started) and 2002 (most recent data). The area of melting in 2002 broke all previous records. The number of glacial earthquakes at the Helheim Glacier and the northwest Greenland glaciers increased substantially between 1993 and 2005. In 2006, estimated monthly changes in the mass of Greenland's ice sheet suggested that it was melting at a rate of about 239 cubic kilometers (57 cu mi) per year. A more recent study, based on reprocessed and improved data between 2003 and 2008, reports an average trend of 195 cubic kilometers (47 cu mi) per year. These measurements came from the US space agency's GRACE (Gravity Recovery and Climate Experiment) satellite, launched in 2002, as reported by the BBC. Using data from two ground-observing satellites, ICESAT and ASTER, a study published in Geophysical Research Letters (September 2008) shows that nearly 75 percent of the loss of Greenland's ice can be traced back to small coastal glaciers.
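For scale, the mass-loss rates quoted above can be converted into an annual sea-level contribution using the same assumed reference values for ocean area and ice density as in the sketch above.

```python
# Convert quoted ice-loss rates (km3 of ice per year) into an approximate
# sea-level contribution; ocean area and ice density are assumed values.
OCEAN_AREA_KM2 = 361_000_000
ICE_TO_WATER = 917 / 1000       # ice density relative to fresh water

def sea_level_mm_per_year(ice_loss_km3: float) -> float:
    water_km3 = ice_loss_km3 * ICE_TO_WATER
    return water_km3 / OCEAN_AREA_KM2 * 1_000_000   # km -> mm

for rate in (239, 195):         # km3/yr estimates quoted above
    print(f"{rate} km3/yr -> {sea_level_mm_per_year(rate):.2f} mm/yr")
```

On these assumptions, the two estimates correspond to roughly 0.5 to 0.6 mm of sea-level rise per year.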
If the entire 2,850,000 km3 (684,000 cu mi) of ice were to melt, global sea levels would rise 7.2 m (24 ft). Recently, fears have grown that continued climate change will make the Greenland Ice Sheet cross a threshold where long-term melting of the ice sheet is inevitable. Climate models project that local warming in Greenland will be 3 °C (5 °F) to 9 °C (16 °F) during this century. Ice sheet models project that such a warming would initiate the long-term melting of the ice sheet, leading to a complete melting of the ice sheet (over centuries), resulting in a global sea level rise of about 7 metres (23 ft). Such a rise would inundate almost every major coastal city in the world. How fast the melt would eventually occur is a matter of discussion. According to the IPCC 2001 report, such warming would, if kept from rising further after the 21st century, result in a sea level rise of 1 to 5 metres over the next millennium due to Greenland ice sheet melting. Some scientists have cautioned that these rates of melting are overly optimistic as they assume a linear, rather than erratic, progression. James E. Hansen has argued that multiple positive feedbacks could lead to nonlinear ice sheet disintegration much faster than claimed by the IPCC. According to a 2007 paper, "we find no evidence of millennial lags between forcing and ice sheet response in paleoclimate data. An ice sheet response time of centuries seems probable, and we cannot rule out large changes on decadal time-scales once wide-scale surface melt is underway."
In a 2013 study published in Nature, 133 researchers analyzed a Greenland ice core from the Eemian interglacial. They concluded that the Greenland Ice Sheet had been 8 °C warmer than today, resulting in a thickness decrease of the northwest Greenland ice sheet of 400 ± 250 metres, with surface elevations 122,000 years ago that were 130 ± 300 metres lower than at present.
The melt zone, where summer warmth turns snow and ice into slush and melt ponds of meltwater, has been expanding at an accelerating rate in recent years. When the meltwater seeps down through cracks in the sheet, it accelerates the melting and, in some areas, allows the ice to slide more easily over the bedrock below, speeding its movement to the sea. Besides contributing to global sea level rise, the process adds freshwater to the ocean, which may disturb ocean circulation and thus regional climate. In July 2012, this melt zone extended to 97 percent of the ice cover. Ice cores show that events such as this occur approximately every 150 years on average. The last time a melt this large happened was in 1889. This particular melt may be part of cyclical behavior; however, Lora Koenig, a Goddard glaciologist, suggested that "...if we continue to observe melting events like this in upcoming years, it will be worrisome."
Meltwater around Greenland may transport nutrients in both dissolved and particulate phases to the ocean. Measurements of the amount of iron in meltwater from the Greenland ice sheet show that extensive melting of the ice sheet might add an amount of this micronutrient to the Atlantic Ocean equivalent to that added by airborne dust. However, much of the particulate material and iron derived from glaciers around Greenland may be trapped within the extensive fjords that surround the island, and, unlike the HNLC Southern Ocean where iron is a widespread limiting micronutrient, biological production in the North Atlantic is subject only to very spatially and temporally limited periods of iron limitation. Nonetheless, high productivity is observed in the immediate vicinity of major marine-terminating glaciers around Greenland, and this is attributed to meltwater inputs driving the upwelling of seawater rich in macronutrients.
Researchers have also considered the role of clouds in enhancing Greenland ice sheet melt. A study published in Nature in 2013 found that optically thin liquid-bearing clouds extended the July 2012 extreme melt zone, while a Nature Communications study in 2016 suggests that clouds in general enhance the Greenland ice sheet's meltwater runoff by more than 30% due to decreased meltwater refreezing in the firn layer at night.
A 2015 study by climate scientists Michael Mann of Penn State and Stefan Rahmstorf of the Potsdam Institute for Climate Impact Research suggests that the observed cold blob in the North Atlantic during record-warm years is a sign that the Atlantic Ocean's Meridional Overturning Circulation (AMOC) may be weakening. They published their findings and concluded that the AMOC has shown an exceptional slowdown over the last century, and that Greenland melt is a possible contributor.
A study published in 2016 by researchers from the University of South Florida, together with colleagues in Canada and the Netherlands, used GRACE satellite data to estimate freshwater flux from Greenland. They concluded that freshwater runoff is accelerating and could eventually cause a disruption of the AMOC in the future, which would affect Europe and North America.
Recent ice loss events
- Between 2000 and 2001: Northern Greenland's Petermann glacier lost 85 square kilometres (33 sq mi) of floating ice.
- Between 2001 and 2005: Sermeq Kujalleq broke up, losing 93 square kilometres (36 sq mi) and raising awareness worldwide of glacial response to global climate change.
- July 2008: Researchers monitoring daily satellite images discovered that a 28-square-kilometre (11 sq mi) piece of Petermann broke away.
- August 2010: A sheet of ice measuring 260 square kilometres (100 sq mi) broke off from the Petermann Glacier. Researchers from the Canadian Ice Service located the calving from NASA satellite images taken on August 5. The images showed that Petermann lost about one-quarter of its 70 km-long (43 mile) floating ice shelf.
- July 2012: Another large piece of ice, about 120 square kilometres (46 sq mi) and twice the area of Manhattan, broke away from the Petermann glacier in northern Greenland.
- In 2015, Jakobshavn Glacier calved an iceberg about 4,600 feet (1,400 m) thick and about 5 square miles (13 km2) in area.
(Image captions: satellite measurements of Greenland's ice cover from 1979 to 2009, revealing a trend of increased melting; NASA MODIS and QuikSCAT satellite data from 2007 compared to confirm the precision of different melt observations; a narrated tour of Greenland's moving ice sheet by NASA scientist Eric Rignot; a narrated animation of the accumulated change in the elevation of the Greenland ice sheet between 2003 and 2012; the rate of decrease in ice sheet height, in cm per year, until 2007; modelling results of sea-level rise under different warming scenarios; a satellite image of dark melt ponds; albedo change in Greenland.)
Ice sheet acceleration
Two mechanisms have been proposed to explain the change in velocity of the Greenland Ice Sheet's outlet glaciers. The first is the enhanced meltwater effect, which relies on additional surface melting, funneled through moulins to the glacier base, reducing friction through higher basal water pressure. (Not all meltwater is retained in the ice sheet; some moulins drain into the ocean, with varying rapidity.) This mechanism was observed to cause a brief seasonal acceleration of up to 20% on Sermeq Kujalleq in 1998 and 1999 at Swiss Camp (the acceleration lasted between two and three months and was less than 10% in 1996 and 1997, for example). The researchers concluded that the "coupling between surface melting and ice-sheet flow provides a mechanism for rapid, large-scale, dynamic responses of ice sheets to climate warming". Examination of recent rapid supraglacial lake drainages documented short-term velocity changes due to such events, but these had little significance for the annual flow of the large outlet glaciers.
The second mechanism is a force imbalance at the calving front caused by thinning, producing a substantial non-linear response. In this case, an imbalance of forces at the calving front propagates up-glacier. Thinning makes the glacier more buoyant at the calving front, reducing the frictional back-force and allowing an increase in velocity; this is akin to partially releasing the emergency brake. The reduced resistive force at the calving front is then propagated up-glacier via longitudinal extension. For the ice-streaming sections of large outlet glaciers (in Antarctica as well), there is always water at the base of the glacier that helps lubricate the flow.
If the enhanced meltwater effect were the key, then, since meltwater is a seasonal input, velocity would have a seasonal signal and all glaciers would experience the effect. If the force-imbalance effect were the key, the velocity change would propagate up-glacier, there would be no seasonal cycle, and the acceleration would be focused on calving glaciers. Helheim Glacier, East Greenland, had a stable terminus from the 1970s to 2000. In 2001–2005 the glacier retreated 7 km (4.3 mi) and accelerated from 20 m (70 ft) to 33 m (110 ft) per day, while thinning up to 130 meters (430 ft) in the terminus region. Kangerdlugssuaq Glacier, East Greenland, had a stable terminus from 1960 to 2002. Its velocity was 13 m (43 ft) per day in the 1990s; in 2004–2005 it accelerated to 36 m (120 ft) per day and thinned by up to 100 m (300 ft) in its lower reach. On Sermeq Kujalleq the acceleration began at the calving front and spread up-glacier 20 km (12 mi) in 1997 and up to 55 km (34 mi) inland by 2003. On Helheim the thinning and velocity increase likewise propagated up-glacier from the calving front. In each case the major outlet glaciers accelerated by at least 50%, much more than the increase attributable to summer meltwater. On each glacier the acceleration was not restricted to the summer, persisting through the winter when surface meltwater is absent.
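The accelerations quoted above can be checked with simple arithmetic. The sketch below is illustrative (the helper function and dictionary layout are not drawn from the underlying studies); it computes the percentage speed-up from the velocities quoted in the text, confirming that both glaciers accelerated by well over 50%, far above the roughly 10–20% seasonal meltwater signal observed at Swiss Camp.

```python
# Percentage accelerations of two East Greenland outlet glaciers,
# using the velocities quoted in the text (metres per day).
def pct_increase(v_old: float, v_new: float) -> float:
    """Percentage increase from v_old to v_new."""
    return 100.0 * (v_new - v_old) / v_old

glaciers = {
    "Helheim (2001-2005)": (20.0, 33.0),         # m/day before and after
    "Kangerdlugssuaq (2004-2005)": (13.0, 36.0),
}

for name, (v0, v1) in glaciers.items():
    print(f"{name}: {v0:.0f} -> {v1:.0f} m/day, +{pct_increase(v0, v1):.0f}%")
# Helheim: +65%; Kangerdlugssuaq: +177% -- both well above the ~10-20%
# seasonal signal attributed to surface meltwater.
```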
An examination of 32 outlet glaciers in southeast Greenland indicates that the acceleration is significant only for marine-terminating outlet glaciers—glaciers that calve into the ocean. A 2008 study noted that the thinning of the ice sheet is most pronounced for marine-terminating outlet glaciers. Taken together, these studies concluded that the only plausible sequence of events is that increased thinning of the terminus regions of marine-terminating outlet glaciers ungrounded the glacier tongues and subsequently allowed acceleration, retreat and further thinning.
Warmer temperatures in the region have brought increased precipitation to Greenland, and part of the lost mass has been offset by increased snowfall. However, there are only a small number of weather stations on the island, and though satellite data can cover the entire island, they have only been available since the early 1990s, making the study of trends difficult. It has been observed that there is more precipitation where it is warmer, up to 1.5 meters per year on the southeast flank, and less precipitation or none on the 25–80 percent (depending on the time of year) of the island that is cooler.
Rate of change
Several factors determine the net rate of growth or decline. These are
- Accumulation and melting rates of snow in the central parts
- Melting of surface snow and ice which then flows into moulins, falls and flows to bedrock, lubricates the base of glaciers, and affects the speed of glacial motion. This flow is implicated in accelerating the speed of glaciers and thus the rate of glacial calving.
- Melting of ice along the sheet's margins (runoff) and basal hydrology,
- Iceberg calving into the sea from outlet glaciers also along the sheet's edges
The IPCC Third Assessment Report (2001) estimated accumulation at 520 ± 26 gigatonnes (Gt) of ice per year, runoff and bottom melting at 297 ± 32 Gt/yr and 32 ± 3 Gt/yr respectively, and iceberg production at 235 ± 33 Gt/yr. On balance, the IPCC estimate is −44 ± 53 Gt/yr, which means that the ice sheet may currently be losing mass. Data from 1996 to 2005 show that the ice sheet is thinning even faster than the IPCC supposed. According to the study, in 1996 Greenland was losing about 96 km3 (23.0 cu mi) of ice per year from its ice sheet. By 2005 this had increased to about 220 km3 (52.8 cu mi) per year owing to rapid thinning near the coasts, and in 2006 it was estimated at 239 km3 (57.3 cu mi) per year. Melting in 2007 was estimated to be higher than ever, at 592 km3 (142.0 cu mi). Snowfall was also unusually low, which led to an unprecedented negative surface mass balance of −65 km3 (−15.6 cu mi). If iceberg calving occurred at its average rate, Greenland lost about 294 Gt of its mass during 2007 (one km3 of ice weighs about 0.9 Gt).
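The bookkeeping in the preceding paragraph is simple arithmetic: the net balance is accumulation minus runoff, bottom melting and iceberg calving, and ice volume converts to mass at roughly 0.9 Gt per cubic kilometre. The following minimal sketch (variable names are illustrative) reproduces the −44 Gt/yr IPCC balance and the roughly 294 Gt loss quoted for 2007.

```python
# Net mass balance from the IPCC Third Assessment component estimates (Gt/yr).
accumulation = 520
runoff = 297
bottom_melting = 32
iceberg_calving = 235

net_balance = accumulation - (runoff + bottom_melting + iceberg_calving)
print(f"Net balance: {net_balance} Gt/yr")           # -44 Gt/yr, i.e. net loss

# Volume-to-mass conversion: one km^3 of ice weighs about 0.9 Gt.
GT_PER_KM3 = 0.9

# 2007: surface mass balance of -65 km^3 plus average calving of ~235 Gt/yr
# gives the ~294 Gt total loss quoted above.
smb_2007_gt = -65 * GT_PER_KM3
total_loss_2007 = iceberg_calving - smb_2007_gt
print(f"Estimated 2007 loss: {total_loss_2007:.0f} Gt")  # ~294 Gt
```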
The IPCC Fourth Assessment Report (2007) noted that it is hard to measure the mass balance precisely, but most results indicate accelerating mass loss from Greenland during the 1990s up to 2005. Assessment of the data and techniques suggests a mass balance for the Greenland Ice Sheet ranging between growth of 25 Gt/yr and loss of 60 Gt/yr for 1961 to 2003, loss of 50 to 100 Gt/yr for 1993 to 2003, and loss at even higher rates between 2003 and 2005.
Analysis of gravity data from GRACE satellites indicates that the Greenland ice sheet lost approximately 2900 Gt (0.1% of its total mass) between March 2002 and September 2012. The mean mass loss rate for 2008–2012 was 367 Gt/year.
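As a rough consistency check on these GRACE figures, the decade-long total can be converted to a mean annual rate, an implied total ice-sheet mass, and a sea-level contribution. The ~360 Gt-per-millimetre sea-level conversion is a standard approximation, not a value taken from the cited study, and the variable names are illustrative.

```python
# Consistency check on the GRACE-derived loss figures quoted above.
total_loss_gt = 2900            # March 2002 - September 2012
period_years = 10.5
fraction_of_ice_sheet = 0.001   # "0.1% of its total mass"
GT_PER_MM_SEA_LEVEL = 360.0     # ~360 Gt of ice raises global sea level by ~1 mm

mean_rate = total_loss_gt / period_years
implied_total_mass = total_loss_gt / fraction_of_ice_sheet
sea_level_mm = total_loss_gt / GT_PER_MM_SEA_LEVEL

print(f"Mean loss rate 2002-2012: {mean_rate:.0f} Gt/yr")        # ~276 Gt/yr
print(f"Implied ice-sheet mass:   {implied_total_mass:.1e} Gt")  # ~2.9e+06 Gt
print(f"Sea-level equivalent:     {sea_level_mm:.1f} mm")        # ~8 mm over the decade
```

The decadal mean of roughly 276 Gt/yr is lower than the 367 Gt/yr reported for 2008–2012, consistent with the loss rate having accelerated over the period.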
A paper on Greenland's temperature record shows that the warmest year on record was 1941, while the warmest decades were the 1930s and 1940s. The data used were from stations on the south and west coasts, most of which did not operate continuously throughout the study period.
While Arctic temperatures have generally increased, there is some discussion concerning the temperatures over Greenland. First of all, Arctic temperatures are highly variable, making it difficult to discern clear trends at a local level. Also, until recently, an area in the North Atlantic including southern Greenland was one of the only areas in the World showing cooling rather than warming in recent decades, but this cooling has now been replaced by strong warming in the period 1979–2005.
- GLIMPSE Project
- List of glaciers in Greenland
- Moulin (geomorphology)
- Polar ice packs
- Retreat of glaciers since 1850
- Encyclopaedia Britannica. 1999 Multimedia edition.
- Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881 pp.
- Meese, DA, AJ Gow, RB Alley, GA Zielinski, PM Grootes, M Ram, KC Taylor, PA Mayewski, JF Bolzan (1997) The Greenland Ice Sheet Project 2 depth-age scale: Methods and results. Journal of Geophysical Research. C. Oceans. 102(C12):26,411-26,423.
- Thiede, JC Jessen, P Knutz, A Kuijpers, N Mikkelsen, N Norgaard-Pedersen, and R Spielhagen (2011) Millions of Years of Greenland Ice Sheet History Recorded in Ocean Sediments. Polarforschung. 80(3):141-159.
- "The Secrets in Greenland's Ice Sheet". The New York Times. 2015.
- Impacts of a Warming Arctic: Arctic Climate Impact Assessment, Cambridge University Press, 2004.
- "Glacial Earthquakes Point to Rising Temperatures in Greenland – Lamont-Doherty Earth Observatory News". columbia.edu.
- ScienceDaily, 10 October 2008: "An Accurate Picture Of Ice Loss In Greenland"
- "BBC NEWS – Science/Nature – Greenland melt 'speeding up'". bbc.co.uk.
- Small Glaciers Account for Most of Greenland's Recent Ice Loss Newswise, Retrieved on September 15, 2008.
- Climate change and trace gases. James Hansen, Makiko Sato, et al. Phil.Trans.R.Soc.A (2007)365,1925–1954, doi:10.1098/rsta.2007.2052. Published online 18 May 2007,
- "Eemian interglacial reconstructed from a Greenland folded ice core". Nature. 493: 489–494. January 24, 2013. doi:10.1038/nature11789.
- "Greenland enters melt mode". Science News.
- Wall, Tim. "Greenland Hits 97 Percent Meltdown in July". Discovery News.
- "NASA Made Up 150 Year Melt Cycle". Daily Kos.
- "The Accumulation Record from the GISP2 Core as an Indicator of Climate Change Throughout the Holocene" (PDF). sciencemag.org.
- Statham, Peter J.; Skidmore, Mark; Tranter, Martyn (2008-09-01). "Inputs of glacially derived dissolved and colloidal iron to the coastal ocean and implications for primary productivity". Global Biogeochemical Cycles. 22 (3): GB3013. doi:10.1029/2007GB003106. ISSN 1944-9224.
- "Glaciers Contribute Significant Iron to North Atlantic Ocean" (news release). Woods Hole Oceanographic Institution. March 10, 2013. Retrieved March 18, 2013.
- Hopwood, Mark James; Connelly, Douglas Patrick; Arendt, Kristine Engel; Juul-Pedersen, Thomas; Stinchcombe, Mark; Meire, Lorenz; Esposito, Mario; Krishna, Ram (2016-01-01). "Seasonal changes in Fe along a glaciated Greenlandic fjord.". Marine Biogeochemistry. 4: 15. doi:10.3389/feart.2016.00015.
- Martin, John H.; Fitzwater, Steve E.; Gordon, R. Michael (1990-03-01). "Iron deficiency limits phytoplankton growth in Antarctic waters". Global Biogeochemical Cycles. 4 (1): 5–12. doi:10.1029/GB004i001p00005. ISSN 1944-9224.
- Nielsdóttir, Maria C.; Moore, Christopher Mark; Sanders, Richard; Hinz, Daria J.; Achterberg, Eric P. (2009-09-01). "Iron limitation of the postbloom phytoplankton communities in the Iceland Basin". Global Biogeochemical Cycles. 23 (3): GB3001. doi:10.1029/2008GB003410. ISSN 1944-9224.
- Arendt, Kristine Engel; Nielsen, Torkel Gissel; Rysgaard, Søren; Tønnesson, Kajsa (2010-02-22). "Differences in plankton community structure along the Godthåbsfjord, from the Greenland Ice Sheet to offshore waters". Marine Ecology Progress Series. 401: 49–62. doi:10.3354/meps08368.
- Brown, Dwayne; Cabbage, Michael; McCarthy, Leslie; Norton, Karen (20 January 2016). "NASA, NOAA Analyses Reveal Record-Shattering Global Warm Temperatures in 2015". NASA. Retrieved 21 January 2016.
- Bennartz, R.; Shupe, M. D.; Turner, D. D.; Walden, V. P.; Steffen, K.; Cox, C. J.; Kulie, M. S.; Miller, N. B.; Pettersen, C. "July 2012 Greenland melt extent enhanced by low-level liquid clouds". Nature. 496 (7443): 83–86. doi:10.1038/nature12002.
- Van Tricht, K.; Lhermitte, S.; Lenaerts, J. T. M.; Gorodetskaya, I. V.; L’Ecuyer, T. S.; Noël, B.; van den Broeke, M. R.; Turner, D. D.; van Lipzig, N. P. M. (2016-01-12). "Clouds enhance Greenland ice sheet meltwater runoff". Nature Communications. 7: 10266. doi:10.1038/ncomms10266.
- Stefan Rahmstorf, Jason E. Box, Georg Feulner, Michael E. Mann, Alexander Robinson, Scott Rutherford & Erik J. Schaffernicht. "Exceptional twentieth-century slowdown in Atlantic Ocean overturning circulation". Nature Climate Change. doi:10.1038/nclimate2554.
- "Melting Greenland ice sheet may affect global ocean circulation, future climate". Phys.org. 2016.
- "Images Show Breakup of Two of Greenland's Largest Glaciers, Predict Disintegration in Near Future". NASA Earth Observatory. August 20, 2008. Retrieved 2008-08-31.
- "Huge ice island breaks from Greenland glacier". BBC News.
- Iceberg breaks off from Greenland's Petermann Glacier 19 July 2012
- "Surface Melt-Induced Acceleration of Greenland Ice-Sheet Flow by Zwally et al., "
- "Fracture Propagation to the Base of the Greenland Ice Sheet During Supraglacial Lake Drainage by Das. et al.,"
- "Thomas R.H (2004), Force-perturbation analysis of recent thinning and acceleration of Jakobshavn Isbrae, Greenland, Journal of Glaciology 50 (168): 57–66. "
- "Thomas, R. H. Abdalati W, Frederick E, Krabill WB, Manizade S, Steffen K, (2003) Investigation of surface melting and dynamic thinning on Jakobshavn Isbrae, Greenland. Journal of Glaciology 49, 231–239."
- "Letters to Nature Nature 432, 608–610 (2 December 2004) | doi:10.1038/nature03130; Received 7 July 2004; Accepted 8 October 2004 Large fluctuations in speed on Greenland's Jakobshavn Isbræ glacier by Joughin, Abdalati and Fahnestock"
- "Rates of southeast Greenland ice volume loss...by Howat et al.". AGU.
- "Greenland Ice Sheet: is land-terminating ice thinning at anomalously high rates by Sole et al.,"
- "Rapid and synchronous ice-dynamic changes in East Greenland by Luckman, Murray. de Lange and Hanna"
- "Greenland Ice Sheet: is land-terminating ice thinning at anomalously high rates by Sole et al.,"
- "Moulins calving fronts and Greenland outletglacier acceleration by Pelto"
- "Modelling Precipitation over ice sheets: an assessment using Greenland", Gerard H. Roe, University of Washington,
- "Greenland Ice Loss Doubles in Past Decade, Raising Sea Level Faster". Jet Propulsion Laboratory News release, Thursday, 16 February 2006.
- Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Chapter 4 Observations: Changes in Snow, Ice and Frozen Ground.IPCC, 2007. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 996 pp.
- "Arctic Report Card: Update for 2012; Greenland Ice Sheet".
- "A Greenland temperature record spanning two centuries" JOURNAL OF GEOPHYSICAL RESEARCH, VOL. 111, D11105, doi:10.1029/2005JD006810, 2006. Vinther, Anderson, Jones, Briffa, Cappelen.
- see Arctic Climate Impact Assessment (2004) and IPCC Second Assessment Report, among others.
- IPCC, 2007. Trenberth, K.E., P.D. Jones, P. Ambenje, R. Bojariu, D. Easterling, A. Klein Tank, D. Parker, F. Rahimzadeh, J.A. Renwick, M. Rusticucci, B. Soden and P. Zhai, 2007: Observations: Surface and Atmospheric Climate Change. In: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
- Real Climate the Greenland Ice
- Geological Survey of Denmark and Greenland (GEUS) GEUS has much scientific material on Greenland.
- Emporia State University – James S. Aber Lecture 2: Modern Glaciers and Ice Sheets.
- Arctic Climate Impact Assessment
- GRACE ice mass measurement: "Recent Land Ice Mass Flux from Spaceborne Gravimetry"
- Greenland ice cap melting faster than ever, Bristol University
- Greenland Ice Mass Loss: Jan. 2004 – June 2014 (NASA animation)
- AGU 2015: Eric Rignot - Ice Sheet Systems and Sea Level Change (Sea level rise)
The word Dominion was used from 1907 to 1948 to refer to one of several self-governing nations of the British Empire. "Dominion status" was formally accorded to Canada, Australia, New Zealand, Newfoundland, South Africa, and the Irish Free State at the 1926 Imperial Conference to designate "autonomous communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown and freely associated as members of the British Commonwealth of Nations". India, Pakistan, and Ceylon (now Sri Lanka) were also dominions for short periods of time. The Balfour Declaration of 1926 recognised the Dominions as "autonomous communities within the British Empire", and the 1931 Statute of Westminster confirmed their full legislative independence. With the dissolution of the British Empire after World War II and the formation of the Commonwealth of Nations, use of the term was formally abandoned at the 1949 Commonwealth Prime Ministers' Conference and replaced with "member of the Commonwealth" to recognize the full autonomy of members.
The term dominion means "that which is mastered or ruled". It was used by the British to describe their colonies or territorial possessions.
Use of dominion to refer to a particular territory within the British Empire dates back to the 16th century and was sometimes used to describe Wales from 1535 to around 1800: for instance, the Laws in Wales Act 1535 applies to "the Dominion, Principality and Country of Wales". Dominion, as an official title, was conferred on the Colony of Virginia about 1660 and on the Dominion of New England in 1686.
Under the British North America Act 1867, the partially self-governing colonies of British North America were united into the Dominion of Canada. The new federal and provincial governments split considerable local powers, but Britain retained overall legislative supremacy. At the Colonial Conference of 1907, the self-governing colonies of Canada and the Commonwealth of Australia were referred to collectively as Dominions for the first time. Two other self-governing colonies—New Zealand and Newfoundland—were granted the status of Dominion in the same year. These were followed by the Union of South Africa in 1910. The Order in Council annexing the island of Cyprus in 1914 declared that, from 5 November 1914, the island "shall be annexed to and form part of His Majesty's dominions".
Dominion status was formally accorded to Canada, Australia, New Zealand, Newfoundland, South Africa, and the Irish Free State at the 1926 Imperial Conference to designate "autonomous communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown and freely associated as members of the British Commonwealth of Nations". The British government of Lloyd George had emphasized the use of the capital "D" when referring to the Irish Free State in the Anglo-Irish Treaty, to assure it the same constitutional status as the other Dominions and to avoid confusion with the wider term "His Majesty's dominions", which referred to the British Empire as a whole. At the time of the founding of the League of Nations in 1920, the League Covenant made provision for the admission of any "fully self-governing state, Dominion, or Colony", the implication being that "Dominion status was something between that of a colony and a state".
With the adoption of the Statute of Westminster 1931, Britain and the Dominions (except Newfoundland) formed the British Commonwealth of Nations, a free association of sovereign states. The Dominions gained full legislative independence and direct access to the Monarch as head of state, which had previously been reserved for British governments. The Statute also recognized their autonomy in foreign affairs, including participation as autonomous nations in the League of Nations and full power to appoint their own ambassadors to other countries.
Following the Second World War, the decline of British colonialism led to Dominions generally being referred to as Commonwealth realms and the use of the word dominion gradually diminished. Nonetheless, though no longer in use, "Dominion of Canada" remains Canada's legal title and the phrase Her Majesty's Dominions is still used occasionally in legal documents in the United Kingdom.
"His/Her Majesty's dominions"Edit
The phrase His/Her Majesty's dominions is a legal and constitutional phrase that refers to all the realms and territories of the Sovereign, whether independent or not. Thus, for example, the British Ireland Act 1949 recognised that the Republic of Ireland had "ceased to be part of His Majesty's dominions". When dependent territories that had never been annexed (that is, were not colonies of the Crown, but were League of Nations mandates, protectorates or United Nations Trust Territories) were granted independence, the United Kingdom act granting independence always declared that such and such a territory "shall form part of Her Majesty's dominions", and so become part of the territory in which the Queen exercises sovereignty, not merely suzerainty. The later sense of "Dominion" was capitalised to distinguish it from the more general sense of "dominion".
The word dominions originally referred to the possessions of the Kingdom of England. Oliver Cromwell's full title in the 1650s was "Lord Protector of the Commonwealth of England, Scotland and Ireland, and the dominions thereto belonging". In 1660, King Charles II gave the Colony of Virginia the title of dominion in gratitude for Virginia's loyalty to the Crown during the English Civil War. The Commonwealth of Virginia, a State of the United States, still has "the Old Dominion" as one of its nicknames. Dominion also occurred in the name of the short-lived Dominion of New England (1686–1689). In all of these cases, the word dominion implied no more than being subject to the English Crown.
Responsible government: precursor to Dominion status
The foundation of "Dominion" status followed the achievement of internal self-rule in British Colonies, in the specific form of full responsible government (as distinct from "representative government"). Colonial responsible government began to emerge during the mid-19th century. The legislatures of Colonies with responsible government were able to make laws in all matters other than foreign affairs, defence and international trade, these being powers which remained with the Parliament of the United Kingdom. Bermuda, notably, was never defined as a Dominion, despite meeting this criteria, but as a self-governing colony that remains part of the British Realm.
Nova Scotia, soon followed by the Province of Canada (which included modern southern Ontario and southern Quebec), was the first colony to achieve responsible government, in 1848. Prince Edward Island followed in 1851, and New Brunswick and Newfoundland in 1855. All except Newfoundland and Prince Edward Island agreed to form a new federation named Canada in 1867. This was instituted by the British Parliament in the British North America Act 1867. (See also: Canadian Confederation.) Section 3 of the Act referred to the new entity as a "Dominion", the first such entity to be created. From 1870 the Dominion included two vast neighbouring British territories that did not have any form of self-government: Rupert's Land and the North-Western Territory, parts of which later became the Provinces of Manitoba, Saskatchewan, and Alberta, and the separate territories, the Northwest Territories, Yukon and Nunavut. In 1871, the Crown Colony of British Columbia became a Canadian province; Prince Edward Island joined in 1873 and Newfoundland in 1949.
The conditions under which the four separate Australian colonies—New South Wales, Tasmania, Western Australia, South Australia—and New Zealand could gain full responsible government were set out by the British government in the Australian Constitutions Act 1850. The Act also separated the Colony of Victoria (in 1851) from New South Wales. During 1856, responsible government was achieved by New South Wales, Victoria, South Australia, Tasmania, and New Zealand. The remainder of New South Wales was divided in three in 1859, a change that established most of the present borders of NSW: the Colony of Queensland, with its own responsible self-government, and the Northern Territory (which was not granted self-government prior to the federation of the Australian colonies). Western Australia did not receive self-government until 1891, mainly because of its continuing financial dependence on the UK Government. After protracted negotiations (that initially included New Zealand), six Australian colonies with responsible government (and their dependent territories) agreed to federate, along Canadian lines, becoming the Commonwealth of Australia in 1901.
In South Africa, the Cape Colony became the first British self-governing Colony, in 1872. (Until 1893, the Cape Colony also controlled the separate Colony of Natal.) Following the Second Boer War (1899–1902), the British Empire assumed direct control of the Boer Republics, but transferred limited self-government to Transvaal in 1906, and the Orange River Colony in 1907.
The Commonwealth of Australia was recognised as a Dominion in 1901, and the Dominion of New Zealand and the Dominion of Newfoundland were officially given Dominion status in 1907, followed by the Union of South Africa in 1910.
Canadian Confederation and evolution of the term Dominion
In connection with proposals for the future government of British North America, use of the term "Dominion" was suggested by Samuel Leonard Tilley at the London Conference of 1866 discussing the confederation of the Province of Canada (subsequently becoming the provinces of Ontario and Quebec), Nova Scotia and New Brunswick into "One Dominion under the Name of Canada", the first federation internal to the British Empire. Tilley's suggestion was taken from the 72nd Psalm, verse eight, "He shall have dominion also from sea to sea, and from the river unto the ends of the earth", which is echoed in the national motto, "A Mari Usque Ad Mare". The new government of Canada under the British North America Act of 1867 began to use the phrase "Dominion of Canada" to designate the new, larger nation. However, neither the Confederation nor the adoption of the title of "Dominion" granted extra autonomy or new powers to this new federal level of government. Senator Eugene Forsey wrote that the powers acquired since the 1840s that established the system of responsible government in Canada would simply be transferred to the new Dominion government:
By the time of Confederation in 1867, this system had been operating in most of what is now central and eastern Canada for almost 20 years. The Fathers of Confederation simply continued the system they knew, the system that was already working, and working well.
The constitutional scholar Andrew Heard has established that Confederation did not legally change Canada's colonial status to anything approaching its later status of a Dominion.
At its inception in 1867, Canada's colonial status was marked by political and legal subjugation to British Imperial supremacy in all aspects of government—legislative, judicial, and executive. The Imperial Parliament at Westminster could legislate on any matter to do with Canada and could override any local legislation, the final court of appeal for Canadian litigation lay with the Judicial Committee of the Privy Council in London, the Governor General had a substantive role as a representative of the British government, and ultimate executive power was vested in the British Monarch—who was advised only by British ministers in its exercise. Canada's independence came about as each of these sub-ordinations was eventually removed.
Heard went on to document the sizeable body of legislation passed by the British Parliament in the latter part of the 19th century that upheld and expanded its Imperial supremacy to constrain that of its colonies, including the new Dominion government in Canada.
When the Dominion of Canada was created in 1867, it was granted powers of self-government to deal with all internal matters, but Britain still retained overall legislative supremacy. This Imperial supremacy could be exercised through several statutory measures. In the first place, the British North America Act of 1867 provided in Section 55 that the Governor General may reserve any legislation passed by the two Houses of Parliament for "the signification of Her Majesty's pleasure", which is determined according to Section 57 by the British Monarch in Council. Secondly, Section 56 provides that the Governor General must forward to "one of Her Majesty's Principal Secretaries of State" in London a copy of any Federal legislation that has been assented to. Then, within two years after the receipt of this copy, the (British) Monarch in Council could disallow an Act. Thirdly, at least four pieces of Imperial legislation constrained the Canadian legislatures. The Colonial Laws Validity Act of 1865 provided that no colonial law could validly conflict with, amend, or repeal Imperial legislation that either explicitly, or by necessary implication, applied directly to that colony. The Merchant Shipping Act of 1894, as well as the Colonial Courts of Admiralty Act of 1890 required reservation of Dominion legislation on those topics for approval by the British Government. Also, the Colonial Stock Act of 1900 provided for the disallowance of any Dominion legislation the British government felt would harm British stockholders of Dominion trustee securities. Most importantly, however, the British Parliament could exercise the legal right of supremacy that it possessed over common law to pass any legislation on any matter affecting the colonies.
For decades, none of the Dominions was allowed to have its own embassies or consulates in foreign countries. All matters concerning international travel, commerce, etc., had to be transacted through British embassies and consulates. For example, all transactions concerning visas and lost or stolen passports of citizens of the Dominions were carried out at British diplomatic offices. It was not until the late 1930s and early 1940s that the Dominion governments were allowed to establish their own embassies, and the first two, established by the Dominion governments in Ottawa and Canberra, were both located in Washington, D.C., in the United States.
As Heard later explained, the British government seldom invoked its powers over Canadian legislation. British legislative powers over Canadian domestic policy were largely theoretical and their exercise was increasingly unacceptable in the 1870s and 1880s. The rise to the status of a Dominion and then full independence for Canada and other possessions of the British Empire did not occur by the granting of titles or similar recognition by the British Parliament but by initiatives taken by the new governments of certain former British dependencies to assert their independence and to establish constitutional precedents.
What is remarkable about this whole process is that it was achieved with a minimum of legislative amendments. Much of Canada's independence arose from the development of new political arrangements, many of which have been absorbed into judicial decisions interpreting the constitution—with or without explicit recognition. Canada's passage from being an integral part of the British Empire to being an independent member of the Commonwealth richly illustrates the way in which fundamental constitutional rules have evolved through the interaction of constitutional convention, international law, and municipal statute and case law.
What was significant about the creation of the Canadian and Australian federations was not that they were instantly granted wide new powers by the Imperial centre at the time of their creation; but that they, because of their greater size and prestige, were better able to exercise their existing powers and lobby for new ones than the various colonies they incorporated could have done separately. They provided a new model which politicians in New Zealand, Newfoundland, South Africa, Ireland, India, Malaysia could point to for their own relationship with Britain. Ultimately, "[Canada's] example of a peaceful accession to independence with a Westminster system of government came to be followed by 50 countries with a combined population of more than 2-billion people."
Colonial Conference of 1907
Issues of colonial self-government spilled into foreign affairs with the Boer War (1899–1902). The self-governing colonies contributed significantly to British efforts to stem the insurrection, but ensured that they set the conditions for participation in these wars. Colonial governments repeatedly acted to ensure that they determined the extent of their peoples' participation in imperial wars in the military build-up to the First World War.
The assertiveness of the self-governing colonies was recognised in the Colonial Conference of 1907, which implicitly introduced the idea of the Dominion as a self-governing colony by referring to Canada and Australia as Dominions. It also retired the name "Colonial Conference" and mandated that meetings take place regularly to consult Dominions in running the foreign affairs of the empire.
The Colony of New Zealand, which chose not to take part in Australian federation, became the Dominion of New Zealand on 26 September 1907; Newfoundland became a Dominion on the same day. The Union of South Africa was referred to as a Dominion upon its creation in 1910.
First World War and Treaty of Versailles
The initiatives and contributions of British colonies to the British war effort in the First World War were recognised by Britain with the creation of the Imperial War Cabinet in 1917, which gave them a say in the running of the war. Dominion status as self-governing states, as opposed to symbolic titles granted various British colonies, waited until 1919, when the self-governing Dominions signed the Treaty of Versailles independently of the British government and became individual members of the League of Nations. This ended the purely colonial status of the Dominions.
The First World War ended the purely colonial period in the history of the Dominions. Their military contribution to the Allied war effort gave them claim to equal recognition with other small states and a voice in the formation of policy. This claim was recognised within the Empire by the creation of the Imperial War Cabinet in 1917, and within the community of nations by Dominion signatures to the Treaty of Versailles and by separate Dominion representation in the League of Nations. In this way the "self-governing Dominions", as they were called, emerged as junior members of the international community. Their status defied exact analysis by both international and constitutional lawyers, but it was clear that they were no longer regarded simply as colonies of Britain.
Irish Free State
The Irish Free State, set up in 1922 after the Anglo-Irish War, was the third Dominion to appoint a non-UK born, non-aristocratic Governor-General when Timothy Michael Healy, following the tenures of Sir Gordon Drummond in Canada and of Sir Walter Davidson and Sir William Allardyce in Newfoundland, took the position in 1922. Dominion status was never popular in the Irish Free State where people saw it as a face-saving measure for a British government unable to countenance a republic in what had previously been the United Kingdom of Great Britain and Ireland. Successive Irish governments undermined the constitutional links with Britain until they were severed completely in 1949. In 1937 Ireland adopted, almost simultaneously, both a new constitution that included powers for a president of Ireland and a law confirming a role for the king in external relations.
Balfour Declaration of 1926 and Statute of Westminster
The Balfour Declaration of 1926, and the subsequent Statute of Westminster, 1931, restricted Britain's ability to pass or affect laws outside of its own jurisdiction. Significantly, Britain initiated the change to complete sovereignty for the Dominions. The First World War had left Britain saddled with enormous debts, and the Great Depression further reduced Britain's ability to pay for the defence of its empire. Despite popular perceptions of empire, the larger Dominions were reluctant to leave the protection of the then superpower. For example, many Canadians felt that being part of the British Empire was the only thing that had prevented them from being absorbed into the United States.
Until 1931, Newfoundland was referred to as a colony of the United Kingdom, as for example, in the 1927 reference to the Judicial Committee of the Privy Council to delineate the Quebec-Labrador boundary. Full autonomy was granted by the United Kingdom parliament with the Statute of Westminster in December 1931. However, the government of Newfoundland "requested the United Kingdom not to have sections 2 to 6[—]confirming Dominion status[—]apply automatically to it[,] until the Newfoundland Legislature first approved the Statute, approval which the Legislature subsequently never gave". In any event, Newfoundland's letters patent of 1934 suspended self-government and instituted a "Commission of Government", which continued until Newfoundland became a province of Canada in 1949. It is the view of some constitutional lawyers that—although Newfoundland chose not to exercise all of the functions of a Dominion like Canada—its status as a Dominion was "suspended" in 1934, rather than "revoked" or "abolished".
Canada, Australia, New Zealand, the Irish Free State, Newfoundland and South Africa (prior to becoming a republic and leaving the Commonwealth in 1961), with their large populations of European descent, were sometimes collectively referred to as the "White Dominions".
List of Dominions
|Country[‡ 1]|From|To[‡ 2]|Status|
|---|---|---|---|
| | | |Continues as a Commonwealth realm and member of the Commonwealth of Nations.|
| | | |Continues as a Commonwealth realm and member of the Commonwealth of Nations.|
|Newfoundland|1907|1934|In 1934, after a series of financial difficulties (owing in part to Newfoundland's railway debt from the 1890s and its debt from the First World War, both of which were exacerbated by the collapse of fish prices during the Great Depression) and a riot against the elected government, Newfoundland voluntarily relinquished its elected parliament and de jure Dominion status, becoming a dependent territory of the British Empire until 1949. During these 15 years, the dependent territory was considered a de facto Dominion, but was ruled by the Newfoundland Commission of Government, an unelected body of civil servants directly subordinate to the British Government in London. After two referendums in 1948, Newfoundlanders rejected both the continuance of the Commission of Government and independence, voting instead to join the Dominion of Canada as its 10th province. This was achieved under the British North America Act of 1949 (now known as the Newfoundland Act), passed by the UK Parliament at Westminster on 23 March 1949, prior to the London Declaration of 28 April 1949.|
|South Africa|1910|1961|Continued as a Commonwealth realm until it became a republic in 1961 under the Republic of South Africa Constitution Act 1961, passed by the Parliament of South Africa, long title "To constitute the Republic of South Africa and to provide for matters incidental thereto", assented to 24 April 1961 and in operation from 31 May 1961.|
|Irish Free State (1922–37); Éire (1937–49)[‡ 3]|1922|1949|The link with the monarchy ceased with the passage of the Republic of Ireland Act 1948, which came into force on 18 April 1949 and declared that the state was a republic.|
|India|1947|1950|The Union of India (with the addition of Sikkim from 1975) became a federal republic after its constitution came into effect on 26 January 1950.|
|Pakistan|1947|1956|Continued as a Commonwealth realm until 1956, when it became a republic under the name "The Islamic Republic of Pakistan" under its Constitution of 1956.|
|Ceylon|1948|1972|Continued as a Commonwealth realm until 1972, when it became a republic under the name of Sri Lanka.|
- The flags shown are the national flags of each country at the time it was a Dominion.
- There was no single constitutional or legislative change that abolished the status of "dominions". The accession proclamation of 1952 referred to "realms", and the Royal Style and Titles Acts of 1953 changed references to "dominions" in the monarch's titles in the various Dominions to "realms", after which the term dominion generally fell into disuse, and the countries sharing the same monarch as the United Kingdom came to be referred to as realms (with the possible exception of Canada; see also Name of Canada).
- The Irish Free State was renamed Éire in Irish or Ireland in English in 1937. In 1937–1949, the Dominion was referred to as "Eire" by the British government. See also Names of the Irish state.
Four colonies of Australia had enjoyed responsible government since 1856: New South Wales, Victoria, Tasmania and South Australia. Queensland had responsible government soon after its founding in 1859. Because of ongoing financial dependence on Britain, Western Australia became the last Australian colony to attain self-government in 1890. During the 1890s, the colonies voted to unite and in 1901 they were federated under the British Crown as the Commonwealth of Australia by the Commonwealth of Australia Constitution Act. The Constitution of Australia had been drafted in Australia and approved by popular consent. Thus Australia is one of the few countries established by a popular vote. Under the Balfour Declaration of 1926, the federal government was regarded as coequal with (and not subordinate to) the British and other Dominion governments, and this was given formal legal recognition in 1942 (when the Statute of Westminster was adopted retroactively to the commencement of the Second World War in 1939). In 1930, the Australian prime minister, James Scullin, reinforced the right of the overseas Dominions to appoint native-born governors-general, when he advised King George V to appoint Sir Isaac Isaacs as his representative in Australia, against the wishes of the opposition and officials in London. The governments of the States (called colonies before 1901) remained under the Commonwealth but retained links to the UK until the passage of the Australia Act 1986.
The term Dominion is employed in the Constitution Act, 1867 (originally the British North America Act, 1867), and describes the resulting political union. Specifically, the preamble of the act states: "Whereas the Provinces of Canada, Nova Scotia, and New Brunswick have expressed their Desire to be federally united into One Dominion under the Crown of the United Kingdom of Great Britain and Ireland, with a Constitution similar in Principle to that of the United Kingdom ..." Furthermore, Sections 3 and 4 indicate that the provinces "shall form and be One Dominion under the Name of Canada; and on and after that Day those Three Provinces shall form and be One Dominion under that Name accordingly".
According to the Canadian Encyclopedia, (1999), "The word came to be applied to the federal government and Parliament, and under the Constitution Act, 1982, 'Dominion' remains Canada's official title."
The phrase Dominion of Canada was employed as the country's name after 1867, predating the general use of the term Dominion as applied to the other autonomous regions of the British Empire after 1907. The phrase Dominion of Canada does not appear in the 1867 act nor in the Constitution Act, 1982, but does appear in the Constitution Act, 1871, other contemporaneous texts, and subsequent bills. References to the Dominion of Canada in later acts, such as the Statute of Westminster, do not clarify the point because all nouns were formally capitalised in British legislative style. Indeed, in the original text of the Constitution Act, 1867, "One" and "Name" were also capitalised.
Frank Scott theorised that Canada's status as a Dominion ended when Canadian parliament declared war on Germany on 9 September 1939, separately and distinctly from the United Kingdom's declaration of war six days earlier. By the 1950s, the term Dominion of Canada was no longer used by the United Kingdom, which considered Canada a "Realm of the Commonwealth". The government of Louis St. Laurent ended the practice of using Dominion in the statutes of Canada in 1951. This began the phasing out of the use of Dominion, which had been used largely as a synonym of "federal" or "national" such as "Dominion building" for a post office, "Dominion-provincial relations", and so on. The last major change was renaming the national holiday from Dominion Day to Canada Day in 1982. Official bilingualism laws also contributed to the disuse of Dominion, as it has no acceptable equivalent in French.
While the term may be found in older official documents, and the Dominion Carillonneur still tolls at Parliament Hill, it is now hardly used to distinguish the federal government from the provinces or (historically) Canada before and after 1867. Nonetheless, the federal government continues to produce publications and educational materials that specify the currency of these official titles. The Constitution Act of 1982 does not mention and does not remove the title, and therefore a constitutional amendment may be required to change it.
The word Dominion has been used with other agencies, laws, and roles:
- Dominion Carillonneur: official responsible for playing the carillons at the Peace Tower since 1916
- Dominion Day (1867–1982): holiday marking Canada's national day; now called Canada Day
- Dominion Observatory (1905–1970): weather observatory in Ottawa; now used as Office of Energy Efficiency, Energy Branch, Natural Resources Canada
- Dominion Lands Act (1872): federal lands act; repealed in 1918
- Dominion Bureau of Statistics (1918–1971): superseded by Statistics Canada
- Dominion Police (1867–1920): merged to form the Royal Canadian Mounted Police (RCMP)
- Dominion Astrophysical Observatory (1918–present); now part of the National Research Council Herzberg Institute of Astrophysics
- Dominion Radio Astrophysical Observatory (1960–present); now part of the National Research Council Herzberg Institute of Astrophysics
- Dominion of Canada Rifle Association founded in 1868 and incorporated by an Act of Parliament in 1890
Notable Canadian corporations and organizations (not affiliated with government) that have used Dominion as a part of their name have included:
- The Dominion Bank, opened 1871
- The Dominion of Canada General Insurance Company, founded in 1887; bought out by Travelers in 2013
- The Dominion Atlantic Railway, in Nova Scotia, formed by the 1894 merger of two railways; controlled by the Canadian Pacific Railway after 1911, shut down in 1994
- Dominion Stores, a supermarket chain founded in 1927; following a series of acquisitions the last Dominion stores were renamed as Metro stores in 2008
- The Dominion Institute, created in 1997 to promote awareness of Canadian history and national identity
- The Historica-Dominion Institute, its successor following a 2009 merger with the Historica Foundation; renamed Historica Canada in 2013
Ceylon, which, as a Crown colony, was originally promised "fully responsible status within the British Commonwealth of Nations", was formally granted independence as a Dominion in 1948. In 1972 it adopted a republican constitution to become the Free, Sovereign and Independent Republic of Sri Lanka. By a new constitution in 1978, it became the Democratic Socialist Republic of Sri Lanka.
India, Pakistan and Bangladesh
British India acquired a partially representative government in 1909, and the first Parliament was introduced in 1919. Discussions on the further devolution of power and the granting of Dominion status continued through the 1920s, with the Commonwealth of India Bill of 1925, the Simon Commission of 1927–1930, and the Nehru Report of 1928 being often-cited proposals. Further powers were eventually devolved to the locally elected legislatures, following the Round Table Conferences of 1930–32, via the Government of India Act 1935. The Cripps Mission of 1942 proposed the further devolution of powers, within Dominion status, to the political leadership of British India. Cripps's plan was rejected and full independence was sought. Pakistan (including the Muslim-majority East Bengal, which became East Pakistan) seceded from India at the point of Indian independence with the passage of the Indian Independence Act 1947 and the ensuing partition, resulting in two Dominions. For India, Dominion status was transitory until its new republican constitution was drafted and promulgated in 1950. Pakistan remained a Dominion until 1956, when it became an Islamic republic under its 1956 constitution. East Pakistan gained independence from Pakistan through the Liberation War and became Bangladesh in 1971.
Irish Free State / Ireland
The Irish Free State (Ireland from 1937) was a British Dominion between 1922 and 1949. As established by the Irish Free State Constitution Act of the United Kingdom Parliament on 6 December 1922 the new state—which had Dominion status in the likeness of that enjoyed by Canada within the British Commonwealth of Nations—comprised the whole of Ireland. However, provision was made in the Act for the Parliament of Northern Ireland to opt out of inclusion in the Irish Free State, which—as had been widely expected at the time—it duly did one day after the creation of the new state, on 7 December 1922.
Following a plebiscite of the people of the Free State held on 1 July 1937, a new constitution came into force on 29 December of that year, establishing a successor state with the name of "Ireland" which ceased to participate in Commonwealth conferences and events. Nevertheless, the United Kingdom and other member states of the Commonwealth continued to regard Ireland as a Dominion owing to the unusual role accorded to the British Monarch under the Irish External Relations Act of 1936. Ultimately, however, Ireland's Oireachtas passed the Republic of Ireland Act 1948, which came into force on 18 April 1949 and unequivocally ended Ireland's links with the British Monarch and the Commonwealth.
The colony of Newfoundland enjoyed responsible government from 1855 to 1934. It was among the colonies declared Dominions in 1907. Following the recommendations of a Royal Commission, parliamentary government was suspended in 1934 due to severe financial difficulties resulting from the depression and a series of riots against the Dominion government in 1932. In 1949, it joined Canada and the legislature was restored.
The New Zealand Constitution Act 1852 gave New Zealand its own Parliament (General Assembly) and home rule in 1852. In 1907 New Zealand was proclaimed the Dominion of New Zealand. New Zealand, Canada, and Newfoundland used the word Dominion in the official title of the nation, whereas Australia used Commonwealth of Australia and South Africa Union of South Africa. New Zealand adopted the Statute of Westminster in 1947 and in the same year legislation passed in London gave New Zealand full powers to amend its own constitution. In 1986, the New Zealand parliament passed the Constitution Act 1986, which repealed the Constitution Act of 1852 and the last constitutional links with the United Kingdom, formally ending its Dominion status.
The Union of South Africa was formed in 1910 from the four self-governing colonies of the Cape Colony, Natal, the Transvaal, and the Orange River Colony (the last two were former Boer republics). The South Africa Act 1909 provided for a Parliament consisting of a Senate and a House of Assembly. The provinces had their own legislatures. In 1961, the Union of South Africa adopted a new constitution, became a republic, left the Commonwealth (and re-joined following end of Apartheid rule in 1994), and became the present-day Republic of South Africa.
Southern Rhodesia (renamed Zimbabwe in 1980) was a special case in the British Empire. Although it was never a Dominion de jure, it was treated as a Dominion in many respects, and came to be regarded as a de facto Dominion. Southern Rhodesia was formed in 1923 out of territories of the British South Africa Company and established as a self-governing colony with substantial autonomy on the model of the Dominions. The imperial authorities in London retained direct powers over foreign affairs, constitutional alterations, native administration and bills regarding mining revenues, railways and the governor's salary.
Southern Rhodesia was not one of the territories mentioned in the 1931 Statute of Westminster, although relations with Southern Rhodesia were administered in London through the Dominions Office, not the Colonial Office. When the Dominions were first treated as foreign countries by London for the purposes of diplomatic immunity in 1952, Southern Rhodesia was included in the list of territories concerned. This semi-Dominion status continued in Southern Rhodesia between 1953 and 1963, when it joined Northern Rhodesia and Nyasaland in the Central African Federation, with the latter two territories continuing to be British protectorates. When Northern Rhodesia was given independence in 1964 it adopted the new name of Zambia, prompting Southern Rhodesia to shorten its name to Rhodesia, but Britain did not recognise this latter change.
Rhodesia unilaterally declared independence from Britain in 1965 as a result of the British government's insistence on no independence before majority rule (NIBMAR). London regarded this declaration as illegal, and applied sanctions and expelled Rhodesia from the sterling area. Rhodesia continued with its Dominion-style constitution until 1970, and continued to issue British passports to its citizens. The Rhodesian government continued to profess its loyalty to the Sovereign, despite being in a state of rebellion against Her Majesty's Government in London, until 1970, when it adopted a republican constitution following a referendum the previous year. This endured until the state's reconstitution as Zimbabwe Rhodesia in 1979 under the terms of the Internal Settlement; this lasted until the Lancaster House Agreement of December 1979, which put it under interim British rule while fresh elections were held. The country achieved independence deemed legal by the international community in April 1980, when Britain granted independence under the name Zimbabwe.
Several of Britain's newly independent colonies were dominions during the period from the late 1950s to the early 1990s. Their gradualist constitutions, featuring a Westminster-style parliamentary government and the British monarch as head of state, were typically replaced by republican constitutions in less than a generation:
After World War II, Britain attempted to repeat the Dominion model in decolonizing the Caribbean. ... Though several colonies, such as Guyana and Trinidad and Tobago, maintained their formal allegiance to the British monarch, they soon revised their status to become republics. Britain also attempted to establish a Dominion model in decolonizing Africa, but it, too, was unsuccessful. ... Ghana, the first former colony declared a Dominion in 1957, soon demanded recognition as a republic. Other African nations followed a similar pattern throughout the 1960s: Nigeria, Tanganyika, Uganda, Kenya, and Malawi. In fact, only Gambia, Sierra Leone, and Mauritius retained their Dominion status for more than three years.
In Africa, the Dominion of Ghana (formerly the Gold Coast) existed from 1957 until 1960, when it became the Republic of Ghana. The Federation of Nigeria was established as a dominion in 1960, but became the Federal Republic of Nigeria in 1963. The Dominion of Uganda existed from 1962 to 1963. Kenya was a dominion upon independence in 1963, but a republic was declared in 1964. Tanganyika was a dominion from 1961 to 1962, after which it became a republic and then merged with the former British protectorate of Zanzibar to become Tanzania. The Dominion of Gambia existed from 1965 until 1970, when it was renamed the Republic of Gambia. Sierra Leone was a dominion from 1961 to 1971. Mauritius was a dominion from 1968 to 1992, when it became a republic.
Initially, the Foreign Office of the United Kingdom conducted the foreign relations of the Dominions. A Dominions section was created within the Colonial Office for this purpose in 1907. Canada set up its own Department of External Affairs in June 1909, but diplomatic relations with other governments continued to operate through the governors-general, Dominion High Commissioners in London (first appointed by Canada in 1880; Australia followed only in 1910), and British legations abroad. Britain deemed her declaration of war against Germany in August 1914 to extend to all territories of the Empire without the need for consultation, occasioning some displeasure in Canadian official circles and contributing to a brief anti-British insurrection by Afrikaner militants in South Africa later that year. A Canadian War Mission in Washington, D.C., dealt with supply matters from February 1918 to March 1921.
Although the Dominions had had no formal voice in declaring war, each became a separate signatory of the Treaty of Versailles of June 1919, which had been negotiated by a British-led united Empire delegation. In September 1922, Dominion reluctance to support British military action against Turkey influenced Britain's decision to seek a compromise settlement. Diplomatic autonomy soon followed, with the U.S.–Canadian Halibut Treaty (March 1923) marking the first time an international agreement had been entirely negotiated and concluded independently by a Dominion. The Dominions Section of the Colonial Office was upgraded in June 1926 to a separate Dominions Office; initially, however, that office was held by the same person who held the office of Secretary of State for the Colonies.
The principle of Dominion equality with Britain and independence in foreign relations was formally recognised by the Balfour Declaration, adopted at the Imperial Conference of November 1926. Canada's first permanent diplomatic mission to a foreign country opened in Washington, D.C., in 1927. In 1928, Canada obtained the appointment of a British high commissioner in Ottawa, separating the administrative and diplomatic functions of the governor-general and ending the latter's anomalous role as the representative of the British government in relations between the two countries. The Dominions Office was given a separate secretary of state in June 1930, though this was entirely for domestic political reasons given the need to relieve the burden on one ill minister whilst moving another away from unemployment policy. The Balfour Declaration was enshrined in the Statute of Westminster 1931 when it was adopted by the British Parliament and subsequently ratified by the Dominion legislatures.
Britain's declaration of hostilities against Nazi Germany on 3 September 1939 tested the issue. Most of the Dominions took the view that the declaration did not commit them. Ireland, which had negotiated the removal of British forces from its territory the year before, chose to remain neutral. At the other extreme, the conservative Australian government of the day, led by Robert Menzies, held that, since Australia had not adopted the Statute of Westminster, it was legally bound by the UK declaration of war—which had also been the view at the outbreak of the First World War—though this was contentious within Australia. Between these two extremes, New Zealand declared that as Britain was or would be at war, so it was too; this was, however, a matter of political choice rather than legal necessity. Canada issued its own declaration of war after a recall of Parliament, as did South Africa after a delay of several days (South Africa on 6 September, Canada on 10 September). There were soon signs of growing independence from the other Dominions: Australia opened a diplomatic mission in the United States in 1940, as did New Zealand in 1941, and Canada's mission in Washington gained embassy status in 1943.
From Dominions to Commonwealth realms
Initially, the Dominions conducted their own trade policy, some limited foreign relations, and had autonomous armed forces, although the British government claimed and exercised the exclusive power to declare war. After the passage of the Statute of Westminster, however, the language of dependency on the Crown of the United Kingdom ceased: the Crown itself was no longer referred to as the Crown of any place in particular but simply as "the Crown". Arthur Berriedale Keith, in Speeches and Documents on the British Dominions 1918–1931, stated that "the Dominions are sovereign international States in the sense that the King in respect of each of His Dominions (Newfoundland excepted) is such a State in the eyes of international law". Thereafter, the countries previously referred to as "Dominions" became Commonwealth realms, in which the sovereign reigns not as the British monarch but as monarch of each nation in its own right, and which are considered equal to the UK and to one another.
The Second World War, which fatally undermined Britain's already weakened commercial and financial leadership, further loosened the political ties between Britain and the Dominions. Australian Prime Minister John Curtin's unprecedented action (February 1942) in successfully countermanding an order from British Prime Minister Winston Churchill that Australian troops be diverted to defend British-held Burma (the 7th Division was then en route from the Middle East to Australia to defend against an expected Japanese invasion) demonstrated that Dominion governments might no longer subordinate their own national interests to British strategic perspectives. To ensure that Australia had full legal power to act independently, particularly in relation to foreign affairs, defence industry and military operations, and to validate its past independent action in these areas, Australia formally adopted the Statute of Westminster in October 1942 and backdated the adoption to the start of the war in September 1939.
The Dominions Office merged with the India Office to form the Commonwealth Relations Office upon the independence of India and Pakistan in August 1947. The last country officially made a Dominion was Ceylon, in 1948, and the term "Dominion" fell out of general use thereafter. Ireland ceased to be a member of the Commonwealth on 18 April 1949, upon the coming into force of the Republic of Ireland Act 1948, formally ending the former dependencies' common constitutional connection to the British Crown. India also adopted a republican constitution in January 1950 but remained within the Commonwealth, which agreed to accept the British monarch as head of that association of independent states. Unlike many dependencies that became republics, Ireland never re-joined the Commonwealth.
The independence of the separate realms was emphasised after the accession of Queen Elizabeth II in 1952, when she was proclaimed not just as Queen of the United Kingdom, but also Queen of Canada, Queen of Australia, Queen of New Zealand, Queen of South Africa, and of all her other "realms and territories" etc. This also reflected the change from Dominion to realm; in the proclamation of Queen Elizabeth II's new titles in 1953, the phrase "of her other Realms and Territories" replaced "Dominion" with another mediaeval French word with the same connotation, "realm" (from royaume). Thus, recently, when referring to one of those sixteen countries within the Commonwealth of Nations that share the same monarch, the phrase Commonwealth realm has come into common usage instead of Dominion to differentiate the Commonwealth nations that continue to share the monarch as head of state (Australia, Canada, New Zealand, Jamaica, etc.) from those that do not (India, Pakistan, South Africa, etc.). The term "Dominion" is still found in the Canadian constitution where it appears numerous times, but it is largely a vestige of the past, as the Canadian government does not actively use it (see Canada section). The term "realm" does not appear in the Canadian constitution.
The generic language of Dominion did not cease in relation to the Sovereign. It was, and is, used to describe territories in which the monarch exercises sovereignty.
Many distinctive characteristics that once pertained only to Dominions are now shared by other states in the Commonwealth, whether republics, independent realms, associated states or territories. The practice of appointing a High Commissioner instead of a diplomatic representative such as an ambassador for communication between the government of a Dominion and the British government in London continues in respect of Commonwealth realms and republics as sovereign states.
- "dominion". Archived 29 September 2007 at the Wayback Machine. Merriam Webster's Dictionary (based on Collegiate vol., 11th ed.), 2006. Springfield, MA: Merriam-Webster, Inc.
- Hillmer, Norman (2001). "Commonwealth". Canadian Encyclopedia. Toronto.
...the Dominions (a term applied to Canada in 1867 and used from 1907 to 1948 to describe the empire's other self-governing members)
- Forsey, Eugene A. (7 November 2019). "Dominion of Canada". Canadian Encyclopedia. Retrieved 12 February 2021.
The term Dominion — that which is mastered or ruled — was used by the British to describe their colonies or territorial possessions.
- "Parliamentary questions, Hansard, 5 November 1934". hansard.millbanksystems.com. 5 November 1924. Archived from the original on 13 July 2009. Retrieved 11 June 2010.
- Heard, Andrew (1990). "Canadian independence". Retrieved 12 February 2021.
When the Dominion of Canada was created in 1867 it was granted powers of self-government to deal with all internal matters, but Britain still retained overall legislative supremacy.
- Roberts, J. M., The Penguin History of the World (London: Penguin Books, 1995, ISBN 0-14-015495-7), p. 777
- Cyprus (Annexation) Order in Council, 1914, dated 5 November 1914.
- Order quoted in The American Journal of International Law, "Annexation of Cyprus by Great Britain"
- "Dominion". Encyclopedia Britannica. 7 December 2011. Retrieved 13 February 2021.
Although there was no formal definition of dominion status, a pronouncement by the Imperial Conference of 1926 described Great Britain and the dominions as “autonomous communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown and freely associated as members of the British Commonwealth of Nations.”
- Mohr, Thomas (2013). "The Statute of Westminster, 1931: An Irish Perspective" (PDF). Law and History Review. 31 (4): 749–791: fn.25. doi:10.1017/S073824801300045X. hdl:10197/7515. ISSN 0738-2480. Archived from the original (PDF) on 9 October 2016. Retrieved 6 October 2016.
- League of Nations (1924). "The Covenant of the League of Nations", Article 1. The Avalon Project at Yale Law School. Archived from the original on 24 July 2011. Retrieved 20 April 2009.
- James Crawford, The Creation of States in International Law (Oxford: Oxford University Press, 1979, ISBN 978-0-19-922842-3), p. 243
- "Commonwealth association of states". Encyclopedia Britanica. 11 August 2020. Retrieved 13 February 2021.
- "Dominion of Canada". Canadian Encyclopedia. Historica Foundation of Canada, 2006. Accessed 2 January 2020. "Dominion of Canada is the country's formal title, though it is rarely used."
- National Health Service Act 2006 (c. 41), sch. 22
- Link to the Australian Constitutions Act 1850 on the website of the National Archives of Australia: www.foundingdocs.gov.au Archived 3 December 2007 at the Wayback Machine
- Link to the New South Wales Constitution Act 1855, on the Web site of the National Archives of Australia: www.foundingdocs.gov.au Archived 3 December 2007 at the Wayback Machine
- Link to the Victoria Constitution Act 1855, on the Web site of the National Archives of Australia: www.foundingdocs.gov.au Archived 3 December 2007 at the Wayback Machine
- Link to the Constitution Act 1855 (SA), on the Web site of the National Archives of Australia: www.foundingdocs.gov.au Archived 3 December 2007 at the Wayback Machine
- Link to the Constitution Act 185 (Tasmania), on the Web site of the National Archives of Australia: www.foundingdocs.gov.au Archived 3 December 2007 at the Wayback Machine
- Link to the Order in Council of 6 June 1859, which established the Colony of Queensland, on the Web site of the National Archives of Australia. Archived from the original on 1 January 2009. Retrieved 1 February 2009.
- The "Northern Territory of New South Wales" was physically separated from the main part of NSW. In 1863, the bulk of it was transferred to South Australia, except for a small area that became part of Queensland. See: Letters Patent annexing the Northern Territory to South Australia, 1863 Archived 1 June 2011 at the Wayback Machine. In 1911, the Commonwealth of Australia agreed to assume responsibility for administration of the Northern Territory, which was regarded by the government of South Australia as a financial burden.www.foundingdocs.gov.au Archived 31 August 2006 at the Wayback Machine. The NT did not receive responsible government until 1978.
- Link to the Constitution Act 1890, which established self-government in Western Australia: www.foundingdocs.gov.au[permanent dead link]
- Alan Rayburn (2001). Naming Canada: Stories about Canadian Place Names. University of Toronto Press. pp. 17–21. ISBN 978-0-8020-8293-0.
- "The London Conference December 1866 – March 1867". www.collectionscanada.gc.ca. Archived from the original on 22 November 2006. Retrieved 11 June 2010.
- Andrew Heard (1990). "Canadian Independence". Archived from the original on 6 May 2009. Retrieved 5 February 2008.
- Forsey, Eugene (1990). "How Canadians Govern Themselves". Archived from the original on 11 February 2008. Retrieved 14 October 2007.
- Buckley, F. H. (2014). The Once and Future King: The Rise of Crown Government in America. Encounter Books. Archived from the original on 15 May 2014. Retrieved 15 May 2014.
- F. R. Scott (January 1944). "The End of Dominion Status". The American Journal of International Law. American Society of International Law. 38 (1): 34–49. doi:10.2307/2192530. JSTOR 2192530.
- Europe Since 1914: Encyclopedia of the Age of War and Reconstruction; John Merriman and Jay Winter; 2006; see the British Empire entry which lists the "White Dominions" above except Newfoundland
- J. E. Hodgetts. 2004. "Dominion". Oxford Companion to Canadian History, Gerald Hallowell, ed. (ISBN 0-19-541559-0), at Hallowell, Gerald (2004). The Oxford Companion to Canadian History. ISBN 9780195415599. Archived from the original on 17 March 2015. Retrieved 1 March 2015. - p. 183: "Ironically, defenders of the title dominion who see signs of creeping republicanism in such changes can take comfort in the knowledge that the Constitution Act, 1982, retains the title and requires a constitutional amendment to alter it."
- Forsey, Eugene A., in Marsh, James H., ed. 1988. "Dominion of Canada" The Canadian Encyclopedia. Hurtig Publishers: Toronto.
- "National Flag of Canada Day: How Did You Do?". Department of Canadian Heritage. Archived from the original on 13 July 2007. Retrieved 7 February 2008.
The issue of our country's legal title was one of the few points on which our constitution is not entirely homemade. The Fathers of Confederation wanted to call the country "the Kingdom of Canada". However the British government was afraid of offending the Americans so it insisted on the Fathers finding another title. The term "Dominion" was drawn from Psalm 72. In the realms of political terminology, the term dominion can be directly attributed to the Fathers of Confederation and it is one of the very few, distinctively Canadian contributions in this area. It remains our country's official title.
- Sotomayor, William Fernando. "Newfoundland Act". www.solon.org. Archived from the original on 24 February 2011.
- s:Republic of South Africa Constitution Act, 1961
- "Archives". Republic of Rumi. Archived from the original on 14 June 2013. Retrieved 12 July 2013.
- B. Hunter (ed.), The Statesman's Year Book 1996–1997, Macmillan Press Ltd, pp. 130–156
- Order in Council of the UK Privy Council, 6 June 1859, establishing responsible government in Queensland. See Australian Government's "Documenting a Democracy" website at this webpage: www.foundingdocs.gov.au Archived 22 July 2008 at the Wayback Machine
- Constitution Act 1890 (UK), which came into effect as the Constitution of Western Australia when proclaimed in WA on 21 October 1890, and establishing responsible government in WA from that date; Australian Government's "Documenting a Democracy" website: www.foundingdocs.gov.au Archived 22 July 2008 at the Wayback Machine
- Smith, David (2005). Head of state : the governor-general, the monarchy, the republic and the dismissal (Hardcover ed.). Paddington NSW: Macleay Press. p. 18. ISBN 978-1876492151.
- Canadian Encyclopedia, (1999) p 680. online
- Scott, Frank R. (January 1944). "The End of Dominion Status". The American Journal of International Law. American Society of International Law. 38 (1): 34–49. doi:10.2307/2192530. JSTOR 2192530. Archived from the original on 28 July 2013.
- Morra, Irene (2016). The New Elizabethan Age: Culture, Society and National Identity after World War II. I.B.Tauris. p. 49. ISBN 978-0-85772-867-8.
- "November 8, 1951 (21st Parliament, 5th Session)". Canadian Hansard Dataset. Retrieved 9 April 2019.
- Bowden, J.W.J. (2015). "'Dominion': A Lament". The Dorchester Review. 5 (2): 58–64.
- "The Prince of Wales 2001 Royal Visit: April 25 - April 30; Test Your Royal Skills". Department of Canadian Heritage. 2001. Archived from the original on 11 July 2006. Retrieved 7 February 2008.
As dictated by the British North America Act, 1867, the title is Dominion of Canada. The term is a uniquely Canadian one, implying independence and not colonial status, and was developed as a tribute to the Monarchical principle at the time of Confederation.
"How Canadians Govern Themselves" (PDF). PDF. Archived (PDF) from the original on 25 March 2009. Retrieved 6 February 2008.
Forsey, Eugene (2005). How Canadians Govern Themselves (6th ed.). Ottawa: Her Majesty the Queen in Right of Canada. ISBN 0-662-39689-8.
The two small points on which our constitution is not entirely homemade are, first, the legal title of our country, "Dominion," and, second, the provisions for breaking a deadlock between the Senate and the House of Commons.
- Kulke, Hermann; Rothermund, Dietmar (2004), A History of India (Fourth ed.), Routledge, pp. 279–281, ISBN 9780415329194
- "The Commonwealth of India Bill 1925". Constituent assembly debates & India. Retrieved 30 May 2018.
- Tripathi, Suresh Mani (2016). Fundamental Rights and Directive Principles in India. Anchor Academic Publishing. pp. 39–40. ISBN 9783960670032. Retrieved 30 May 2018.
- William Roger Louis (2006). Ends of British Imperialism: The Scramble for Empire, Suez, and Decolonization. I.B.Tauris. pp. 387–400. ISBN 9781845113476.
- Indian Independence Act 1947, "An Act to make provision for the setting up in India of two independent Dominions, to substitute other provisions for certain provisions of the Government of India Act 1935, which apply outside those Dominions, and to provide for other matters consequential on or connected with the setting up of those Dominions" passed by the U.K. parliament 18 July 1947."Indian Independence Act 1947". Archived from the original on 30 June 2012. Retrieved 17 July 2012.
- The Statesman's Year Book, p. 635
- The Statesman's Year Book, p. 1002
- On 7 December 1922 (the day after the establishment of the Irish Free State) the Parliament resolved to make the following address to the King so as to opt out of the Irish Free State: "MOST GRACIOUS SOVEREIGN, We, your Majesty's most dutiful and loyal subjects, the Senators and Commons of Northern Ireland in Parliament assembled, having learnt of the passing of the Irish Free State Constitution Act, 1922, being the Act of Parliament for the ratification of the Articles of Agreement for a Treaty between Great Britain and Ireland, do, by this humble Address, pray your Majesty that the powers of the Parliament and Government of the Irish Free State shall no longer extend to Northern Ireland". Source: Northern Ireland Parliamentary Report, 7 December 1922 Archived 19 March 2009 at the Wayback Machine and Anglo-Irish Treaty, sections 11, 12.
- The Statesman's Year Book, p. 302
- The Statesman's Year Book, p. 303
- The Statesman's Year Book
- "History, Constitutional - The Legislative Authority of the New Zealand Parliament - 1966 Encyclopaedia of New Zealand". www.teara.govt.nz. 22 April 2009. Archived from the original on 28 April 2009. Retrieved 11 June 2010.
- "Dominion status". NZHistory. Archived from the original on 3 June 2010. Retrieved 11 June 2010.
- Prof. Dr. Axel Tschentscher, LL. M. "ICL - New Zealand - Constitution Act 1986". servat.unibe.ch. Archived from the original on 13 February 2010. Retrieved 11 June 2010.
- The Statesman's Year Book, p. 1156
- Wikisource: South Africa Act 1909
- Roiron, Virginie (2013). "Challenged Commonwealth? The Decolonisation of Rhodesia" (PDF). Cercles. Institut d'Études Politiques de Strasbourg. No. 28: 171.
- Rowland, J. Reid. "Constitutional History of Rhodesia: An Outline": 245–251. Appendix to Berlyn, Phillippa (April 1978). The Quiet Man: A Biography of the Hon. Ian Douglas Smith. Salisbury: M. O. Collins. pp. 240–256. OCLC 4282978.
- Wood, J. R. T. (April 2008). A matter of weeks rather than months: The Impasse between Harold Wilson and Ian Smith: Sanctions, Aborted Settlements and War 1965–1969. Victoria, British Columbia: Trafford Publishing. p. 5. ISBN 978-1-42514-807-2.
- Harris, P. B. (September 1969). "The Rhodesian Referendum: June 20th, 1969". Parliamentary Affairs. Oxford University Press. 23 (1969sep): 72–80. doi:10.1093/parlij/23.1969sep.72.
- Gowlland-Debbas, Vera (1990). Collective Responses to Illegal Acts in International Law: United Nations action in the question of Southern Rhodesia (First ed.). Leiden and New York: Martinus Nijhoff Publishers. p. 73. ISBN 0-7923-0811-5.
- Brandon Jernigan, "British Empire" in M. Juang & Noelle Morrissette, eds., Africa and the Americas: Culture, Politics, and History (ABC-CLIO, 2008) p. 204.
- "For the first three years of its independence, Nigeria was a dominion. As a result, its head of state was Elizabeth Windsor II ..." Hill, J.N.C. (2012). Nigeria Since Independence: Forever Fragile?. London: Palgrave Macmillan. p. 146, note 22. ISBN 978-1-349-33471-1.
- Da Graça, John V. (2000). Heads of State and Government (2nd ed.). London and Oxford: Macmillan. p. 937. ISBN 978-0-333-78615-4.
- Mr. K.N. Gichoya, bringing a motion on 11 June 1964 in the Kenyan House of Representatives that Kenya be made a Republic: "I should make it clear to those who must know our status today, we are actually a dominion of the United Kingdom in the same way as ... Canada, Australia and New Zealand." Kenya National Assembly Official Record (Hansard), 1st Parliament, 2nd Session, Vol. 3 (Part 1), Column 135.
- Da Graça, John V. (2000). Heads of State and Government (2nd ed.). London and Oxford: Macmillan. p. 917. ISBN 978-0-333-78615-4.
- Engel, Ulf et al., eds. (2000). Tanzania Revisited: Political Stability, Aid Dependency, and Development Constraints. Hamburg: Institute of African Affairs. p. 115. ISBN 3-928049-69-0.
- Da Graça, John V. (2000). Heads of State and Government (2nd ed.). London and Oxford: Macmillan. p. 355. ISBN 978-0-333-78615-4.
- "In 1971 Siaka Stevens embarked on the process to transform Sierra Leone from a Dominion to a Republic." Berewa, Solomon E. (2011). A New Perspective on Governance, Leadership, Conflict and Nation Building in Sierra Leone. Bloomington, Indiana: AuthorHouse. p. 66. ISBN 978-1-4678-8886-8.
- "Prime Minister Jugnauth proposed to amend the constitution to change Mauritius from a dominion to a republic. It was passed unanimously and on 12 March 1992, Mauritius acceded to a republic state." NgCheong-Lum, Roseline (2009). CultureShock! Mauritius: A Survival Guide to Customs and Etiquette. Tarrytown, New York: Marshall Cavendish. p. 37. ISBN 978-07614-5668-1.
- Da Graça, John V. (2000). Heads of State and Government (2nd ed.). London and Oxford: Macmillan. p. 565. ISBN 978-0-333-78615-4.
- Statute of Westminster Adoption Act 1942 (Act no. 56 of 1942). The long title for the Act was "To remove Doubts as to the Validity of certain Commonwealth Legislation, to obviate Delays occurring in its Passage, and to effect certain related purposes, by adopting certain Sections of the Statute of Westminster, 1931, as from the Commencement of the War between His Majesty the King and Germany." Link: www.foundingdocs.gov.au Archived 16 July 2005 at the Wayback Machine.
- Buckley, F. H., The Once and Future King: The Rise of Crown Government in America, Encounter Books, 2014.
- Choudry, Sujit. 2001 (?). "Constitution Acts" (based on looseleaf by Hogg, Peter W.). Constitutional Keywords. University of Alberta, Centre for Constitutional Studies: Edmonton.
- Holland, R. F., Britain and the Commonwealth Alliance 1918-1939, MacMillan, 1981.
- Forsey, Eugene A. 2005. How Canadians Govern Themselves, 6th ed. (ISBN 0-662-39689-8) Canada: Ottawa.
- Hallowell, Gerald, ed. 2004. The Oxford Companion to Canadian History. Oxford University Press: Toronto; p. 183-4 (ISBN 0-19-541559-0).
- Marsh, James H., ed. 1988. "Dominion of Canada" et al. The Canadian Encyclopedia. Hurtig Publishers: Toronto.
- Martin, Robert. 1993 (?). 1993 Eugene Forsey Memorial Lecture: A Lament for British North America. The Machray Review. Prayer Book Society of Canada. A summative piece about nomenclature and pertinent history with abundant references.
- Rayburn, Alan. 2001. Naming Canada: stories about Canadian place names, 2nd ed. (ISBN 0-8020-8293-9) University of Toronto Press: Toronto. | https://en.m.wikipedia.org/wiki/Dominion | 21 |
61 | Silver standard (銀本位制)
The silver standard is a system in which silver forms the basis of a country's monetary system. The base, or standard, currency consists of silver coins, which are subject to free coinage and free melting and may circulate without limit.
Under this system, the country's currency unit is defined as a fixed quantity of silver, and commodity prices are expressed in terms of silver.
Historically, there are few examples of a genuine silver standard in which silver was the only legal standard currency. In the Edo period, Japan had the Sanka seido (Tokugawa coinage), under which Koban (oval gold coins), Chogin (silver coins), and Senka (small coins) were all effectively passable without limit as standard coins; in practice, however, gold coins circulated mainly in eastern Japan while silver coins circulated in western Japan. A system in which both silver and gold coins serve as standard currency is called bimetallism.
For various reasons, such as difficulty in securing both gold and silver metal, bimetallism often became a mere formality in which only silver coins actually circulated, so the system frequently became a silver standard in effect. In the nineteenth century, for example, many European countries adopted bimetallism, but a wide gap opened between the legal gold–silver exchange ratio and the market ratio as silver production increased and silver's market value declined. In such cases the system gradually eroded into a silver standard, because it was advantageous to spend silver coins and hoard gold coins (Gresham's law).
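To make the Gresham's-law mechanism concrete, here is a minimal Python sketch that compares a legal mint ratio with a market gold-to-silver ratio and reports which metal it pays to spend and which to hoard. The function name and the ratios (15.5 and 20) are illustrative assumptions for this sketch, not historical figures taken from the text above.

```python
def gresham(legal_ratio, market_ratio):
    """Compare the legal gold:silver ratio with the market ratio.

    legal_ratio  -- ounces of silver legally equal to one ounce of gold
    market_ratio -- ounces of silver actually trading for one ounce of gold
    """
    if market_ratio > legal_ratio:
        # Gold buys more silver in the market than the law recognises:
        # gold is undervalued as coin, so it is hoarded or exported,
        # while the overvalued silver is spent and dominates circulation.
        return "spend silver, hoard gold (de facto silver standard)"
    elif market_ratio < legal_ratio:
        return "spend gold, hoard silver (de facto gold standard)"
    return "both metals circulate (ratios coincide)"

# Illustrative figures only: legal ratio 15.5:1, market ratio 20:1
print(gresham(15.5, 20.0))  # -> spend silver, hoard gold (de facto silver standard)
```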
Because the market value of silver fluctuated significantly during this period and tended to decline sharply, and because the United Kingdom, which led the world economy at the time, had already shifted to the gold standard, countries on the silver standard were seriously affected, and most of them moved to the gold standard by the end of the nineteenth century.
In Japan, the New Currency Regulation was established in June 1871 and the gold standard was formally adopted. In East Asian markets at that time, however, foreign payments were generally made in silver coin; therefore a 1-yen silver coin (weighing 416 grains) and trade silver coins weighing 420 grains, equivalent to the German silver coin, were issued and used as a means of foreign payment in trade.
In 1878, 1-yen silver coins were approved for domestic circulation, so the system became virtually bimetallic. Gold coins scarcely circulated, however, because they flowed abroad and the government issued huge sums of inconvertible paper currency. In 1885, after the Matsukata deflation, conversion into silver of the first Bank of Japan notes (100-yen, 10-yen and 1-yen silver-convertible notes bearing a picture of Daikoku, the god of wealth) began. The silver standard remained in effect until the gold standard was formally adopted in 1897.
In China, the silver standard continued after the Xinhai Revolution. However, financial-market turmoil caused by the Great Depression led to an outflow of silver coins, and the silver standard was abandoned in 1935 (with the abandonment of the old "tael", its replacement by the "yuan", and the adoption of a legal-tender currency under a managed currency system). After that, few countries remained on the silver standard.
19 | How do non-polar covalent bonds arise
Covalent bonding explained in simple terms: example, difference, electronegativity
Atoms can reach a more energetically favorable position through bonds. In addition to the ionic bond and the metallic bond, there is also the covalent bond.
A particularly favorable one is the noble gas configuration with 2 (helium) or 8 valence electrons. There are several ways to achieve the electron octet.
Covalent bonds or electron pair bonds come about when atoms share (at least) one electron pair, whereby both atoms reach the octet.
Example of a covalent bond
As an example, consider elemental fluorine (F2):
Each of the two fluorine atoms has 7 valence electrons. It is energetically worthwhile for the two atoms to share a pair of electrons: the result is a molecule in which each atom attains an octet of electrons.
Two fluorine atoms share a pair of electrons, so a covalent bond arises. A covalent bond does not have to be a single bond, however: atoms can also share double and triple bonds, as is the case in elemental nitrogen (N2).
Difference to the ion compound
A covalent bond can be distinguished from an ionic bond in that the electronegativity difference of the atoms involved is not particularly large. In the literature, a value of less than 1.7 is usually given for covalent bonds; above this limit, the compound is considered ionic.
Electronegativity and covalent bonds
Covalent bonds can be subdivided further. The criterion for this subdivision is the polarity, which describes the distribution of the shared electrons between the bonding partners.
The electrons can be distributed either equally (symmetrically) or unevenly (asymmetrically) between the bonding partners.
In the case of an uneven division of electrons, one can also determine which bonding partner attracts the electrons more strongly. To assess the bond polarity, one has to consider the electronegativities of the individual bonding partners.
Electronegativity describes the ability of an atom to attract electrons in a bond. The higher the electronegativity of an atom, the more strongly it will attract the shared electrons in a bond.
Non-polar and polar covalent bonds
If both bonding partners in a covalent bond have the same electronegativity, they attract the shared electrons equally. This results in a non-polar covalent bond. Non-polar means that the charge distribution within the molecule is balanced: no pole forms at which the electron density is higher than expected. Non-polar covalent bonds always occur when only atoms of one element are involved in a bond, as is the case in a fluorine or oxygen molecule.
The same principle can be used to define polar covalent bonds. In these, the electronegativity difference is not zero but still less than 1.7, since otherwise the compound would be ionic. An example of a polar covalent bond is HBr (hydrogen bromide): bromine, with an electronegativity of 2.8, is more electronegative than hydrogen, with 2.1.
This fact is indicated with a notation that was shown as an image in the original, with a "plus" sign above the hydrogen atom and a "minus" sign above the bromine atom. The "plus" indicates that the hydrogen is less electronegative (more electropositive) than the bromine. Alternatively, the polarity can be indicated by a small delta followed by a "plus" above the hydrogen atom and a delta followed by a "minus" above the bromine atom.
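As a quick illustration of the classification rules described above, here is a minimal Python sketch that labels a bond from the electronegativity difference, using the thresholds given in this text (a difference of zero for non-polar, anything below 1.7 for polar covalent, 1.7 and above for ionic). The function name and the numeric electronegativity values for F and Na are illustrative assumptions; in practice, very small non-zero differences are often also treated as essentially non-polar.

```python
def classify_bond(en_a, en_b):
    """Classify a bond from the electronegativity difference (thresholds from the text)."""
    diff = abs(en_a - en_b)
    if diff == 0:
        return "non-polar covalent"
    elif diff < 1.7:
        return "polar covalent"
    else:
        return "ionic"

# Examples from the text: F-F (identical atoms) and H-Br (H = 2.1, Br = 2.8)
print(classify_bond(4.0, 4.0))  # F-F  -> non-polar covalent
print(classify_bond(2.1, 2.8))  # H-Br -> polar covalent (difference 0.7)
print(classify_bond(0.9, 4.0))  # Na-F -> ionic (difference 3.1)
```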
| https://helios-domain6.xyz/?post=2813 | 21
276 | Monetary Policy Worksheet
Posted in Worksheet, by Kimberly R. Foreman
Worksheet: monetary policy cause and effect. If the Fed wants to increase the money supply, determine the use of the three Fed tools (reserve requirement, discount rate, and open market operations) and explain how the increase in the money supply would come about. Another worksheet allows students to demonstrate their understanding of the government's role in influencing monetary policy through bonds, interest rates, and reserve requirements.
Students define the choices and consequences of various government actions. The monetary policy worksheet is almost identical to the fiscal policy worksheet; however, the tools of fiscal policy are different from those of monetary policy.
List of Monetary Policy Worksheets
Monetary policy is controlled by the Fed and the banking system of the United States. Displaying the top worksheets found for monetary policy; some of the worksheets for this concept are "Monetary and Fiscal Policy", "The Federal Reserve, Monetary Policy and the Economy", "Units: Fixing an Economy (Fiscal)", "Fiscal and Monetary Policy Answer Key", "Fiscal Policy, Monetary Policy: What's the Difference?", lesson plans by grade, "Focus: High School Economics", and a money and monetary policy worksheet.
You are the Fed chairman. For each of the situations listed below, decide if you would use easy money policy, tight money policy, moral persuasion, or do nothing. If you use easy monetary policy, you are trying to grow the economy and create more jobs by increasing the money supply.
1. 9 Economics Assignment Images
If you use easy policy, you are trying to grow the economy and create more jobs by increasing the money supply; in reality the Fed would do this by lowering the reserve requirement. Monetary and fiscal policy worksheet (name): as you read each situation, answer the following questions.
a. Is the problem inflation, recession, a stagnant economy, or unemployment? List the type. b. Should the money supply be increased, should it be decreased, or should neither be done? c. What government action (write the policy) should be taken? We now bring together all of the pieces of the process by which monetary policy is transmitted to the economy, and we examine the effects of monetary policy.
2. Evolution Bill Money Collection Paper
Suppose that initially the economy is at the intersection of aggregate demand and aggregate supply. From a principles-of-macroeconomics monetary policy worksheet: monetary policy is defined as the deliberate exercise of the Federal Reserve's power to expand or contract the money supply in order to bring about a desired economic goal,
e.g., price stability, full employment, or economic growth, using the instruments of monetary policy. Creating a monetary and fiscal policy worksheet takes only a few minutes: sample forms and clear guidelines help avoid mistakes, and a few simple steps let you find the web sample in the catalogue and get your worksheet ready quickly.
3. Political Party
Unit worksheets include: prices and controls; types of market structures; personal finance; economic health indicators; economic health indicators, inflation and unemployment; fixing an economy with fiscal and monetary policy; and an international unit.
Monetary policy is an important component of economics and government, and this quiz and worksheet will help you test your understanding of its definition and application. Also assigned: complete the Federal Reserve monetary policy worksheets.
4. Hard Money Loan Contract Template Luxury Loan Contract
Continue filling out the fiscal and monetary policy flow chart. Play the Fed Chairman game and complete the worksheet and questions on the Google Form (links provided; computer lab).
Activity: reading and thinking time on inflation targets. A number of central banks have introduced inflation targets as part of their monetary policy approach. For a brief overview of the inflation target, take a look at the relevant central bank's information; you can follow this up with a slightly more detailed explanation of inflation targeting from the IMF.
5. Free Printable Money Game Money Activities Money Games
Monetary policy worksheet answers: monetary policy cause and effect. The worksheet answers questions such as when a recession will occur, what effect it will have on economic growth, and when it will be over.
This is a worksheet to accompany the Crash Course video for U.S. Government and Politics on monetary and fiscal policy. An answer key is included. By purchasing this file, you agree not to make it publicly available on websites, etc., or to share it with any other teachers.
6. Fiscal Policy Instructional Videos Guided Notes
Before dealing with monetary policy worksheet answers, remember that education is our key to a better tomorrow, and learning does not stop when the school bell rings. With that said, here is a variety of simple but informative resources.
The term monetary policy refers to what the Federal Reserve, the nation's central bank, does to influence the amount of money and credit in the U.S. economy. The monetary policy worksheet is almost identical to the fiscal policy worksheet.
7. Federal Reserve Worksheet Problem Sets Economics
Start studying the Crash Course episode on monetary policy and the Federal Reserve. This quiz and worksheet combo can help gauge your knowledge of monetary and fiscal policies and how they differ; you will be quizzed on the roles of the Federal Reserve as well as its characteristics.
Any potential investor should consider a monetary policy worksheet to help guide his investment decisions. A financial expert can assist in making a decision, but it is up to the investor to learn all he can about how financial markets work.
8. Worlds Largest Social Reading
The best way to educate yourself on how the market works is to use a worksheet. Topics covered: what is meant by monetary policy; the nature of interest rates and the variety of interest rates in an economy; and the interest-rate transmission mechanism via consumer and investment spending, international trade, and asset prices. Additional teacher guidance is available at the end of the lesson.
Fiscal policy: displaying the top worksheets found for this concept. Some of the worksheets for this concept are a fiscal policy worksheet, a monetary and fiscal policy worksheet, "Snacks: The Monetary and Fiscal Policy Two-Step", a fiscal and monetary policy answer key, a chapter section on fiscal policy, and an inflation activity.
9. Debt Payoff
Fiscal and monetary policy worksheet: use this version, or check out other variations created by teachers from the community.
John suggested that government should ... (finish the sentence). The three tools of fiscal policy are listed below: a. ___ b. ___ c. ___. Expansionary fiscal policy will increase ___ and ___. Monetary policy scenarios worksheet. Scenario: does the currency appreciate or depreciate, and why? Country A has a lot of young workers as well as a number of entrepreneurs who are starting new companies.
10. Monetary Policy Fiscal Policy Top 7
The country is relatively stable politically. Country B suffers from brain drain: most of its citizens migrate. Fiscal policy: displaying the top worksheets found for this concept, including worksheets on monetary and fiscal policy, "Snacks: The Monetary and Fiscal Policy Two-Step", "Units: Fixing an Economy (Fiscal)", the basics of fiscal policy, and a fiscal and monetary policy answer key.
One reading contrasts the monetary policy that one contributor considers necessary for economic stabilization with the views of Professor W., the nation's foremost advocate of the opposing school of economics, who was called upon to discuss the importance of fiscal policy as an approach to the same problem, since each man could easily be identified with one approach.
11. Workers Insurance Workers
12. Wedding Planner Organiser Custom Excel Template Saving
Normally, the Fed conducts monetary policy by setting a target for the federal funds rate, the rate at which banks borrow and lend reserves on an overnight basis, and it meets its target through open market operations. Monetary policy vs. fiscal policy, an overview:
monetary policy and fiscal policy refer to the two most widely recognized tools used to influence a nation's economic activity. Monetary policy in practice: a. inflation targeting. A major difference between inflation targeting and the Taylor rule is that inflation targeting is forward-looking rather than backward-looking.
13. Supply Orders Printable Debt Payoff Debt Payoff
That is, the Taylor rule adjusts monetary policy in response to past inflation, whereas inflation targeting looks ahead. Also listed: a monetary policy vocabulary worksheet and a fiscal policy worksheet (an individual activity for a grade; the worksheets are issued in class; see www.sgawarriors.com for features of fiscal and monetary policy).
Moreover, monetary policy actions tend to influence economic activity and prices with a lag; therefore, the committee's policy decisions reflect its goals, its outlook, and its assessments of the balance of risks, including risks to the financial system.
14. Step Equations Worksheet Unique Quiz Worksheet
15. Shop Stock Meal Planning Pantry Includes
16. Save Cost Free
Also listed: an unemployment quiz, a money market worksheet, quick review questions, a perfect competition worksheet, basics of investing III, and a final exam review, along with a fiscal and monetary policy worksheet. One reading argues that monetary policy is the best policy to get us back on track:
past history suggests that manipulating taxes in order to stabilize the economy is rarely effective and can be harmful. More importantly, we must keep in place the policies that contributed to the outstanding economic performance of recent years, namely the policy of fiscal discipline.
17. Berry En Plastic Canvas
18. Passes Term Policy
Printable worksheet (social studies / economics): fiscal and monetary policy. Terms: easy money, federal funds rate, inflation, rate of unemployment, curve, tight money.
To open file using adobe acrobat, file, select open with in the menu and click on adobe acrobat. the format of the file is a form fill using adobe acrobat. using a program other than adobe acrobat to complete the form does not guarantee that your notes will be saved.
20. Nuclear Decay Worksheet Answers Nuclear
Balancing equations iii more practice balancing. balancing equations iv still more practice balancing. balancing equations v even more practice balancing. N in equation. write the nuclear symbol for the missing term in equation. write the nuclear symbol for the missing term in equation.
name date class balancing nuclear nuclear equations math skills transparency worksheet use with chapter, section. If you also get perplexed in balancing chemical equations, follow the tips for correct balancing chemical equations worksheet answers. tip when you are trying to balance the chemical equations, you should remember that you can only change the value of coefficient in front of the element or compound, and not the subscript.
21. Monthly Family Income Tracker Template Excel Excel
Pay a fraction of their interest income in taxes e. Mar, iii. monetary policy in practice fed goals economic growth and price stability inflation control. the rule for monetary policy setting the federal funds rate taking into account both the inflation rate and the output gap.
22. Money Money Worksheets Money Activities Grade
The rule originally suggested was as follows federal funds rate . inflation rate This is called monetary policy. the u. s. money supply is set by the board of governors of the federal reserve system fed the supply for money quantity of money billions of dollars interest rate monetary policy when the fed adjusts the money supply policy worksheet.
23. Kit Girl Movie Activity Economics
Name hour as you read each situation, answer the following questions a. is the problem inflation, unemployment type, recession, stagnant economy or no problem b. which type of fiscal policy should the government expansionary. neither. c. government tools of fiscal policy worksheet.
24. Density Practice Problem Worksheet Reading
Reviews. reviews. last updated. share this. the latest feature is access to some of our topic resources including answers, with subscribed schools having access to each. May, some of the worksheets below are density problems worksheet middle school, density calculations worksheet, density word problems, density workbook definition of density, formula for volume of a rectangular shaped and problems about density.
Density practice problem worksheet remember to consider significant figures circle final answer be sure to include units a block of aluminum occupies a volume of. and a mass of. g. what is its density mercury metal is poured into a graduated cylinder that to exactly.
25. Insurance Policy Tracker Free Organizing
Conversely, a monetary policy that raises interest rates and reduces borrowing in the economy is a monetary policy or tight monetary policy. this module will discuss how expansionary and monetary policies affect interest rates and Complete this worksheet that accompanies the videos and online tutorials tools of monetary policy money market graph online money market graph interactive fed policy changes with graphs interactive online quizzes for monetary policy quiz quiz Oct, lessons macro from monetary policy worksheet answers, image source www.
26. Image Result High School Biology Coloring Pages
Harpercollege. edu. gallery of monetary policy worksheet, a more active fiscal policy is back in favor. how does fiscal policy work when policymakers seek to in the economy, they have two main tools at their policy and policy. central banks indirectly target activity by in the money supply through adjustments to interest rates, fed interpreted the falling interest rates as a sign that monetary policy was too easy, so they too steps to tighten policy the money supply fell by a third between and.
27. Identifying Variables Worksheet Science Answers
This caused a further leftward shift in the ad curve. taken together, the is and shifts caused a massive leftward ad shift. Monetary policy works to correct both trade balance and problems together. an easy monetary policy leads to increased domestic spending and increased, but it also leads to depreciated dollar and higher u.
28. Auto Insurance Comparison Spreadsheet
S. export demand, which enhances and erases a trade deficit. The fed can use four tools to achieve its monetary policy goals the discount rate, reserve requirements, open market operations, and interest on reserves. all four affect the amount of funds in the banking system.
29. Bill Payments Calendar Personal Finance Organizing
The discount rate is the interest rate reserve banks The design of the monetary policy strategy takes into account such problems, thus facilitating robust in an economic environment characterised by high uncertainty. in our approach, the assessment of risks to price stability relies on a comprehensive economic analysis based on a large set of information and models.
30. Gene Mutations Graphic Organizer Interactive Notebooks
Duplication a segment of a chromosome is doubled. this is the result of uneven division of chromosomes during meiosis. There are many different types of mutations. they can happen at many different levels. for example, in chromosomal mutations, an entire part of the chromosome or the whole chromosome itself can be duplicated, deleted, or moved to a different location.
point mutations and mutations are a type of mutation that happens when single are changed, inserted, or deleted. Mutations worksheet part gene mutations in the chart below, transcribe the sequence into. then use the codon. reverses the direction of parts of chromosomes. . occurs when part of one chromosome breaks off and attaches to another. Aug, mutations guided practice worksheet this is my most comprehensive mutations resource with over pages. perfect for distance learning or creation of independent work packets, this resource comes with a digital version and answer key.
31. Calculate Net Worth Personal Finance Term
S. government securities on the open market for the purpose of influencing interest rates and the growth of the money and credit aggregates. Monetary policy is the tools used by the federal open market committee to influence the availability of credit and the money supply.
32. Calculate Net Worth Statement Template Net
Congress and the president are responsible for fiscal policy. the federal open market committee is responsible for monetary policy. changes in government spending and tax policies such as changes to tax. Monetary policy worksheet this worksheet is almost identical to the fiscal policy worksheet.
33. Fiscal Monetary Policy Terms Vocabulary Terms
If the fed wants to increase the money supply determine the use of the three fed tools and. test your ability to determine the effects of fiscal and monetary policy on the economy in this quiz and printable worksheet. Slides on monetary policy. changing the discount rate, reserve ratio, and buying and selling securities.
34. Counting Dollars Cents Printable Worksheets Money
Goes through chain of monetary policy that is required on the advanced placement exam. Monetary policy. the use of the money supply to influence macroeconomic aggregates, such as output, inflation, and unemployment. dual mandate. the two objectives of most central banks, to control inflation and maintain full employment.
35. Crash Government Worksheets Episodes
Origins of government government of the people, by the people, for the people, shall not perish from the earth. U. s. government. u. s. government the constitution was written in and is the basic design for how our government should work. read more. social studies worksheets Read chapter origins of government chapter chapter notes chapter.
36. Easy Blank Budget Worksheet Printable Crush
G. , money demandgdp. There is a worksheet on fiscal policy, a worksheet on government policy with answers and two class activities. one analysing the effect of an increase in spending and the other is a true or false activity on crowding out in both editable version and word version.
37. Historic Policy Rates Chart Monetary Policy Policies
Fiscal and monetary policy terms worksheet. terms easy money, federal funds rate, inflation rate of unemployment, curve, tight money. saved by student handouts. curve monetary policy economics social studies vocabulary worksheets study free. In the united states monetary policy is undertaken by the federal reserve system the fed.
Monetary and fiscal policy worksheet name hour. the rate of inflation has increased by. over the last year. the u. s. government wonders what it can do to help improve this situation. a. should the government use. fiscal. or monetary policies b. should the Monetary policy worksheet this worksheet is almost identical to the fiscal policy worksheet.
however the tools of fiscal policy are different than that of monetary policy. monetary policy is controlled by the fed and the banking system of the united states. it Monetary policy worksheet name block date read the following descriptions of possible monetary policy in order to complete the worksheet. | http://andrewcannon.net/monetary-policy-worksheet/ | 21 |
22 | Although money seems on the surface to be a stable, objective medium of exchange, money values actually fluctuate considerably based on a number of factors. Each of these variables has some basis in cold hard truths, such as the amount of currency available. The value of money also depends on subjective, psychological factors, such as perceptions regarding the strength of a national economy.
Inflation reduces the value of money. When prices go up because wages are high and materials are scarce, it takes more money to buy goods. Money is then worth less relative to the goods and services that you can purchase with it. A dollar was worth more when it could buy several trips on the subway than it is now that it doesn't even cover a single trip.
Currency devaluation is an official action on the part of a national government to declare that its currency is worth less than it was previously. A country may decide to do this to make its exports more appealing overseas: foreign money can buy more of a product sold through a devalued currency than through a currency whose value is intact. In addition, devaluing a currency makes imports more expensive for people who hold the devalued currency. This encourages spending on domestically made products and helps local industries.
In addition to deliberate government actions to manipulate the value of a currency such as devaluation, the value of different currencies relative to one another fluctuates over time. This fluctuation depends on a number of variables, including the relative strengths of the economies of the nations that issue the currency. Investors may choose to exchange their money for one currency rather than another based on assumptions and calculations as to whether that currency will retain its value. If investors all over the world want a particular currency, then it becomes worth more because it is in demand.
Interest rates are established by government policies aimed at increasing or decreasing the flow of money by making it more or less valuable. High interest rates make a currency valuable because they offer a good rate of return and create demand for that currency. If the Federal Reserve Board sets high interest rates, then foreign investors will want to buy American currency and lend or invest it at the current advantageous rate.
Money is worth more when it can buy more. If there is a steady supply of available goods, then their price declines and the value of money rises relative to what it can buy. Calculating the value of a currency over time often involves evaluating its purchasing power. For example, if a new car cost $3,000 in 1970 and costs $20,000 today, this difference indicates that a dollar was worth considerably more then. | https://www.sapling.com/6299248/affects-value-money | 21 |
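As a rough illustration of that kind of comparison, the implied average yearly price increase can be computed as a compound growth rate. The sketch below is a minimal Python example; the $3,000 and $20,000 figures come from the paragraph above, while the 1970-2020 endpoints and the calculation itself are illustrative assumptions, not official CPI data.

```python
# Implied average annual price increase between two observations (illustrative figures).
old_price, new_price = 3_000.0, 20_000.0   # car prices quoted above
old_year, new_year = 1970, 2020            # assumed endpoints for the example

years = new_year - old_year
annual_rate = (new_price / old_price) ** (1 / years) - 1

print(f"Implied average annual increase: {annual_rate:.2%}")  # roughly 3.9% per year
```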
35 | In economic terms, inflation can be defined simply as a rise in the general price level of goods and services in an economy over a particular period of time. When the price level rises, the purchasing power of the currency falls, so inflation can also be described as an erosion of the purchasing power of money, i.e. a loss of the real value of money as an internal medium of exchange. The most common measure of price inflation is the inflation rate, which can be calculated as the yearly percentage change in a general price index (most commonly the Consumer Price Index) over time.
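A minimal sketch of that calculation, using made-up index values purely for illustration (these are not real CPI figures):

```python
# Year-over-year inflation rate from a small, hypothetical CPI series.
cpi = {2018: 251.1, 2019: 255.7, 2020: 258.8}  # invented index values

for year in sorted(cpi)[1:]:
    rate = (cpi[year] - cpi[year - 1]) / cpi[year - 1]
    print(f"{year}: inflation rate = {rate:.2%}")
```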
Inflation and its effects
Inflation can produce many interrelated and complex effects that are simultaneously negative and positive for the economy. A major negative effect of inflation is the erosion of the real value of money and of other monetary assets over time. Moreover, inflation generates uncertainty about the future, which discourages the saving and investment activities of citizens. Under high inflation, investors tend to reduce their investment in productive capital and increase their holdings of non-producing assets, for example by selling stocks and purchasing gold. This is detrimental to economic growth because it reduces overall productivity. High inflation rates can also lead to shortages of goods if consumers begin to feel insecure about prices. For example, when the government recently hinted that sugar availability might decrease in the country, prices were expected to rise, and people started purchasing sugar in bulk in order to hold heavy stocks with the expectation of selling it in times of heavy demand and increased prices. Such bulk stockpiling is illegal because hoarding such commodities leads to their shortage in the country.
Inflation also produces some positive effects, including the mitigation of economic recessions and debt relief (by decreasing the real burden of debt).
Economists broadly agree that high rates of inflation and hyperinflation (extremely rapid inflation) are caused by excessive growth of the money supply. When the money supply in an economy increases, citizens' purchasing power and disposable income rise; they spend more on goods and services, which creates shortages of those goods and services and consequently pushes their prices up. As a remedy, the government should try to keep the money supply at an optimal level, or else provide citizens with better alternatives, for example saving schemes and investment options.
Today, most mainstream economists support a low and steady rate of inflation. Low inflation is considered better than zero or negative inflation because it may help reduce the severity of economic recessions. | https://www.thegeminigeek.com/what-is-inflation/ | 21
15 | A Partial Derivative is a derivative where we hold some variables constant. Like in this example:
Example: a function for a surface that depends on two variables x and y
When we find the slope in the x direction (while keeping y fixed) we have found a partial derivative.
Or we can find the slope in the y direction (while keeping x fixed).
Here is a function of one variable (x):
f(x) = x²
f’(x) = 2x
But what about a function of two variables (x and y):
f(x,y) = x² + y³
To find its partial derivative with respect to x we treat y as a constant (imagine y is a number like 7 or something):
f’x = 2x + 0 = 2x
- the derivative of x² (with respect to x) is 2x
- we treat y as a constant, so y³ is also a constant (imagine y=7, then 7³=343 is also a constant), and the derivative of a constant is 0
To find the partial derivative with respect to y, we treat x as a constant:
f’y = 0 + 3y² = 3y²
- we now treat x as a constant, so x² is also a constant, and the derivative of a constant is 0
- the derivative of y³ (with respect to y) is 3y²
That is all there is to it. Just remember to treat all other variables as if they are constants.
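To double-check answers like these, a computer algebra system can take partial derivatives symbolically. Here is a minimal sketch using the SymPy library; the choice of library is an assumption of this example, not part of the lesson.

```python
from sympy import symbols, diff

x, y = symbols("x y")
f = x**2 + y**3

print(diff(f, x))  # 2*x     (y is treated as a constant)
print(diff(f, y))  # 3*y**2  (x is treated as a constant)
```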
Holding A Variable Constant
So what does "holding a variable constant" look like?
Example: the volume of a cylinder is V = π r² h
We can write that in "multi variable" form as
f(r,h) = π r² h
For the partial derivative with respect to r we hold h constant, and r changes:
f’r = π (2r) h = 2πrh
(The derivative of r² with respect to r is 2r, and π and h are constants)
It says "as only the radius changes (by the tiniest amount), the volume changes by 2πrh"
It is like we add a skin with a circle's circumference (2πr) and a height of h.
For the partial derivative with respect to h we hold r constant:
f’h = π r² (1) = πr²
(π and r² are constants, and the derivative of h with respect to h is 1)
It says "as only the height changes (by the tiniest amount), the volume changes by πr²"
It is like we add the thinnest disk on top with a circle's area of πr².
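One way to see those statements numerically is to nudge r or h by a tiny amount and compare the actual volume change with the estimates 2πrh·Δr and πr²·Δh. The small sketch below uses arbitrary values for the radius, height, and step size, chosen only for illustration.

```python
import math

def volume(r, h):
    return math.pi * r**2 * h

r, h, eps = 3.0, 5.0, 1e-6   # arbitrary cylinder dimensions and a tiny nudge

# Nudge only r: the change should be close to f'r * eps = 2*pi*r*h * eps
print(volume(r + eps, h) - volume(r, h), 2 * math.pi * r * h * eps)

# Nudge only h: the change should be close to f'h * eps = pi*r**2 * eps
print(volume(r, h + eps) - volume(r, h), math.pi * r**2 * eps)
```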
Let's see another example.
Example: The surface area of a square prism.
The surface is: the top and bottom with areas of x² each, and 4 sides of area xy:
f(x,y) = 2x² + 4xy
f’x = 4x + 4y
f’y = 0 + 4x = 4x
Three or More Variables
We can have 3 or more variables. Just find the partial derivative of each variable in turn while treating all other variables as constants.
Example: The volume of a cube with a square prism cut out from it.
f(x,y,z) = z³ − x²y
f’x = 0 − 2xy = −2xy
f’y = 0 − x² = −x²
f’z = 3z² − 0 = 3z²
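The same symbolic check works with three variables, and the partial derivatives can also be evaluated at a sample point. This is again a SymPy sketch; the tooling and the sample point are assumptions of the example, not part of the lesson.

```python
from sympy import symbols, diff

x, y, z = symbols("x y z")
f = z**3 - x**2 * y

partials = {v: diff(f, v) for v in (x, y, z)}
print(partials)  # {x: -2*x*y, y: -x**2, z: 3*z**2}

# Evaluate each partial derivative at the sample point (x, y, z) = (1, 2, 3):
point = {x: 1, y: 2, z: 3}
print({v: p.subs(point) for v, p in partials.items()})  # {x: -4, y: -1, z: 27}
```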
When there are many x's and y's it can get confusing, so a mental trick is to change the "constant" variables into letters like "c" or "k" that look like constants.
Example: f(x,y) = y³ sin(x) + x² tan(y)
It has x's and y's all over the place! So let us try the letter change trick.
With respect to x we can change "y" to "k":
f(x,y) = k³ sin(x) + x² tan(k)
f’x = k³ cos(x) + 2x tan(k)
But remember to turn it back again!
f’x = y³ cos(x) + 2x tan(y)
Likewise with respect to y we turn the "x" into a "k":
f(x,y) = y³ sin(k) + k² tan(y)
f’y = 3y² sin(k) + k² sec²(y)
f’y = 3y² sin(x) + x² sec²(y)
But only do this if you have trouble remembering, as it is a little extra work.
Notation: here we use f’x to mean "the partial derivative with respect to x", but another very common notation is to use a funny backwards d (∂) like this:
∂f/∂x = 2x
Which is the same as:
f’x = 2x
∂ is called "del" or "dee" or "curly dee"
So ∂f/∂x is said "del f del x"
Example: find the partial derivatives of f(x,y,z) = x⁴ − 3xyz using "curly dee" notation
f(x,y,z) = x⁴ − 3xyz
∂f/∂x = 4x³ − 3yz
∂f/∂y = −3xz
∂f/∂z = −3xy
You might prefer that notation; it certainly looks cool. | https://mathsphere.org/calculus/derivatives-partial | 21
25 | - To name covalent compounds that contain up to three elements.
As with ionic compounds, the system that chemists have devised for naming covalent compounds enables us to write the molecular formula from the name and vice versa. In this and the following section, we describe the rules for naming simple organic compounds.
Approximately one-third of the compounds produced industrially are organic compounds. All living organisms are composed of organic compounds, as is most of the food you consume, the medicines you take, the fibers in the clothes you wear, and the plastics in the materials you use. Among the few organic compounds we have discussed are methane (CH4) and methanol (CH3OH). These and other organic compounds appear frequently in discussions and examples throughout this text.
The simplest class of organic compounds is the hydrocarbons, which consist entirely of carbon and hydrogen. Petroleum and natural gas are complex, naturally occurring mixtures of many different hydrocarbons that furnish raw materials for the chemical industry. The four major classes of hydrocarbons are the alkanes, which contain only carbon–hydrogen and carbon–carbon single bonds; the alkenes, which contain at least one carbon–carbon double bond; the alkynes, which contain at least one carbon–carbon triple bond; and the aromatic hydrocarbons, which usually contain rings of six carbon atoms that can be drawn with alternating single and double bonds. Alkanes are also called saturated hydrocarbons, whereas hydrocarbons that contain multiple bonds (alkenes, alkynes, and aromatics) are unsaturated.
The simplest alkane is methane (CH4), a colorless, odorless gas that is the major component of natural gas. In larger alkanes whose carbon atoms are joined in an unbranched chain (straight-chain alkanes), each carbon atom is bonded to at most two other carbon atoms. The structures of two simple alkanes are shown in Figure 6.4.3 , and the names and condensed structural formulas for the first 10 straight-chain alkanes are in Table 6.4.2. The names of all alkanes end in -ane, and their boiling points increase as the number of carbon atoms increases. In all of these compounds the hybridization of the atomic orbitals of carbon is sp3 and the molecular geometry around each carbon atom is tetrahedral
Figure 6.4.3 Straight-Chain Alkanes with Two and Three Carbon Atoms
Table 6.4.2 The First 10 Straight-Chain Alkanes
|Name|Number of Carbon Atoms|Molecular Formula|Condensed Structural Formula|Boiling Point (°C)|Uses|
|---|---|---|---|---|---|
|methane|1|CH4|CH4|−162|natural gas constituent|
|ethane|2|C2H6|CH3CH3|−89|natural gas constituent|
|butane|4|C4H10|CH3CH2CH2CH3 or CH3(CH2)2CH3|0|lighters, bottled gas|
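The molecular formulas in the table follow the general straight-chain alkane pattern CnH2n+2. As a small illustrative sketch (not part of the original table), the formulas for the first ten straight-chain alkanes can be generated from that pattern:

```python
# Straight-chain alkanes follow the general formula CnH(2n+2).
names = ["methane", "ethane", "propane", "butane", "pentane",
         "hexane", "heptane", "octane", "nonane", "decane"]

for n, name in enumerate(names, start=1):
    carbon = "C" if n == 1 else f"C{n}"   # write CH4 rather than C1H4
    print(f"{name:8s} {carbon}H{2 * n + 2}")
```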
Alkanes with four or more carbon atoms can have more than one arrangement of atoms. The carbon atoms can form a single unbranched chain, or the primary chain of carbon atoms can have one or more shorter chains that form branches. For example, butane (C4H10) has two possible structures. Normal butane (usually called n-butane) is CH3CH2CH2CH3, in which the carbon atoms form a single unbranched chain. In contrast, the condensed structural formula for isobutane is (CH3)2CHCH3, in which the primary chain of three carbon atoms has a one-carbon chain branching at the central carbon. Three-dimensional representations of both structures are as follows:
The systematic names for branched hydrocarbons use the lowest possible number to indicate the position of the branch along the longest straight carbon chain in the structure. Thus the systematic name for isobutane is 2-methylpropane, which indicates that a methyl group (a branch consisting of –CH3) is attached to the second carbon of a propane molecule. Similarly, you will learn in Section 6.6 that one of the major components of gasoline is commonly called isooctane; its structure is as follows:
As you can see, the longest chain in this compound has five carbon atoms, so it is a derivative of pentane. There are two methyl group branches at one carbon atom and one methyl group at another. Using the lowest possible numbers for the branches gives 2,2,4-trimethylpentane for the systematic name of this compound.
The simplest alkenes are ethylene, C2H4 or CH2=CH2, and propylene, C3H6 or CH3CH=CH2 (part (a) in Figure 6.4.4). The names of alkenes that have more than three carbon atoms use the same stems as the names of the alkanes (Table 6.4.2) but end in -ene instead of -ane. The hybridization around any carbon atom attached to a double bond is sp2, and the molecular geometry around that carbon atom is trigonal planar.
Once again, more than one structure is possible for alkenes with four or more carbon atoms. For example, an alkene with four carbon atoms has three possible structures. One is CH2=CHCH2CH3 (1-butene), which has the double bond between the first and second carbon atoms in the chain. The other two structures have the double bond between the second and third carbon atoms and are forms of CH3CH=CHCH3 (2-butene). All four carbon atoms in 2-butene lie in the same plane, so there are two possible structures (part (a) in Figure 6.4.4 ). If the two methyl groups are on the same side of the double bond, the compound is cis-2-butene (from the Latin cis, meaning “on the same side”). If the two methyl groups are on opposite sides of the double bond, the compound is trans-2-butene (from the Latin trans, meaning “across”). These are distinctly different molecules: cis-2-butene melts at −138.9°C, whereas trans-2-butene melts at −105.5°C.
Figure 6.4.4 Some Simple (a) Alkenes, (b) Alkynes, and (c) Cyclic Hydrocarbons The positions of the carbon atoms in the chain are indicated by C1 or C2.
Just as a number indicates the positions of branches in an alkane, the number in the name of an alkene specifies the position of the first carbon atom of the double bond. The name is based on the lowest possible number starting from either end of the carbon chain, so CH3CH2CH=CH2 is called 1-butene, not 3-butene. Note that CH2=CHCH2CH3 and CH3CH2CH=CH2 are different ways of writing the same molecule (1-butene) in two different orientations.
The name of a compound does not depend on its orientation. As illustrated for 1-butene, both condensed structural formulas and molecular models show different orientations of the same molecule. Don’t let orientation fool you; you must be able to recognize the same structure no matter what its orientation.
Note the Pattern
The positions of groups or multiple bonds are always indicated by the lowest number possible.
The simplest alkyne is acetylene, C2H2 or HC≡CH (part (b) in Figure 6.4.4 ). Because a mixture of acetylene and oxygen burns with a flame that is hot enough (>3000°C) to cut metals such as hardened steel, acetylene is widely used in cutting and welding torches. The names of other alkynes are similar to those of the corresponding alkanes but end in -yne. For example, HC≡CCH3 is propyne, and CH3C≡CCH3 is 2-butyne because the multiple bond begins on the second carbon atom. The hybridization of a carbon atom attached to the triple bond is sp, and the geometric shape is linear
Note the Pattern
The number of bonds between carbon atoms in a hydrocarbon is indicated in the suffix, as the short sketch after this list illustrates:
- alkane: only carbon–carbon single bonds
- alkene: at least one carbon–carbon double bond
- alkyne: at least one carbon–carbon triple bond
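As a toy illustration of that suffix rule (only the simple -ane/-ene/-yne pattern is handled; this is not a full IUPAC name parser):

```python
# Toy lookup of what a hydrocarbon name's suffix implies about carbon-carbon bonding.
SUFFIX_MEANING = {
    "ane": "only carbon-carbon single bonds (alkane)",
    "ene": "at least one carbon-carbon double bond (alkene)",
    "yne": "at least one carbon-carbon triple bond (alkyne)",
}

def classify(name: str) -> str:
    for suffix, meaning in SUFFIX_MEANING.items():
        if name.endswith(suffix):
            return meaning
    return "suffix not recognized"

for compound in ["propane", "2-butene", "2-butyne"]:
    print(compound, "->", classify(compound))
```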
In a cyclic hydrocarbon, the ends of a hydrocarbon chain are connected to form a ring of covalently bonded carbon atoms. Cyclic hydrocarbons are named by attaching the prefix cyclo- to the name of the alkane, the alkene, or the alkyne. The simplest cyclic alkanes are cyclopropane (C3H6), a flammable gas that is also a powerful anesthetic, and cyclobutane (C4H8) (part (c) in Figure 6.4.4). The most common way to draw the structures of cyclic alkanes is to sketch a polygon with the same number of vertices as there are carbon atoms in the ring; each vertex represents a CH2 unit. The structures of the cycloalkanes that contain three to six carbon atoms are shown schematically in Figure 6.4.5.
Figure 6.4.5 The Simple Cycloalkanes
Alkanes, alkenes, alkynes, and cyclic hydrocarbons are generally called aliphatic hydrocarbons. The name comes from the Greek aleiphar, meaning "oil," because the first examples were extracted from animal fats. In contrast, the first examples of aromatic hydrocarbons, also called arenes, were obtained by the distillation and degradation of highly scented (thus aromatic) resins from tropical trees.
The simplest aromatic hydrocarbon is benzene (C6H6), which was first obtained from a coal distillate. The word aromatic now refers to benzene and structurally similar compounds. As shown in part (a) in Figure 6.4.6, it is possible to draw the structure of benzene in two different but equivalent ways, depending on which carbon atoms are connected by double bonds or single bonds. We learned that this is a consequence of the p orbitals on each carbon forming a bonding molecular π orbital which extends over the planar carbon ring. Toluene is similar to benzene, except that one hydrogen atom is replaced by a –CH3 group; it has the formula C7H8 (part (b) in Figure 6.4.6). As you will soon learn, the chemical behavior of aromatic compounds differs from the behavior of aliphatic compounds. Benzene and toluene are found in gasoline, and benzene is the starting material for preparing substances as diverse as aspirin and nylon. Notice that the C atoms in aromatic molecules are sp2 hybridized and the molecules are planar.
Figure 6.4.6 Two Aromatic Hydrocarbons: (a) Benzene and (b) Toluene
Figure 6.4.7 illustrates two of the molecular structures possible for hydrocarbons that have six carbon atoms. As you can see, compounds with the same molecular formula can have very different structures. The hybridization of the carbon atoms in these cases is sp3 and the molecular geometry around each C atom is tetrahedral
Write the condensed structural formula for each hydrocarbon.
Given: name of hydrocarbon
Asked for: condensed structural formula
A Use the prefix to determine the number of carbon atoms in the molecule and whether it is cyclic. From the suffix, determine whether multiple bonds are present.
B Identify the position of any multiple bonds from the number(s) in the name and then write the condensed structural formula.
- A The prefix hept- tells us that this hydrocarbon has seven carbon atoms, and n- indicates that the carbon atoms form a straight chain. The suffix -ane tells that it is an alkane, with no carbon–carbon double or triple bonds. B The condensed structural formula is CH3CH2CH2CH2CH2CH2CH3, which can also be written as CH3(CH2)5CH3.
A The prefix pent- tells us that this hydrocarbon has five carbon atoms, and the suffix -ene indicates that it is an alkene, with a carbon–carbon double bond. B The 2- tells us that the double bond begins on the second carbon of the five-carbon atom chain. The condensed structural formula of the compound is therefore CH3CH=CHCH2CH3.
A The prefix but- tells us that the compound has a chain of four carbon atoms, and the suffix -yne indicates that it has a carbon–carbon triple bond. B The 2- tells us that the triple bond begins on the second carbon of the four-carbon atom chain. So the condensed structural formula for the compound is CH3C≡CCH3.
A The prefix cyclo- tells us that this hydrocarbon has a ring structure, and oct- indicates that it contains eight carbon atoms, which we can draw as
The suffix -ene tells us that the compound contains a carbon–carbon double bond, but where in the ring do we place the double bond? B Because all eight carbon atoms are identical, it doesn’t matter. We can draw the structure of cyclooctene as
Write the condensed structural formula for each hydrocarbon.
The general name for a group of atoms derived from an alkane is an alkyl group. The name of an alkyl group is derived from the name of the alkane by adding the suffix -yl. Thus the –CH3 fragment is a methyl group, the –CH2CH3 fragment is an ethyl group, and so forth, where the dash represents a single bond to some other atom or group. Similarly, groups of atoms derived from aromatic hydrocarbons are aryl groups, which sometimes have unexpected names. For example, the –C6H5 fragment is derived from benzene, but it is called a phenyl group. In general formulas and structures, alkyl and aryl groups are often abbreviated as R.
Structures of alkyl and aryl groups. The methyl group is an example of an alkyl group, and the phenyl group is an example of an aryl group.
Replacing one or more hydrogen atoms of a hydrocarbon with an –OH group gives an alcohol, represented as ROH. The simplest alcohol (CH3OH) is called either methanol (its systematic name) or methyl alcohol (its common name) (see Figure 6.4.4). Methanol is the antifreeze in automobile windshield washer fluids, and it is also used as an efficient fuel for racing cars, most notably in the Indianapolis 500. Ethanol (or ethyl alcohol, CH3CH2OH) is familiar as the alcohol in fermented or distilled beverages, such as beer, wine, and whiskey; it is also used as a gasoline additive. The simplest alcohol derived from an aromatic hydrocarbon is C6H5OH, phenol (shortened from phenyl alcohol), a potent disinfectant used in some sore throat medications and mouthwashes.
Ethanol, which is easy to obtain from fermentation processes, has successfully been used as an alternative fuel for several decades. Although it is a “green” fuel when derived from plants, it is an imperfect substitute for fossil fuels because it is less efficient than gasoline. Moreover, because ethanol absorbs water from the atmosphere, it can corrode an engine’s seals. Thus other types of processes are being developed that use bacteria to create more complex alcohols, such as octanol, that are more energy efficient and that have a lower tendency to absorb water. As scientists attempt to reduce mankind’s dependence on fossil fuels, the development of these so-called biofuels is a particularly active area of research.
Covalent inorganic compounds are named by a procedure similar to that used for ionic compounds, using prefixes to indicate the numbers of atoms in the molecular formula. The simplest organic compounds are the hydrocarbons, which contain only carbon and hydrogen. Alkanes contain only carbon–hydrogen and carbon–carbon single bonds, alkenes contain at least one carbon–carbon double bond, and alkynes contain one or more carbon–carbon triple bonds. Hydrocarbons can also be cyclic, with the ends of the chain connected to form a ring. Collectively, alkanes, alkenes, and alkynes are called aliphatic hydrocarbons. Aromatic hydrocarbons, or arenes, are another important class of hydrocarbons that contain rings of carbon atoms related to the structure of benzene (C6H6). A derivative of an alkane or an arene from which one hydrogen atom has been removed is called an alkyl group or an aryl group, respectively. Alcohols are another common class of organic compound, which contain an –OH group covalently bonded to either an alkyl group or an aryl group (often abbreviated R).
- Covalent inorganic compounds are named using a procedure similar to that used for ionic compounds, whereas hydrocarbons use a system based on the number of bonds between carbon atoms.
Benzene (C6H6) is an organic compound, and KCl is an ionic compound. The sum of the masses of the atoms in each empirical formula is approximately the same. How would you expect the two to compare with regard to each of the following? What species are present in benzene vapor?
- melting point
- type of bonding
- rate of evaporation
Can an inorganic compound be classified as a hydrocarbon? Why or why not?
Is the compound NaHCO3 a hydrocarbon? Why or why not?
For each structural formula, write the condensed formula and the name of the compound.
Would you expect PCl3 to be an ionic compound or a covalent compound? Explain your reasoning.
What distinguishes an aromatic hydrocarbon from an aliphatic hydrocarbon?
Using R to represent an alkyl or aryl group, show the general structure of each of the following:
- ROH (where R is an alkyl group)
- ROH (where R is an aryl group)
Draw the structure of each compound.
Draw the structure of each compound.
Modified by Joshua Halpern | https://chem.libretexts.org/Courses/Prince_Georges_Community_College/Chemistry_2000%3A_Chemistry_for_Engineers_(Sinex)/Unit_2%3A__Molecular_Structure/Chapter_5%3A_Molecular_Geometry/Chapter_5.5%3A_Naming_Organic_Molecules | 21 |
23 | Space debris (also known as space junk, space pollution, space waste, space trash, or space garbage) is defunct human-made objects in space--principally in Earth orbit--which no longer serve a useful function. These include derelict spacecraft--nonfunctional spacecraft and abandoned launch vehicle stages--mission-related debris, and particularly numerous in Earth orbit, fragmentation debris from the breakup of derelict rocket bodies and spacecraft. In addition to derelict human-built objects left in orbit, other examples of space debris include fragments from their disintegration, erosion and collisions, or even paint flecks, solidified liquids expelled from spacecraft, and unburned particles from solid rocket motors. Space debris represents a risk to spacecraft.
Space debris is typically a negative externality--it imposes an external cost on others from the initial action of launching or using a spacecraft in near-Earth orbit--a cost that is typically neither taken into account nor fully charged to the launcher or payload owner. The measurement, mitigation, and potential removal of debris are conducted by some participants in the space industry.
As of October 2019, the US Space Surveillance Network reported nearly 20,000 artificial objects in orbit above the Earth, including 2,218 operational satellites. However, these are just the objects large enough to be tracked. As of January 2019, more than 128 million pieces of debris smaller than 1 cm (0.4 in), about 900,000 pieces of debris 1-10 cm, and around 34,000 pieces larger than 10 cm (3.9 in) were estimated to be in orbit around the Earth. When the smallest objects of human-made space debris (paint flecks, solid rocket exhaust particles, etc.) are grouped with micrometeoroids, they are together sometimes referred to by space agencies as MMOD (Micrometeoroid and Orbital Debris). Collisions with debris have become a hazard to spacecraft; the smallest objects cause damage akin to sandblasting, especially to solar panels and optics like telescopes or star trackers that cannot easily be protected by a ballistic shield.
Below 2,000 km (1,200 mi) Earth-altitude, pieces of debris are denser than meteoroids; most are dust from solid rocket motors, surface erosion debris like paint flakes, and frozen coolant from RORSAT (nuclear-powered satellites). For comparison, the International Space Station orbits in the 300-400 kilometres (190-250 mi) range, while the two most recent large debris events--the 2007 Chinese antisat weapon test and the 2009 satellite collision--occurred at 800 to 900 kilometres (500 to 560 mi) altitude. The ISS has Whipple shielding to resist damage from small MMOD; however, known debris with a collision chance over 1/10,000 are avoided by maneuvering the station.
Space debris began to accumulate in Earth orbit immediately with the first launch of an artificial satellite, Sputnik 1, into orbit in October 1957. But even before that, besides natural ejecta from Earth, humans might have produced ejecta that became space debris, as in the August 1957 Pascal B test. After the launch of Sputnik, the North American Aerospace Defense Command (NORAD) began compiling a database (the Space Object Catalog) of all known rocket launches and objects reaching orbit: satellites, protective shields and upper stages of launch vehicles. NASA later published modified versions of the database in two-line element set format, and beginning in the early 1980s the CelesTrak bulletin board system re-published them.
The trackers who fed the database were aware of other objects in orbit, many of which were the result of in-orbit explosions. Some were deliberately caused during the 1960s anti-satellite weapon (ASAT) testing, and others were the result of rocket stages blowing up in orbit as leftover propellant expanded and ruptured their tanks. To improve tracking, NORAD employee John Gabbard kept a separate database. Studying the explosions, Gabbard developed a technique for predicting the orbital paths of their products, and Gabbard diagrams (or plots) are now widely used. These studies were used to improve the modeling of orbital evolution and decay.
When the NORAD database became publicly available during the 1970s, techniques developed for the asteroid belt were applied to the study of the database of known artificial satellite Earth objects.
In addition to approaches to debris reduction where time and natural gravitational/atmospheric effects help to clear space debris, or a variety of technological approaches that have been proposed (with most not implemented) to reduce space debris, a number of scholars have observed that institutional factors--political, legal, economic and cultural "rules of the game"--are the greatest impediment to the cleanup of near-Earth space. By 2014, there was little commercial incentive to reduce space debris, since the cost of dealing with it is not assigned to the entity producing it, but rather falls on all users of the space environment and on human society as a whole, which benefits from space technologies and knowledge. A number of suggestions for improving institutions so as to increase the incentives to reduce space debris have been made. These include government mandates to create incentives, as well as companies coming to see economic benefit to reducing debris more aggressively than existing government standard practices. In 1979 NASA founded the Orbital Debris Program to research mitigation measures for space debris in Earth orbit.
During the 1980s, NASA and other U.S. groups attempted to limit the growth of debris. One trial solution was implemented by McDonnell Douglas for the Delta launch vehicle, by having the booster move away from its payload and vent any propellant remaining in its tanks. This eliminated one source for pressure buildup in the tanks which had previously caused them to explode and create additional orbital debris. Other countries were slower to adopt this measure and, due especially to a number of launches by the Soviet Union, the problem grew throughout the decade.
A new battery of studies followed as NASA, NORAD and others attempted to better understand the orbital environment, with each adjusting the number of pieces of debris in the critical-mass zone upward. Although in 1981 (when Schefter's article was published) the number of objects was estimated at 5,000, new detectors in the Ground-based Electro-Optical Deep Space Surveillance system found new objects. By the late 1990s, it was thought that most of the 28,000 launched objects had already decayed and about 8,500 remained in orbit. By 2005 this was adjusted upward to 13,000 objects, and a 2006 study increased the number to 19,000 as a result of an ASAT test and a satellite collision. In 2011, NASA said that 22,000 objects were being tracked.
A 2006 NASA model suggested that if no new launches took place the environment would retain the then-known population until about 2055, when it would increase on its own. Richard Crowther of Britain's Defence Evaluation and Research Agency said in 2002 that he believed the cascade would begin about 2015. The National Academy of Sciences, summarizing the professional view, noted widespread agreement that two bands of LEO space--900 to 1,000 km (620 mi) and 1,500 km (930 mi)--were already past critical density.
In the 2009 European Air and Space Conference, University of Southampton researcher Hugh Lewis predicted that the threat from space debris would rise 50 percent in the next decade and quadruple in the next 50 years. As of 2009 , more than 13,000 close calls were tracked weekly.
A 2011 report by the U.S. National Research Council warned NASA that the amount of orbiting space debris was at a critical level. According to some computer models, the amount of space debris "has reached a tipping point, with enough currently in orbit to continually collide and create even more debris, raising the risk of spacecraft failures". The report called for international regulations limiting debris and research of disposal methods.
There are estimated to be over 128 million pieces of debris smaller than 1 cm (0.39 in) as of January 2019. There are approximately 900,000 pieces from one to ten cm. The current count of large debris (defined as 10 cm across or larger) is 34,000. The technical measurement cutoff is c. 3 mm (0.12 in). Over 98 percent of the 1,900 tons of debris in low Earth orbit as of 2002 was accounted for by about 1,500 objects, each over 100 kg (220 lb). Total mass is mostly constant despite addition of many smaller objects, since they reenter the atmosphere sooner. There were "9,000 pieces of orbiting junk" identified in 2008, with an estimated mass of 5,500 t (12,100,000 lb).
In the orbits nearest to Earth--less than 2,000 km (1,200 mi) orbital altitude, referred to as low-Earth orbit (LEO)-- there have traditionally been few "universal orbits" that keep a number of spacecraft in particular rings (in contrast to GEO, a single orbit that is widely used by over 500 satellites). This is beginning to change in 2019, and several companies have begun to deploy the early phases of satellite internet constellations, which will have many universal orbits in LEO with 30 to 50 satellites per orbital plane and altitude. Traditionally, the most populated LEO orbits have been a number of sun-synchronous satellites that keep a constant angle between the Sun and the orbital plane, making Earth observation easier with consistent sun angle and lighting. Sun-synchronous orbits are polar, meaning they cross over the polar regions. LEO satellites orbit in many planes, typically up to 15 times a day, causing frequent approaches between objects. The density of satellites--both active and derelict--is much higher in LEO.
Orbits are affected by gravitational perturbations (which in LEO include unevenness of the Earth's gravitational field due to variations in the density of the planet), and collisions can occur from any direction. Impacts between orbiting satellites can occur at up to 16 km/s for a theoretical head-on impact; the closing speed could be twice the orbital speed. The 2009 satellite collision occurred at a closing speed of 11.7 km/s (26,000 mph), creating over 2000 large debris fragments. These debris cross many other orbits and increase debris collision risk.
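Those speeds can be sanity-checked with the circular-orbit relation v = sqrt(GM/r); a head-on closing speed is then roughly twice the orbital speed. The sketch below uses standard Earth constants and an illustrative 800 km altitude; it is a back-of-the-envelope estimate, not a tracking calculation.

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0       # mean Earth radius, m

altitude = 800_000.0        # illustrative LEO altitude, m
r = R_EARTH + altitude

v = math.sqrt(MU_EARTH / r)                                   # circular orbital speed
print(f"orbital speed         ~ {v / 1000:.1f} km/s")         # about 7.5 km/s
print(f"head-on closing speed ~ {2 * v / 1000:.1f} km/s")     # about 14.9 km/s
```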
It is theorized that a sufficiently large collision of spacecraft could potentially lead to a cascade effect, or even make some particular low Earth orbits effectively unusable for long-term use by orbiting satellites, a phenomenon known as the Kessler syndrome. The projected effect is a runaway chain reaction of collisions that exponentially increases the number and density of space debris in low-Earth orbit, and it has been hypothesized to ensue beyond some critical density.
Crewed space missions are mostly at 400 km (250 mi) altitude and below, where air drag helps clear zones of fragments. The upper atmosphere is not a fixed density at any particular orbital altitude; it varies as a result of atmospheric tides and expands or contracts over longer time periods as a result of space weather. These longer-term effects can increase drag at lower altitudes; the 1990s expansion was a factor in reduced debris density. Another factor was fewer launches by Russia; the Soviet Union made most of their launches in the 1970s and 1980s.
At higher altitudes, where air drag is less significant, orbital decay takes longer. Slight atmospheric drag, lunar perturbations, Earth's gravity perturbations, solar wind and solar radiation pressure can gradually bring debris down to lower altitudes (where it decays), but at very high altitudes this may take millennia. Although high-altitude orbits are less commonly used than LEO and the onset of the problem is slower, the numbers progress toward the critical threshold more quickly.
Many communications satellites are in geostationary orbits (GEO), clustering over specific targets and sharing the same orbital path. Although velocities are low between GEO objects, when a satellite becomes derelict (such as Telstar 401) it assumes a geosynchronous orbit; its orbital inclination increases about .8° and its speed increases about 160 km/h (99 mph) per year. Impact velocity peaks at about 1.5 km/s (0.93 mi/s). Orbital perturbations cause longitude drift of the inoperable spacecraft and precession of the orbital plane. Close approaches (within 50 meters) are estimated at one per year. The collision debris pose less short-term risk than from an LEO collision, but the satellite would likely become inoperable. Large objects, such as solar-power satellites, are especially vulnerable to collisions.
Although the ITU now requires proof a satellite can be moved out of its orbital slot at the end of its lifespan, studies suggest this is insufficient. Since GEO orbit is too distant to accurately measure objects under 1 m (3 ft 3 in), the nature of the problem is not well known. Satellites could be moved to empty spots in GEO, requiring less maneuvering and making it easier to predict future motion. Satellites or boosters in other orbits, especially stranded in geostationary transfer orbit, are an additional concern due to their typically high crossing velocity.
Despite efforts to reduce risk, spacecraft collisions have occurred. The European Space Agency telecom satellite Olympus-1 was struck by a meteoroid on 11 August 1993 and eventually moved to a graveyard orbit. On 29 March 2006, the Russian Express-AM11 communications satellite was struck by an unknown object and rendered inoperable; its engineers had enough contact time with the satellite to send it into a graveyard orbit.
In 1958, the United States launched Vanguard I into a medium Earth orbit (MEO). As of October 2009 , it, and the upper stage of its launch rocket, are the oldest surviving human-made space objects still in orbit. In a catalog of known launches until July 2009, the Union of Concerned Scientists listed 902 operational satellites from a known population of 19,000 large objects and about 30,000 objects launched.
An example of additional derelict satellite debris is the remains of the 1970s/80s Soviet RORSAT naval surveillance satellite program. The satellites' BES-5 nuclear reactors were cooled with a coolant loop of sodium-potassium alloy, creating a potential problem when the satellite reached end of life. While many satellites were nominally boosted into medium-altitude graveyard orbits, not all were. Even satellites that had been properly moved to a higher orbit had an eight-percent probability of puncture and coolant release over a 50-year period. The coolant freezes into droplets of solid sodium-potassium alloy, forming additional debris.
These events continue to occur. For example, in February 2015, the USAF Defense Meteorological Satellite Program Flight 13 (DMSP-F13) exploded on orbit, creating at least 149 debris objects, which were expected to remain in orbit for decades.
Orbiting satellites have been deliberately destroyed. The United States and USSR/Russia have conducted over 30 and 27 ASAT tests, respectively, followed by 10 from China and one from India. The most recent ASATs were the Chinese interception of FY-1C, trials of the Russian PL-19 Nudol, the American interception of USA-193 and the Indian interception of an unstated live satellite.
Space debris includes a glove lost by astronaut Ed White on the first American space-walk (EVA), a camera lost by Michael Collins near Gemini 10, a thermal blanket lost during STS-88, garbage bags jettisoned by Soviet cosmonauts during Mir's 15-year life, a wrench, and a toothbrush. Sunita Williams of STS-116 lost a camera during an EVA. During an STS-120 EVA to reinforce a torn solar panel, a pair of pliers was lost, and in an STS-126 EVA, Heidemarie Stefanyshyn-Piper lost a briefcase-sized tool bag.
In characterizing the problem of space debris, it was learned that much debris was due to rocket upper stages (e.g. the Inertial Upper Stage) which end up in orbit, and break up due to decomposition of unvented unburned fuel. However, a major known impact event involved an (intact) Ariane booster. Although NASA and the United States Air Force now require upper-stage passivation, other launchers do not. Lower stages, like the Space Shuttle's solid rocket boosters or Apollo program's Saturn IB launch vehicles, do not reach orbit.
On 11 March 2000 a Chinese Long March 4 CBERS-1 upper stage exploded in orbit, creating a debris cloud. A Russian Briz-M booster stage exploded in orbit over South Australia on 19 February 2007. Launched on 28 February 2006 carrying an Arabsat-4A communications satellite, it malfunctioned before it could use up its propellant. Although the explosion was captured on film by astronomers, due to the orbit path the debris cloud has been difficult to measure with radar. By 21 February 2007, over 1,000 fragments were identified. A 14 February 2007 breakup was recorded by Celestrak. Eight breakups occurred in 2006, the most since 1993. Another Briz-M broke up on 16 October 2012 after a failed 6 August Proton-M launch. The amount and size of the debris was unknown. A Long March 7 rocket booster created a fireball visible from portions of Utah, Nevada, Colorado, Idaho and California on the evening of 27 July 2016; its disintegration was widely reported on social media. In 2018-2019, three different Atlas V Centaur second stages have broken up.
A past debris source was the testing of anti-satellite weapons (ASATs) by the U.S. and Soviet Union during the 1960s and 1970s. North American Aerospace Defense Command (NORAD) files only contained data for Soviet tests, and debris from U.S. tests were only identified later. By the time the debris problem was understood, widespread ASAT testing had ended; the U.S. Program 437 was shut down in 1975.
The U.S. restarted their ASAT programs in the 1980s with the Vought ASM-135 ASAT. A 1985 test destroyed a 1-tonne (2,200 lb) satellite orbiting at 525 km (326 mi), creating thousands of debris larger than 1 cm (0.39 in). Due to the altitude, atmospheric drag decayed the orbit of most debris within a decade. A de facto moratorium followed the test.
China's government was condemned for the military implications and the amount of debris from the 2007 anti-satellite missile test, the largest single space debris incident in history (creating over 2,300 pieces of golf-ball size or larger, over 35,000 pieces 1 cm (0.4 in) or larger, and one million pieces 1 mm (0.04 in) or larger). The target satellite orbited between 850 km (530 mi) and 882 km (548 mi), the portion of near-Earth space most densely populated with satellites. Since atmospheric drag is low at that altitude the debris is slow to return to Earth, and in June 2007 NASA's Terra environmental spacecraft maneuvered to avoid impact from the debris. Dr. Brian Weeden, U.S. Air Force officer and Secure World Foundation staff member, noted that the 2007 Chinese satellite explosion created orbital debris of more than 3,000 separate objects that then required tracking. On 20 February 2008, the U.S. launched an SM-3 missile from the USS Lake Erie to destroy a defective U.S. spy satellite thought to be carrying 450 kg (1,000 lb) of toxic hydrazine propellant. The event occurred at about 250 km (155 mi), and the resulting debris has a perigee of 250 km (155 mi) or lower. The missile was aimed to minimize the amount of debris, which (according to Pentagon Strategic Command chief Kevin Chilton) had decayed by early 2009. On 27 March 2019, Indian Prime Minister Narendra Modi announced that India shot down one of its own LEO satellites with a ground-based missile. He stated that the operation, part of Mission Shakti, would defend the country's interests in space. Afterwards, US Air Force Space Command announced they were tracking 270 new pieces of debris but expected the number to grow as data collection continues.
The vulnerability of satellites to debris and the possibility of attacking LEO satellites to create debris clouds have triggered speculation that such attacks would be possible even for countries unable to make a precision attack. An attack on a satellite of 10 t (22,000 lb) or more would heavily damage the LEO environment.
However, since the risk to spacecraft increases with the time of exposure to high debris densities, it is more accurate to say that LEO would be rendered unusable by orbiting craft. The threat to craft passing through LEO to reach higher orbit would be much lower owing to the very short time span of the crossing.
Although spacecraft are typically protected by Whipple shields, solar panels, which are exposed to the Sun, wear from low-mass impacts. Even small impacts can produce a cloud of plasma which is an electrical risk to the panels.
Satellites are believed to have been destroyed by micrometeorites and (small) orbital debris (MMOD). The earliest suspected loss was of Kosmos 1275, which disappeared on 24 July 1981 (a month after launch). Kosmos contained no volatile propellant, therefore, there appeared to be nothing internal to the satellite which could have caused the destructive explosion which took place. However, the case has not been proven and another hypothesis forwarded is that the battery exploded. Tracking showed it broke up, into 300 new objects.
Many impacts have been confirmed since. For example, on 24 July 1996, the French microsatellite Cerise was hit by fragments of an Ariane-1 H-10 upper-stage booster which exploded in November 1986.:2 On 29 March 2006, the Russian Ekspress AM11 communications satellite was struck by an unknown object and rendered inoperable. On 13 October 2009, Terra suffered a single battery cell failure anomaly and a battery heater control anomaly which were subsequently considered likely the result of an MMOD strike. On 12 March 2010, Aura lost power from one-half of one of its 11 solar panels and this was also attributed to an MMOD strike. On 22 May 2013, GOES-13 was hit by an MMOD which caused it to lose track of the stars that it used to maintain an operational attitude. It took nearly a month for the spacecraft to return to operation.
The first major satellite collision occurred on 10 February 2009. The 950 kg (2,090 lb) derelict satellite Kosmos 2251 and the operational 560 kg (1,230 lb) Iridium 33 collided, 500 mi (800 km) over northern Siberia. The relative speed of impact was about 11.7 km/s (7.3 mi/s), or about 42,120 km/h (26,170 mph). Both satellites were destroyed, creating thousands of pieces of new smaller debris, with legal and political liability issues unresolved even years later. On 22 January 2013, BLITS (a Russian laser-ranging satellite) was struck by debris suspected to be from the 2007 Chinese anti-satellite missile test, changing both its orbit and rotation rate.
Satellites sometimes perform collision avoidance maneuvers, and satellite operators may monitor space debris as part of maneuver planning. For example, in January 2017, the European Space Agency made the decision to alter the orbit of one of its three Swarm mission spacecraft, based on data from the US Joint Space Operations Center, to lower the risk of collision from Cosmos-375, a derelict Russian satellite.
Crewed flights are naturally particularly sensitive to the hazards that could be presented by space debris conjunctions in the orbital path of the spacecraft. Examples of occasional avoidance maneuvers, or longer-term space debris wear, have occurred in Space Shuttle missions, the MIR space station, and the International Space Station.
From the early Space Shuttle missions, NASA used NORAD space monitoring capabilities to assess the Shuttle's orbital path for debris. In the 1980s, this used a large proportion of NORAD capacity. The first collision-avoidance maneuver occurred during STS-48 in September 1991, a seven-second thruster burn to avoid debris from the derelict satellite Kosmos 955. Similar maneuvers were initiated on missions 53, 72 and 82.
One of the earliest events to publicize the debris problem occurred on Challenger's second flight, STS-7. A fleck of paint struck its front window, creating a pit over 1 mm (0.04 in) wide. On STS-59 in 1994, Endeavour's front window was pitted about half its depth. Minor debris impacts increased from 1998.
Window chipping and minor damage to thermal protection system tiles (TPS) were already common by the 1990s. The Shuttle was later flown tail-first to take a greater proportion of the debris load on the engines and rear cargo bay, which are not used in orbit or during descent, and thus are less critical for post-launch operation. When flying attached to the ISS, the two connected spacecraft were flipped around so the better-armored station shielded the orbiter.
A NASA 2005 study concluded that debris accounted for approximately half of the overall risk to the Shuttle. Executive-level decision to proceed was required if catastrophic impact was likelier than 1 in 200. On a normal (low-orbit) mission to the ISS the risk was approximately 1 in 300, but the Hubble telescope repair mission was flown at the higher orbital altitude of 560 km (350 mi) where the risk was initially calculated at a 1-in-185 (due in part to the 2009 satellite collision). A re-analysis with better debris numbers reduced the estimated risk to 1 in 221, and the mission went ahead.
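Per-mission figures like these can be combined into a cumulative risk over a series of flights with simple probability arithmetic. The sketch below is illustrative only (it assumes independent missions and a made-up flight count); it is not how NASA performed its assessments.

```python
# Chance of at least one catastrophic debris strike over several missions,
# assuming each mission carries an independent 1-in-300 risk.
per_mission_risk = 1 / 300
missions = 25                # hypothetical number of flights

p_at_least_one = 1 - (1 - per_mission_risk) ** missions
print(f"{p_at_least_one:.1%}")   # about 8% over 25 such missions
```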
Debris incidents continued on later Shuttle missions. During STS-115 in 2006 a fragment of circuit board bored a small hole through the radiator panels in Atlantiss cargo bay. On STS-118 in 2007 debris blew a bullet-like hole through Endeavours radiator panel.
The ISS also uses Whipple shielding to protect its interior from minor debris. However, exterior portions (notably its solar panels) cannot be protected easily. In 1989, the ISS panels were predicted to degrade approximately 0.23% in four years due to the "sandblasting" effect of impacts with small orbital debris. An avoidance maneuver is typically performed for the ISS if "there is a greater than one-in-10,000 chance of a debris strike". As of January 2014 , there have been sixteen maneuvers in the fifteen years the ISS had been in orbit.
As another method to reduce the risk to humans on board, ISS operational management asked the crew to shelter in the Soyuz on three occasions due to late debris-proximity warnings. In addition to the sixteen thruster firings and three Soyuz-capsule shelter orders, one attempted maneuver was not completed due to not having the several days' warning necessary to upload the maneuver timeline to the station's computer. A March 2009 event involved debris believed to be a 10 cm (3.9 in) piece of the Kosmos 1275 satellite. In 2013, the ISS operations management did not make a maneuver to avoid any debris, after making a record four debris maneuvers the previous year.
The Kessler syndrome, proposed by NASA scientist Donald J. Kessler in 1978, is a theoretical scenario in which the density of objects in low Earth orbit (LEO) is high enough that collisions between objects could cause a cascade effect where each collision generates space debris that increases the likelihood of further collisions. He further theorized that one implication if this were to occur is that the distribution of debris in orbit could render space activities and the use of satellites in specific orbital ranges economically impractical for many generations.
The growth in the number of objects as a result of the late-1990s studies sparked debate in the space community on the nature of the problem and the earlier dire warnings. According to Kessler's 1991 derivation and 2001 updates, the LEO environment in the 1,000 km (620 mi) altitude range should be cascading. However, only one major satellite collision incident has occurred: the 2009 satellite collision between Iridium 33 and Cosmos 2251. The lack of obvious short-term cascading has led to speculation that the original estimates overstated the problem. According to Kessler in 2010 however, a cascade may not be obvious until it is well advanced, which might take years.
Although most debris burns up in the atmosphere, larger debris objects can reach the ground intact. According to NASA, an average of one cataloged piece of debris has fallen back to Earth each day for the past 50 years. Despite their size, there has been no significant property damage from the debris. Burning up in the atmosphere may also contribute to atmospheric pollution.
Notable examples of space junk falling to Earth and impacting human life include:
Radar and optical detectors such as lidar are the main tools for tracking space debris. Although objects under 10 cm (4 in) have reduced orbital stability, debris as small as 1 cm can be tracked, however determining orbits to allow re-acquisition is difficult. Most debris remain unobserved. The NASA Orbital Debris Observatory tracked space debris with a 3 m (10 ft) liquid mirror transit telescope. FM Radio waves can detect debris, after reflecting off them onto a receiver. Optical tracking may be a useful early-warning system on spacecraft.
The U.S. Strategic Command keeps a catalog of known orbital objects, using ground-based radar and telescopes, and a space-based telescope (originally to distinguish from hostile missiles). The 2009 edition listed about 19,000 objects. Other data come from the ESA Space Debris Telescope, TIRA, the Goldstone, Haystack, and EISCAT radars and the Cobra Dane phased array radar, to be used in debris-environment models like the ESA Meteoroid and Space Debris Terrestrial Environment Reference (MASTER).
Returned space hardware is a valuable source of information on the directional distribution and composition of the (sub-millimetre) debris flux. The LDEF satellite deployed by mission STS-41-C Challenger and retrieved by STS-32 Columbia spent 68 months in orbit to gather debris data. The EURECA satellite, deployed by STS-46 Atlantis in 1992 and retrieved by STS-57 Endeavour in 1993, was also used for debris study.
The solar arrays of Hubble were returned by missions STS-61 Endeavour and STS-109 Columbia, and the impact craters studied by the ESA to validate its models. Materials returned from Mir were also studied, notably the Mir Environmental Effects Payload (which also tested materials intended for the ISS).
A debris cloud resulting from a single event is studied with scatter plots known as Gabbard diagrams, in which the perigee and apogee of each fragment are plotted against its orbital period. Where the data are available, Gabbard diagrams of the early debris cloud, before perturbations have acted on it, are reconstructed. They often include data on newly observed, as yet uncatalogued fragments. Gabbard diagrams can provide important insights into the features of the fragmentation, such as the direction and point of impact.
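As an illustration of the plot itself, the short sketch below draws a Gabbard-style diagram from a handful of made-up fragment records; the period and altitude values are invented for demonstration and do not come from any real breakup.

```python
import matplotlib.pyplot as plt

# Hypothetical fragment data: (orbital period [min], perigee altitude [km], apogee altitude [km])
fragments = [
    (94.1, 310, 640),
    (96.5, 420, 760),
    (99.1, 450, 980),
    (102.2, 470, 1250),
]

periods = [f[0] for f in fragments]
perigees = [f[1] for f in fragments]
apogees = [f[2] for f in fragments]

# Each fragment contributes two points: its apogee and its perigee, both against its period.
plt.scatter(periods, apogees, marker="^", label="apogee")
plt.scatter(periods, perigees, marker="v", label="perigee")
plt.xlabel("orbital period (minutes)")
plt.ylabel("altitude (km)")
plt.title("Gabbard diagram (illustrative data)")
plt.legend()
plt.show()
```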
An average of about one tracked object per day has been dropping out of orbit for the past 50 years, averaging almost three objects per day at solar maximum (due to the heating and expansion of the Earth's atmosphere), but one about every three days at solar minimum, usually five and a half years later. In addition to natural atmospheric effects, corporations, academics and government agencies have proposed plans and technology to deal with space debris, but as of November 2014, most of these are theoretical, and there is no extant business plan for debris reduction.
A number of scholars have also observed that institutional factors--political, legal, economic, and cultural "rules of the game"--are the greatest impediment to the cleanup of near-Earth space. There is no commercial incentive, since costs are not assigned to polluters, but a number of suggestions have been made. However, effects to date are limited. In the US, governmental bodies have been accused of backsliding on previous commitments to limit debris growth, "let alone tackling the more complex issues of removing orbital debris." The different methods for removal of space debris have been evaluated by the Space Generation Advisory Council, including French astrophysicist Fatoumata Kébé.
As of the 2010s, several technical approaches to mitigating the growth of space debris are typically undertaken, yet no comprehensive legal regime or cost-assignment structure is in place to reduce space debris in the way that terrestrial pollution has been reduced since the mid-20th century.
To avoid the excessive creation of artificial space debris, many--but not all--satellites launched to above-low-Earth orbit are initially launched into elliptical orbits with perigees inside Earth's atmosphere, so the orbit will quickly decay and the satellite will be destroyed on reentry. Other methods are used for spacecraft in higher orbits. These include passivation of the spacecraft at the end of its useful life; the use of upper stages that can reignite to decelerate the stage and intentionally deorbit it, often on the first or second orbit following payload release; and satellites that, if they remain healthy for years, can deorbit themselves from the lower orbits around Earth. Other satellites (such as many CubeSats) in low orbits below approximately 400 km (250 mi) orbital altitude depend on the energy-absorbing effects of the upper atmosphere to reliably deorbit within weeks or months.
Increasingly, spent upper stages in higher orbits--orbits for which a low-delta-v deorbit is not possible, or not planned for--are passivated at the end of life, as are satellites whose architectures support it. Passivation removes any internal energy contained in the vehicle at the end of its mission or useful life. While this does not remove the debris of the now-derelict rocket stage or satellite itself, it does substantially reduce the likelihood of the spacecraft breaking up and creating many smaller pieces of space debris, a phenomenon that was common in many of the early generations of US and Soviet spacecraft.
Upper-stage passivation (e.g. of Delta boosters) by releasing residual propellants reduces debris from orbital explosions; however, even as late as 2011, not all upper stages implemented this practice. SpaceX used the term "propulsive passivation" for the final maneuver of their six-hour demonstration mission (STP-2) of the Falcon 9 second stage for the US Air Force in 2019, but did not define all that the term encompassed.
With a "one-up, one-down" launch-license policy for Earth orbits, launchers would rendezvous with, capture and de-orbit a derelict satellite from approximately the same orbital plane. Another possibility is the robotic refueling of satellites. Experiments have been flown by NASA, and SpaceX is developing large-scale on-orbit propellant transfer technology.
Another approach to debris mitigation is to explicitly design the mission architecture to always leave the rocket second stage in an elliptical geocentric orbit with a low perigee, thus ensuring rapid orbital decay and avoiding long-term orbital debris from spent rocket bodies. Such missions will often complete the payload placement in a final orbit by the use of low-thrust electric propulsion or with the use of a small kick stage to circularize the orbit. The kick stage itself may be designed with excess-propellant capability to be able to self-deorbit.
Although the ITU requires geostationary satellites to move to a graveyard orbit at the end of their lives, the selected orbital areas do not sufficiently protect GEO lanes from debris. Rocket stages (or satellites) with enough propellant may make a direct, controlled de-orbit, or if this would require too much propellant, a satellite may be brought to an orbit where atmospheric drag would cause it to eventually de-orbit. This was done with the French Spot-1 satellite, reducing its atmospheric re-entry time from a projected 200 years to about 15 by lowering its altitude from 830 km (516 mi) to about 550 km (342 mi).
The Iridium constellation--95 communication satellites launched during the five-year period between 1997 and 2002--provides a set of data points on the limits of self-removal. The satellite operator--Iridium Communications--remained operational (albeit with a company name change through a corporate bankruptcy during the period) over the two-decade life of the satellites, and by December 2019 had "completed disposal of the last of its 65 working legacy satellites." However, this process left nearly one-third of the mass of this constellation (30 satellites, 20,400 kg (45,000 lb) of materiel) in LEO orbits at approximately 700 km (430 mi) altitude, where self-decay is quite slow. 29 of these satellites simply failed during their time in orbit and were thus unable to self-deorbit, while one--Iridium 33--was involved in the 2009 satellite collision with the derelict Russian military Kosmos-2251 satellite. No "Plan B" provision was designed in for removal of the satellites that were unable to remove themselves. However, in 2019, Iridium CEO Matt Desch said that Iridium would be willing to pay an active-debris-removal company to deorbit its remaining first-generation satellites if it could be done for a sufficiently low cost, say US$10,000 per deorbit, but he acknowledged that such a price would likely be far below what a debris-removal company could realistically offer: "You know at what point [it's] a no-brainer, but [I] expect the cost is really in the millions or tens of millions, at which price I know it doesn't make sense."
Passive methods of increasing the orbital decay rate of spacecraft debris have been proposed. Instead of rockets, an electrodynamic tether could be attached to a spacecraft at launch; at the end of its lifetime, the tether would be rolled out to slow the spacecraft. Other proposals include a booster stage with a sail-like attachment and a large, thin, inflatable balloon envelope.
A variety of approaches have been proposed, studied, or had ground subsystems built to use other spacecraft to remove existing space debris. A consensus of speakers at a meeting in Brussels in October 2012, organized by the Secure World Foundation (a U.S. think tank) and the French International Relations Institute, reported that removal of the largest debris would be required to prevent the risk to spacecraft from becoming unacceptable in the foreseeable future (even without any addition to the inventory of dead spacecraft in LEO). As of 2019, removal costs and legal questions about ownership and the authority to remove defunct satellites have stymied national or international action. Current space law retains ownership of all satellites with their original operators, even debris or spacecraft which are defunct or threaten active missions.
This began to change in the late 2010s, as some companies made plans to carry out external removal of their satellites in mid-LEO orbits. For example, OneWeb plans to use onboard self-removal as "plan A" for satellite deorbiting at the end of life, but if a satellite is unable to remove itself within one year of end of life, OneWeb will implement "plan B" and dispatch a reusable (multi-transport-mission) space tug to attach to the satellite at an already built-in capture target via a grappling fixture, after which the satellite will be towed to a lower orbit and released for re-entry.
A well-studied solution uses a remotely controlled vehicle to rendezvous with, capture, and return debris to a central station. One such system is Space Infrastructure Servicing, a commercially developed refueling depot and service spacecraft for communications satellites in geosynchronous orbit, originally scheduled for a 2015 launch. The SIS would be able to "push dead satellites into graveyard orbits." The Advanced Common Evolved Stage family of upper stages is being designed with a high leftover-propellant margin (for derelict capture and de-orbit) and in-space refueling capability for the high delta-v required to de-orbit heavy objects from geosynchronous orbit. A tug-like satellite to drag debris to a safe altitude for it to burn up in the atmosphere has also been researched. When debris is identified, the satellite creates a difference in potential between the debris and itself, and then uses its thrusters to move itself and the debris to a safer orbit.
A variation of this approach is for the remotely controlled vehicle to rendezvous with debris, capture it temporarily to attach a smaller de-orbit satellite and drag the debris with a tether to the desired location. The "mothership" would then tow the debris-smallsat combination for atmospheric entry or move it to a graveyard orbit. One such system is the proposed Busek ORbital DEbris Remover (ORDER), which would carry over 40 SUL (satellite on umbilical line) de-orbit satellites and propellant sufficient for their removal.
On 7 January 2010, Star, Inc. reported that it had received a contract from the Space and Naval Warfare Systems Command for a feasibility study of the ElectroDynamic Debris Eliminator (EDDE) propellantless spacecraft for space-debris removal. In February 2012, the Swiss Space Center at École Polytechnique Fédérale de Lausanne announced the CleanSpace One project, a nanosatellite demonstration project for matching orbit with a defunct Swiss nanosatellite, capturing it, and de-orbiting together. The mission has gone through several design evolutions to reach a Pac-Man-inspired capture model. In 2013, Space Sweeper with Sling-Sat (4S), a grappling satellite which captures and ejects debris, was studied.
In December 2019, the European Space Agency awarded the first contract to clean up space debris. The €120 million mission, dubbed ClearSpace-1 (a spinoff from the EPFL project), is slated to launch in 2025. It aims to remove a 100 kg Vega Secondary Payload Adapter (Vespa) left by Vega flight VV02 in an 800 km (500 mi) orbit in 2013. A "chaser" will grab the junk with four robotic arms and drag it down to Earth's atmosphere, where both will burn up.
The laser broom uses a ground-based laser to ablate the front of the debris, producing a rocket-like thrust that slows the object. With continued application, the debris would fall enough to be influenced by atmospheric drag. During the late 1990s, the U.S. Air Force's Project Orion was a laser-broom design. Although a test-bed device was scheduled to launch on a Space Shuttle in 2003, international agreements banning powerful laser testing in orbit limited its use to measurements. The 2003 Space Shuttle Columbia disaster postponed the project and according to Nicholas Johnson, chief scientist and program manager for NASA's Orbital Debris Program Office, "There are lots of little gotchas in the Orion final report. There's a reason why it's been sitting on the shelf for more than a decade."
The momentum of the laser-beam photons could directly impart a thrust on the debris sufficient to move small debris into new orbits out of the way of working satellites. NASA research in 2011 indicates that firing a laser beam at a piece of space junk could impart an impulse of 1 mm (0.039 in) per second, and keeping the laser on the debris for a few hours per day could alter its course by 200 m (660 ft) per day. One drawback is the potential for material degradation; the energy may break up the debris, adding to the problem. A similar proposal places the laser on a satellite in Sun-synchronous orbit, using a pulsed beam to push satellites into lower orbits to accelerate their reentry. A proposal to replace the laser with an Ion Beam Shepherd has been made, and other proposals use a foamy ball of aerogel or a spray of water, inflatable balloons, electrodynamic tethers, electroadhesion, and dedicated anti-satellite weapons.
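As a rough plausibility check on those figures (my own back-of-envelope estimate, not NASA's calculation, assuming a debris object in a circular orbit at roughly 800 km altitude), a 1 mm/s along-track velocity change shifts the object's along-track position by a few hundred metres over a day:

```python
import math

MU = 398_600.0        # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378.0     # km, mean equatorial radius
altitude = 800.0      # km, assumed debris altitude
dv = 1e-6             # km/s, i.e. a 1 mm/s along-track velocity change

a = R_EARTH + altitude
v = math.sqrt(MU / a)                          # circular orbital speed
period = 2 * math.pi * math.sqrt(a**3 / MU)    # orbital period in seconds

da = 2 * a * dv / v                            # semi-major-axis change from a tangential dv
drift_per_orbit = 3 * math.pi * da             # classic along-track drift per revolution
orbits_per_day = 86_400 / period

print(f"along-track drift: ~{drift_per_orbit * orbits_per_day * 1000:.0f} m/day")
```

With these assumptions the result comes out near 260 m per day, the same order of magnitude as the 200 m (660 ft) per day quoted above.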
On 28 February 2014, the Japan Aerospace Exploration Agency (JAXA) launched a test "space net" satellite. The launch was an operational test only. In December 2016, Japan sent a space-junk collector to the ISS via Kounotori 6, with which JAXA scientists experimented with pulling junk out of orbit using a tether. The system failed to extend a 700-meter tether from the space station resupply vehicle as it was returning to Earth. On 6 February the mission was declared a failure, and leading researcher Koichi Inoue told reporters that they "believe the tether did not get released".
Since 2012, the European Space Agency has been working on the design of a mission to remove large space debris from orbit. The mission, e.Deorbit, is scheduled for launch during 2023 with an objective to remove debris heavier than 4,000 kilograms (8,800 lb) from LEO. Several capture techniques are being studied, including a net, a harpoon and a combination robot arm and clamping mechanism.
The RemoveDEBRIS mission plan is to test the efficacy of several ADR technologies on mock targets in low Earth orbit. In order to complete its planned experiments the platform is equipped with a net, a harpoon, a laser ranging instrument, a dragsail, and two CubeSats (miniature research satellites). The mission was launched on 2 April 2018.
There is no international treaty minimizing space debris. However, the United Nations Committee on the Peaceful Uses of Outer Space (COPUOS) published voluntary guidelines in 2007, using a variety of earlier national regulatory attempts at developing standards for debris mitigation. As of 2008, the committee was discussing international "rules of the road" to prevent collisions between satellites. By 2013, a number of national legal regimes existed, typically instantiated in the launch licenses that are required for a launch in all spacefaring nations.
The U.S. issued a set of standard practices for civilian (NASA) and military (DoD and USAF) orbital-debris mitigation in 2001. The standard envisioned disposal from final mission orbits in one of three ways: 1) atmospheric reentry, where even with "conservative projections for solar activity, atmospheric drag will limit the lifetime to no longer than 25 years after completion of mission"; 2) maneuver to a "storage orbit": move the spacecraft to one of four very broad parking-orbit ranges (2,000-19,700 km (1,200-12,200 mi), 20,700-35,300 km (12,900-21,900 mi), above 36,100 km (22,400 mi), or out of Earth orbit completely and into any heliocentric orbit); 3) "Direct retrieval: Retrieve the structure and remove it from orbit as soon as practicable after completion of mission." Option 1, which is the standard applicable to most satellites and derelict upper stages launched, has come to be known as the "25-year rule." The US updated the Orbital Debris Mitigation Standard Practices (ODMSP) in December 2019, but made no change to the 25-year rule even though "[m]any in the space community believe that the timeframe should be less than 25 years." There is, however, no consensus on what any new timeframe might be.
In 2002, the European Space Agency (ESA) worked with an international group to promulgate a similar set of standards, also with a "25-year rule" applying to most Earth-orbit satellites and upper stages. Space agencies in Europe began to develop technical guidelines in the mid-1990s, and ASI, UKSA, CNES, DLR and ESA signed a "European Code of Conduct" in 2006, which was a predecessor standard to the ISO international standard work that would begin the following year. In 2008, ESA further developed its own "Requirements on Space Debris Mitigation for Agency Projects", which came into force on 1 April 2008.
Germany and France have posted bonds to safeguard property from debris damage. The "direct retrieval" option (option 3 in the US "standard practices" above) has rarely been exercised by any spacefaring nation (with the exception of the USAF X-37) or commercial actor since the earliest days of spaceflight, because of the cost and complexity of achieving direct retrieval, but the ESA has scheduled a 2025 demonstration mission (ClearSpace-1) to do this with a single small 100 kg (220 lb) derelict upper stage at a projected cost of €120 million, not including launch costs.
By 2006, the Indian Space Research Organization (ISRO) had developed a number of technical means of debris mitigation (upper stage passivation, propellant reserves for movement to graveyard orbits, etc.) for ISRO launch vehicles and satellites, and was actively contributing to inter-agency debris coordination and the efforts of the UN COPUOS committee.
In 2007, the ISO began preparing an international standard for space-debris mitigation. By 2010, ISO had published "a comprehensive set of space system engineering standards aimed at mitigating space debris", with primary requirements defined in the top-level standard, ISO 24113. By 2017, the standards were nearly complete. However, these standards are not binding on any party through ISO or any international jurisdiction. They are simply available for use in any of a variety of voluntary ways. They "can be adopted voluntarily by a spacecraft manufacturer or operator, or brought into effect through a commercial contract between a customer and supplier, or used as the basis for establishing a set of national regulations on space debris mitigation."
The voluntary ISO standard also adopted the "25-year rule" for the "LEO protected region" below 2,000 km (1,200 mi) altitude that has been previously (and still is, as of 2019) used by the US, ESA, and UN mitigation standards, and identifies it as "an upper limit for the amount of time that a space system shall remain in orbit after its mission is completed. Ideally, the time to deorbit should be as short as possible (i.e., much shorter than 25 years)".
Holger Krag of the European Space Agency stated that as of 2017 there was no binding international regulatory framework, and no progress was occurring at the respective UN body in Vienna.
Until the End of the World (1991) is a French sci-fi drama set against the backdrop of an out-of-control Indian nuclear satellite, predicted to re-enter the atmosphere and threatening vast populated areas of the Earth.
In Planetes, a Japanese hard science-fiction manga (1999-2004) and anime (2003-2004), the story revolves around the crew of a space-debris collection craft in the year 2075.
Despite growing concern about the threat posed by orbital debris, and language in U.S. national space policy directing government agencies to study debris cleanup technologies, many in the space community worry that the government is not doing enough to implement that policy.
The December 2019 revision of the U.S. orbital-debris mitigation standard practices was the first update of the guidelines since their publication in 2001, and reflects a better understanding of satellite operations and other technical issues that contribute to the growing population of orbital debris. The new 2019 guidelines did not, however, address one of the biggest issues regarding debris mitigation: whether to reduce the 25-year timeframe for deorbiting satellites after the end of their mission. Many in the space community believe that timeframe should be less than 25 years. | https://popflock.com/learn?s=Space_debris | 21
23 | Direct taxes are a common form of taxation, with examples you might recognize from the taxes you come across every year.
When you pay a tax directly to the government, this counts as a direct tax. For example, if you pay income tax, property tax, or capital gains tax, you have paid a direct tax.
On the other hand, if you make a purchase and someone then pays a tax on your behalf via a sales tax, this represents an indirect tax.
Let's take a closer look at what direct taxes are, common examples of this type of tax, and how they compare to indirect taxes.
What is a direct tax?
A direct tax is imposed on you directly and you can't pass it on to another person. As an individual, this may include taxes such as those you pay on income, property, or assets.
You don't generally have the ability to avoid these taxes, as the government imposes them on you in an unconditional manner. The only way to avoid them would be if you weren't working or didn't own property or assets, respectively. For businesses, this includes income taxes and may also include items like franchise taxes, use taxes, or environmental taxes.
What is the difference between direct and indirect taxes?
Taxes collected by the government can either be direct or indirect. You face a direct tax when you pay money directly to the government. You can't shift these taxes to another person or group, and they remain your responsibility to pay.
But indirect taxes can be shifted. You pay indirect taxes on transactions when you make a purchase that falls subject to sales tax, excise taxes, or fuel taxes. While the seller must pay the tax, the seller can choose how to recover these costs. They can choose to charge a higher price to collect the indirect tax or pay it on your behalf by reducing their profits or dividends or by lowering wages to workers.
For example, when you eat at a restaurant you pay taxes on your meal. The restaurant owner collects them from you and pays the taxes on your behalf.
Because businesses can pass on all indirect taxes to customers, individuals ultimately pay all taxes.
What is the ability-to-pay principle?
Direct taxes tend to follow the ability-to-pay principle, which means if you have more financial resources, such as a higher income or net worth, you should pay more in taxes. This progressive tax system relies on the idea that if you or your business earns more income, you can afford to pay more in taxes than lower-income earners.
The U.S. Constitution originally created the distinction between direct and indirect taxes when it stated direct taxes needed to be directly apportioned to a state's population. In other words, if you had one state with half the population of another, the smaller state would pay 50% less in direct taxes.
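To make the apportionment arithmetic concrete: under that original rule, if Congress had levied a $30 million direct tax on two states, one with 10 million residents and one with 20 million, the smaller state's share would have been $10 million and the larger state's $20 million (the dollar amounts and populations here are purely illustrative).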
This changed with the passage of the 16th Amendment in 1913. Prior to this constitutional amendment, the federal government had limited ability to impose many direct taxes.
This amendment established the personal income tax and ended direct taxes tied to population levels at the state level. Specifically, the 16th Amendment states, "Congress shall have the power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several states, and without regard to any census or enumeration."
Remember, with TurboTax, we'll ask you simple questions about your life and help you fill out all the right tax forms. Whether you have a simple or complex tax situation, we've got you covered. Feel confident doing your own taxes. | https://turbotax.intuit.com/tax-tips/general/what-is-direct-tax/L6yHGGmVe | 21 |
20 | Sheet erosion is the more or less uniform removal of soil in thin layers from the land surface, like a bed sheet sliding off a bed, and it occurs when rainfall intensity is greater than the soil's infiltration rate (sometimes due to surface crusting). It proceeds in two steps: first, rainsplash dislodges small particles of the substrate, and then the particles are carried away, usually short distances, by a thin and uniform layer of water known as sheetflow. Water moving fairly uniformly with a similar thickness over a surface is called sheet flow and is the agent of sheet erosion; a sheetflood can be distinguished from ordinary sheetflow by its much greater magnitude and much lower frequency. Once eroded soil enters overland flow, whether as aggregates or primary particles, a significant proportion of it returns to the soil bed, forming a cohesionless deposited layer from which it can later be removed again. Because only a thin layer of topsoil is removed over a whole hillside or paddock, sheet erosion may not be readily noticed, yet the sheet erosion caused by a single rainstorm may account for the loss of up to a hundred tons of small particles from an acre, and at the hillslope scale it has been estimated to be two orders of magnitude greater than gully erosion. A typical feature of sheet erosion is the laminar lowering of the surface, which is indicated by uncovered tree roots. It occurs in a wide range of settings such as coastal plains, hillslopes, floodplains and beaches, and it has been argued that in the late Neoproterozoic Era, before plants covered the land, sheet erosion was a dominant process that helped shape landforms such as the Sub-Cambrian peneplain covering much of the Baltic Shield. Sheet erosion should not be confused with dry mechanical (tillage) erosion; the two have often been mixed up, and dry mechanical erosion by implements probably has two to ten times the effect of sheet erosion (Wassmer 1981, Nyamulinda 1989).

Sheet erosion is the first of three stages of erosion by running water. Where runoff concentrates, small channels form and rill erosion begins; these finger-like rills are usually smoothed out every year by normal farm operations, since the surface of most fields is irregular, with low and high places, rough and smooth places, and several kinds of soil even within a 5 hectare field. Lower down the slope, where the rills flow together, gully erosion occurs; gullies and ravines cut by concentrated flow produce badland topography, and flowing water during floods also erodes potholes and rock-cut basins and scours and undercuts stream banks, a continuous process in perennial streams that is aggravated by removal of vegetation, overgrazing or cultivation close to the banks. Erosion and changes in the form of river banks may be measured by inserting metal rods into the bank and marking the position of the bank surface along the rods at different times. Part of the rainfall instead soaks (infiltrates) into the ground and adds to essential ground-water storage that can be accessed by karez and tube wells, and tough grass such as vetiver hinders the development of sheet flow.

Regions with very heavy and frequent rainfall face large soil losses. In India, for example, soil erosion due to floods affects some 140 million hectares of land, with a loss of 6,000 MT of fertile soil containing 5.5 MT of NPK, and according to estimates by the Indian Council of Agricultural Research (ICAR) the loss due to water erosion amounts to 53.34 million hectares annually. Wind erosion is found mainly in the north-west of the country and water erosion mostly in hilly areas, while the south and south-east are characterized by undulating terrain with severe erosion in black and red laterite soils, and floods, stream-bank cutting and sand deposition have degraded lands in the heavy-rainfall north-east. | http://ekodev3.com/d2bial/0736a6-sheet-erosion-is-due-to | 21
58 | “Chemical Bond is the force of attraction that holds the atoms together in a molecule of a compound”.
Atoms tend to form a bond for the following two reasons.
- Atoms attain maximum stability by having eight electrons in their valence shell, like an inert gas (the octet rule).
- Atoms are much smaller in size and have higher energy; therefore, they combine to form molecules, which are relatively bigger in size and have lower energy.
There are three types of chemical bond:
- Ionic bond or Electrovalent bond.
- Covalent bond.
- Dative or Co-ordinate Covalent Bond.
“the bond formed by the complete transfer of one or more electrons from an electropositive atom to a more electronegative atom is called an ionic bond”.
“the electrostatic force of attraction which holds the positive and negative ions together in an ionic compound is called an ionic bond”.
An ionic bond is usually formed between an element of low electronegativity (a metal) and an element of higher electronegativity (a non-metal). An electronegativity difference of 1.7 or more between two atoms usually leads to an ionic bond.
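For instance, on the Pauling scale sodium has an electronegativity of roughly 0.9 and chlorine roughly 3.2 (approximate values quoted here for illustration), a difference of about 2.3, which is well above 1.7, so the Na-Cl bond is ionic; hydrogen (about 2.2) and chlorine differ by only about 1.0, so the H-Cl bond is polar covalent rather than ionic.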
Formation of NaCl (Ionic Compound)
Formation of NaCl consists of the following steps:
Sodium in the ground state has the electronic configuration Na (Z = 11) = 1s2, 2s2, 2p6, 3s1.
It has only one valence electron. The loss of this valence electron requires some energy; the resulting Na+ ion has a complete octet.
Na(g) → Na+(g) + e– ΔH = +495 kJ/mol
Chlorine in the ground state has the electronic configuration Cl (Z = 17) = 1s2, 2s2, 2p6, 3s2, 3p5.
Chlorine has seven valence electrons. It needs one more electron to complete its octet. The gain of an electron by chlorine releases energy.
Cl(g) + e– → Cl–(g) ΔH = -348 kJ/mol
The oppositely charged ions (Na+ and Cl–) formed are held together by the electrostatic force of attraction, and the formation of the crystal lattice releases energy.
Na+(g) + Cl–(g) → NaCl(s) ΔH (U) = -788 kJ/mol
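Adding the three steps above using the figures already given yields an overall energy change of roughly +495 - 348 - 788 = -641 kJ/mol, so forming crystalline NaCl from the gaseous atoms is strongly exothermic; it is the large lattice energy released in the final step that makes the overall ionic bond formation energetically favourable.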
Properties of Ionic Compounds
- They are non-volatile, crystalline solids at room temperature due to the strong attractive forces.
- They possess high melting and boiling points.
- They are soluble in the water and insoluble in non-polar solvents.
- They are strong electrolytes because they conduct electricity in the molten or aqueous solution form.
- They undergo fast reactions.
- The bond in an ionic compound is non-directional.
“The force of attraction produced as a result of the mutual sharing of electrons between two atoms is called the covalent bond; it is indicated by a complete line”.
A covalent bond is usually formed between two like or unlike non-metal atoms. Both atoms contribute an equal share of electrons to the formation of the covalent bond, and the shared pair of electrons belongs equally to both bonded atoms.
Each hydrogen atom has one electron in its valence shell; two electrons, one from each hydrogen atom, are shared to form the hydrogen molecule.
H· + H· → H : H or H – H
Types of Covalent Bond
Single Covalent Bond
A type of covalent bond which is formed by the mutual sharing of one electron from each atom is called a single covalent bond. It is denoted by a single complete line.
Formation of Cl2 Molecule
The electronic configuration of chlorine (Z = 17) is 1s2, 2s2, 2p6, 3s2, 3p5. There are seven electrons in the valence shell of the chlorine atom. In the chlorine molecule, each atom contributes one valence electron.
Double Covalent Bond
A type of covalent bond which is formed by the mutual sharing of two electrons from each atom is called a double covalent bond. It is denoted by a double complete line.
Formation of O2 Molecule
The electronic configuration of oxygen (Z = 8) is 1s2, 2s2, 2p4. There are six electrons in the valence shell of the oxygen atom. In the oxygen molecule, each atom contributes two valence electrons.
Triple Covalent Bond
A type of covalent bond which is formed by the mutual sharing of three electrons from each atom is called a triple covalent bond. It is denoted by a triple complete line.
Formation of N2 Molecule
The electronic configuration of nitrogen (Z = 7) is 1s2, 2s2, 2p3. There are five electrons in the valence shell of the nitrogen atom. In the nitrogen molecule, each atom contributes three valence electrons.
Properties of Covalent Compounds
- Covalent compounds exist as separate covalent molecules because the particles are electrically neutral and experience only weak attractive forces; therefore, covalent compounds are volatile liquids or gases, or solids with low melting points.
- They are non-electrolytes.
- They are generally insoluble in water and similar polar solvents but soluble in non-polar solvents.
- The reactions of covalent compounds are much slower than those of ionic compounds.
- A covalent bond in covalent compounds is directional in nature.
Polar Covalent Bond
A covalent bond in which the shared pair of electrons is attracted unequally by the two bonded atoms is known as a polar covalent bond.
When a covalent bond is formed between two dissimilar atoms having different electronegativities, the shared pair of electrons is slightly shifted towards the more electronegative atom. As a result, the atoms become partially charged, and such a molecule is referred to as a dipole. A polar covalent bond has partial ionic character.
Non-Polar Covalent Bond
A covalent bond in which the shared pair of electrons is attracted equally by the two bonded atoms is known as a non-polar bond.
When a covalent bond is formed between two similar atoms, or atoms having nearly the same electronegativity, the shared pair of electrons is attracted equally from both sides; no separation of charge takes place, and hence no poles appear. Such molecules are called non-polar molecules.
Dative or Coordinate Covalent Bond
The type of chemical bond which is formed by the one-sided sharing of an electron pair by one of the bonded atoms is known as a coordinate covalent bond or dative bond.
The dative bond is represented by the use of an arrow (→) from the donor atom to an acceptor atom.
The atom which donates the lone pair of electrons is called the donor, and the atom which accepts this lone pair to complete its valence shell is called the acceptor.
Sigma Bond (σ)
A type of covalent bond which is formed by the end-on or head-to-head overlapping of half-filled atomic orbitals is referred to as a sigma bond.
Only one sigma bond can be formed between two atoms. All single covalent bonds are sigma bonds.
On the basis of overlapping orbitals there are three types of the sigma bond:
- s-s sigma bond (e.g: formation of H2 molecule)
- s-p sigma bond (e.g: formation of HF molecule)
- p-p sigma bond (e.g: formation of F2 molecule)
The relative bond strengths of sigma bonds are:
s-s > s-p > p-p
Due to the spherical charge distribution of the s-orbital, s-s overlapping is generally not as effective as s-p and p-p overlapping, whereas a p-orbital has a directional charge distribution and longer lobes, which allow more effective overlapping. Thus the s-s sigma bond is relatively weak.
The type of covalent bond which is formed between two already sigma-bonded atoms by the sideways overlap of two half-filled atomic p-orbitals whose axes are parallel to each other is called a pi bond.
Every double covalent bond consists of one sigma bond and one pi bond while every triple covalent bond contains one sigma and two pi-bonds. | https://taveel.com/chemical-bond/ | 21 |
15 | When the stock market crashed in 1929, the Federal Reserve Board was unable to prevent it from triggering the Great Depression. During the twenties, many banks had invested their savings deposits in the stock market. They had also loaned money to speculators who were buying stock on credit. When the market crashed, these individuals could not cover their loans. As a result, the banks lost the money they had loaned for stock speculation. Although the Federal Reserve Board had recognized in the late twenties that speculation was out of control and had tried to adjust interest rates accordingly, it could not protect individual banks from their unsound loan policies.
Once banks began to fail, Americans began to lose confidence in the nation's entire banking system. Thousands of Americans rushed to withdraw their savings from the banks before they closed. This action placed even more pressure on the nation's banks. As a result, during the first three years of the Great Depression, five thousand banks failed and nine million Americans lost their savings accounts. The Federal Reserve's failure to prevent the widespread collapse of the nation's banking system in the late 1920s and early 1930s led to a severe contraction (reduction) in the nation's supply of money in circulation. | https://essaydocs.org/the-great-depression-and-new-deal-study-guide.html?page=4 | 21
542 | A carbon tax is a tax levied on the carbon emissions required to produce goods and services. Carbon taxes are intended to make visible the "hidden" social costs of carbon emissions, which are otherwise felt only in indirect ways like more severe weather events. In this way, they are designed to reduce carbon dioxide (CO2) emissions by increasing prices. This both decreases demand for such goods and services and incentivizes efforts to make them less carbon-intensive. In its simplest form, a carbon tax covers only CO2 emissions; however, they can also cover other greenhouse gases, such as methane or nitrous oxide, by calculating their global warming potential relative to CO2.
When a hydrocarbon fuel such as coal, petroleum, or natural gas is burnt, its carbon is converted to CO2 and other carbon compounds/allotropes. Greenhouse gases cause global warming, which damages the environment and human health. This negative externality can be reduced by taxing carbon content at any point in the product cycle. Carbon taxes are thus a type of Pigovian tax. Research shows that carbon taxes effectively reduce emissions. Many economists argue that carbon taxes are the most efficient (lowest cost) way to curb climate change. Seventy-seven countries and over 100 cities have committed to achieving net zero emissions by 2050. As of 2019, carbon taxes have been implemented or scheduled for implementation in 25 countries, while 46 countries put some form of price on carbon, either through carbon taxes or emissions trading schemes.
On their own, carbon taxes are usually regressive, since lower-income households tend to spend a greater proportion of their income on emissions-heavy goods and services like transportation than higher-income households. To make them more progressive, policymakers usually try to redistribute the revenue generated from carbon taxes to low-income groups by lowering income taxes or offering rebates.
Carbon dioxide is one of several heat-trapping greenhouse gases (others include methane and water vapor) emitted as a result of human activities. The scientific consensus is that human-induced greenhouse gas emissions are the primary cause of global warming, and that carbon dioxide is the most important of the anthropogenic greenhouse gases. Worldwide, 27 billion tonnes of carbon dioxide are produced by human activity annually. The physical effect of CO2 in the atmosphere can be measured as a change in the Earth-atmosphere system's energy balance – the radiative forcing of CO2.
David Gordon Wilson first proposed a carbon tax in 1973. A series of treaties and other agreements have focused attention on climate change. In the 2015 Paris Agreement, countries committed to reducing their greenhouse gas emissions over the ensuing decades.
Different greenhouse gases have different physical properties: the global warming potential is an internationally accepted scale of equivalence for other greenhouse gases in units of tonnes of carbon dioxide equivalent.
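As a worked illustration of that equivalence scale: methane's 100-year global warming potential is commonly cited as roughly 28 (the exact value varies between assessment reports), so under a hypothetical tax of $50 per tonne of CO2-equivalent, emitting one tonne of methane would be charged as about 28 tonnes CO2e and cost roughly 28 × $50 = $1,400. Both the GWP value and the tax rate here are illustrative assumptions rather than figures from the text.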
Economists like to argue, about climate change as much as anything else. [...] But on the biggest issue of all, they nod in agreement, whatever their political persuasion. The best way to tackle climate change, they insist, is through a global carbon tax. — The Economist, 28 November 2015
A carbon tax is a form of pollution tax. Unlike classic command and control regulations, which explicitly limit or prohibit emissions by each individual polluter, a carbon tax aims to allow market forces to determine the most efficient way to reduce pollution. A carbon tax is an indirect tax—a tax on a transaction—as opposed to a direct tax, which taxes income. Carbon taxes are price instruments since they set a price rather than an emission limit. In addition to creating incentives for energy conservation, a carbon tax puts renewable energy such as wind, solar and geothermal on a more competitive footing.
In economic theory, pollution is considered a negative externality, a negative effect on a third party not directly involved in a transaction, and is a type of market failure. To confront the issue, the economist Arthur Pigou proposed taxing the goods (in this case hydrocarbon fuels) that were the source of the externality (CO2) so as to accurately reflect the cost of the goods to society, thereby internalizing the production costs. A tax on a negative externality is called a Pigovian tax, and it should equal the marginal external cost.
Within Pigou's framework, the changes involved are marginal, and the size of the externality is assumed to be small enough not to distort the economy. Climate change, however, is claimed to result in catastrophic (non-marginal) changes; "non-marginal" means that the impact could significantly reduce the growth rate in income and welfare. Policies designed to reduce carbon emissions could also have a non-marginal impact, but are asserted not to be catastrophic. The amount of resources that should be devoted to climate change mitigation remains controversial.
Carbon leakage happens when the regulation of emissions in one country or sector pushes those emissions to other places with less regulation. Leakage effects can be negative (i.e., increasing the effectiveness of reducing overall emissions) or positive (reducing the effectiveness of reducing overall emissions). Negative leakages, which are desirable, can be referred to as "spill-over".
According to one study, short-term leakage effects need to be judged against long-term effects. A policy that, for example, establishes carbon taxes only in developed countries might leak emissions to developing countries. However, a desirable negative leakage could occur because reduced demand for coal, oil, and gas in developed countries would lower their prices, which could allow developing countries to substitute oil or gas for coal, lowering emissions. In the long run, however, if the adoption of less polluting technologies is delayed, this substitution might have no long-term benefit.
Border adjustments, tariffs and bans
Policies have been suggested to address concerns over competitive losses experienced by countries that introduce a carbon tax versus countries that do not. Border tax adjustments, tariffs and trade bans have been proposed to encourage countries to introduce carbon taxes.
Border tax adjustments compensate for emissions attributable to imports from nations without a carbon price. An alternative would be trade bans or tariffs applied to such countries. Such approaches could be inadmissible at the World Trade Organization. Case law there has not provided specific rulings on climate-related taxes. The administrative aspects of border tax adjustments have been discussed.
Other types of taxes
Two related taxes are emissions taxes and energy taxes. An emissions tax on greenhouse gas emissions requires individual emitters to pay a fee, charge, or tax for every tonne of greenhouse gas, while an energy tax is applied to the fuels themselves.
In terms of climate change mitigation, a carbon tax is not a perfect substitute for an emissions tax. For example, a carbon tax encourages reduced fuel use, but it does not reward emissions-reduction measures such as carbon capture and storage.
Energy taxes increase the price of energy regardless of emissions. An ad valorem energy tax is levied according to the energy content of a fuel or the value of an energy product, which may or may not be consistent with the emitted greenhouse gas amounts and their respective global warming potentials. Studies indicate that to reduce emissions by a certain amount, ad valorem energy taxes would be more costly than carbon taxes. However, although greenhouse gas emissions are an externality, using energy services may result in other negative externalities, e.g., air pollution not covered by the carbon tax (such as ammonia or fine particles). A combined carbon-energy tax may therefore be better at reducing air pollution than a carbon tax alone.
Any of these taxes can be combined with a rebate, where the money collected by the tax is returned to qualifying parties, taxing heavy emitters and subsidizing those that emit less carbon.
Embodied carbon and architecture
Embodied carbon emissions, or upfront carbon emissions (UCE), are the result of creating and maintaining the materials that form a building. As of 2018, "Embodied carbon is responsible for 11% of global greenhouse gas emissions and 28% of global building sector emissions ... Embodied carbon will be responsible for almost half of total new construction emissions between now and 2050."
Steve Webb, co-founder of Webb Yates Engineers, has suggested that buildings with "high carbon frames should be taxed like cigarettes" to create a presumption in favour of timber, stone, and other zero-carbon architectural design techniques.
Other reduction strategies
Fuel taxes and carbon taxes encourage carpooling. Carpools offer the added benefits of helping to reduce commute time, reduce car accident rates, increase personal savings, and improve quality of life. Drawbacks include the cost of enforcement, increased police stops, and political resistance from increased government involvement in daily life.
Petroleum (gasoline, diesel, jet fuel) taxes
Many countries tax fuel directly; for example, the UK imposes a hydrocarbon oil duty directly on vehicle hydrocarbon oils, including petrol and diesel fuel. As a climate policy instrument, however, fuel taxes have several limitations:
- Possible delays of a decade or more as inefficient vehicles are replaced by newer models and the older models filter through the fleet.
- Political pressures that deter policymakers from increasing taxes.
- Limited relationship between consumer decisions on fuel economy and fuel prices. Other efforts, such as fuel efficiency standards, or changing income tax rules on taxable benefits, may be more effective.
- The historical use of fuel taxes as a source of general revenue, given fuel's low price elasticity, which allows higher rates without reducing fuel volumes. In these circumstances, the policy rationale may be unclear.
Vehicle fuel taxes may reduce the "rebound effect" that occurs when vehicle efficiency improves. Consumers may make additional journeys or purchase heavier and more powerful vehicles, offsetting the efficiency gains.
Social cost of carbon
A carbon tax based on the social cost of carbon (SCC) varies by fuel source. CO2 production per unit of mass or volume is multiplied by the SCC to compute the tax. Based on the mean peer-reviewed value ($43 per tonne of carbon, or $12 per tonne of CO2), the table below estimates the appropriate tax by fuel type:
| Fuel | CO2 emitted (mass of CO2 per fuel unit) | Tax (per fuel unit) | CO2 emitted (mass of CO2 per kWh of heat) | Tax per kWh of electricity |
|---|---|---|---|---|
| gasoline | 2.35 kg/L (19.6 lb/US gal) | $0.029/L ($0.11/US gal) | n/a | n/a |
| diesel | 2.67 kg/L (22.3 lb/US gal) | $0.032/L ($0.12/US gal) | n/a | n/a |
| avgas | 2.65 kg/L (22.1 lb/US gal) | $0.032/L ($0.12/US gal) | n/a | n/a |
| natural gas | 1.93 kg/m3 (0.1206 lb/cu ft) | $0.023/m3 ($0.00066/cu ft) | 181 g/kWh (117 lb/million BTU) | $0.0066 |
| coal (lignite) | 1.396 kg/kg (2,791 lb/short ton) | n/a | 333 g/kWh (215 lb/million BTU) | $0.0121 |
| coal (subbituminous) | 1.858 kg/kg (3,715 lb/short ton) | n/a | 330 g/kWh (213 lb/million BTU) | $0.0119 |
| coal (bituminous) | 2.466 kg/kg (4,931 lb/short ton) | n/a | 317 g/kWh (205 lb/million BTU) | $0.0115 |
| coal (anthracite) | 2.843 kg/kg (5,685 lb/short ton) | n/a | 351 g/kWh (227 lb/million BTU) | $0.0127 |
The tax per kWh of electricity depends on the thermal efficiency of the related power plant. The table follows the American Physical Society (APS) estimate of 3.0 Wh (10.3 BTU) of input per 1.0 Wh of electricity output, i.e. 33% efficiency. The APS noted that "future plants, especially those based on gas turbine systems, often will have higher efficiency, in some cases exceeding 50%." The EDF power plant in Bouchain, France has achieved the highest efficiency to date: 62%.
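To make the arithmetic behind the table explicit, the sketch below multiplies each fuel's emission factor by an assumed SCC of $12 per tonne of CO2 and, for electricity, scales by the APS heat-rate assumption. The function names are illustrative, and small differences from the table come from rounding of the underlying SCC.

```python
# A minimal sketch of the arithmetic behind the table above, assuming an SCC of
# $12 per tonne of CO2 and the APS heat-rate figure of 3.0 Wh of fuel input per
# 1.0 Wh of electricity (~33% efficiency).

SCC_USD_PER_TONNE_CO2 = 12.0   # mean peer-reviewed value cited above
HEAT_RATE = 3.0                # Wh of fuel heat per Wh of electricity (APS estimate)

def tax_per_fuel_unit(kg_co2_per_unit: float) -> float:
    """Tax per litre, m3 or kg of fuel: emission factor (kg CO2) x SCC ($/tonne) / 1000."""
    return kg_co2_per_unit / 1000.0 * SCC_USD_PER_TONNE_CO2

def tax_per_kwh_electricity(g_co2_per_kwh_heat: float) -> float:
    """Tax per kWh of electricity: thermal emission factor scaled by the heat rate."""
    tonnes_per_kwh_e = g_co2_per_kwh_heat * HEAT_RATE / 1e6
    return tonnes_per_kwh_e * SCC_USD_PER_TONNE_CO2

print(f"gasoline:            ${tax_per_fuel_unit(2.35):.3f}/L")        # ~$0.028/L (table rounds to $0.029)
print(f"lignite electricity: ${tax_per_kwh_electricity(333):.4f}/kWh")  # ~$0.0120 (table: $0.0121)
```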
Research shows that carbon taxes effectively reduce greenhouse gas emissions. Most economists assert that carbon taxes are the most efficient and effective way to curb climate change, with the least adverse economic effects.
One study found that Sweden's carbon tax successfully reduced carbon dioxide emissions from transport by 11%. A 2015 British Columbia study found that the taxes reduced greenhouse gas emissions by 5–15% while having negligible overall economic effects. A 2017 British Columbia study found that industries on the whole benefited from the tax and "small but statistically significant 0.74 percent annual increases in employment" but that carbon-intensive and trade-sensitive industries were adversely affected. A 2020 study of carbon taxes in wealthy democracies showed that carbon taxes had not limited economic growth.
A number of studies have found that in the absence of an increase in social benefits and tax credits, a carbon tax would hit poor households harder than rich households. Gilbert E. Metcalf disputed that carbon taxes would be regressive in the US.
Both energy and carbon taxes have been implemented in response to commitments under the United Nations Framework Convention on Climate Change. In most cases the tax is implemented in combination with exemptions.
A tax on vehicle emissions, announced by Finance Minister Pravin Gordhan, was proposed for South Africa and was to be implemented from 1 September 2015 on new motor vehicles. The tax was to apply at the time of sale and related to the amount of CO2 emitted by the vehicle: 75 South African Rand were to be added to the price for every gram of CO2 per kilometre the vehicle emits above 120 g/km. The tax applied to passenger cars first and eventually to commercial vehicles. Bakkies (pickup trucks) were to be taxed because of their use as passenger vehicles; this caused an uproar over fears it would hurt the industry.
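As a hedged illustration of how the levy scales with a vehicle's rated emissions (the 180 g/km vehicle below is hypothetical, not from the source):

```python
# Hypothetical illustration of the vehicle levy described above: R75 for every
# gram of rated CO2 per kilometre above the 120 g/km threshold.

def sa_vehicle_co2_levy(rated_g_per_km: float,
                        threshold_g_per_km: float = 120.0,
                        rate_rand_per_gram: float = 75.0) -> float:
    """One-off levy added to the price of a new passenger car at the time of sale."""
    return max(0.0, rated_g_per_km - threshold_g_per_km) * rate_rand_per_gram

print(sa_vehicle_co2_levy(180))  # (180 - 120) * 75 = R4,500.0
```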
David Powels of the National Association of Automobile Manufacturers of South Africa (NAAMSA) opposed this taxation of light commercial vehicles. The tax could increase the cost of new vehicles by 2.5% and decrease sales; Powels also questioned the ability to accurately predict CO2 emissions based on engine capacity. NAAMSA acknowledged the ability of carbon taxes to change consumer behaviour for the betterment of the environment, but argued that this tax is not transparent enough because the taxation occurs at the time of automobile production. Powels said the tax is discriminatory because it targets new vehicles, and that the government should instead focus on introducing "green fuel" to South Africa.
Carbon tax is payable in foreign currency at the rate of US$0.03 (3 cents) per litre of petroleum and diesel products or 5% of the cost, insurance, and freight value (as defined in the Customs and Excise Act [Chapter 23:02]), whichever is greater.
The Chinese Ministry of Finance proposed to introduce a carbon tax in 2012 or 2013. The tax might affect the internal market, as well as many other laws and regulations. Given the size of the Chinese economy, such a tax could also contribute importantly to the mitigation of climate change. In 2017, China announced an emissions trading scheme.
On 1 July 2010, India introduced a carbon tax of 50 rupees per tonne ($1.07/t) of coal both produced in and imported into India. In 2014, the tax was increased to ₹100 per tonne ($1.60/t at an exchange rate of ₹60.5 per US dollar). Coal powers more than half of the country's electricity generation.
India's total coal production was estimated to reach 571.87 million tons in the year ending March 2010, and the country was expected to import around 100 million tons. The carbon tax was expected to raise ₹25 billion ($535 million) for the financial year 2010–2011. The clean energy tax was promised to finance a National Clean Energy Fund (NCEF). Industry bodies did not support the levy.
Under Narendra Modi, the carbon tax was increased from ₹100 per tonne to ₹200 per tonne in the 2015–16 Budget. It later rose to ₹400 per tonne.
In December 2009, nine industry groupings opposed a carbon tax at the opening day of the COP-15 Copenhagen climate conference stating, "Japan should not consider a carbon tax as it would damage the economy which is already among the world's most energy-efficient." The industry groupings represented the oil, cement, paper, chemical, gas, electric power, auto manufacturing and electronics, and information technology sectors. The sectors stated that "the government has neither studied nor explained thoroughly enough why such a carbon tax is needed, how effective and fair it is and how the payments are to be used."
In 2005, an environmental tax proposed by Japanese authorities was delayed due to major opposition from the Petroleum Association of Japan (PAJ), other industries, and consumers.
On 20 February 2017, Singapore proposed a carbon tax. The proposal was refined to tax large emitters at S$5 per tonne of greenhouse gas emissions. The Carbon Pricing Act (CPA) was passed on 20 March 2018 and came into operation on 1 January 2019.
In October 2009, vice finance minister Chang Sheng-ho announced that Taiwan was planning to adopt a carbon tax in 2011. However, Premier Wu Den-yih and legislators opposed the tax, stating that carbon taxes would increase public suffering from the recession and that the government should not levy new taxes until Taiwan's economy had recovered. Nevertheless, the Chung-Hua Institution for Economic Research (CIER), the think-tank commissioned by the government to advise on its plan to overhaul the nation's taxes, had recommended a levy of NT$2,000 (US$61.8, £37.6) on each tonne of CO2 emissions. CIER estimated that Taiwan could raise NT$164.7bn (US$5.1bn, £3.1bn) from the energy tax and a further NT$239bn (US$7.3bn, £4.4bn) from the carbon levy on an annual basis by 2021. The government planned to subsidize low-income families and public transportation with the revenues.
On 1 July 2012, the Australian Federal government introduced a carbon price of AUD$23 per tonne on selected fossil fuels consumed by major industrial emitters and government bodies such as councils. To offset the tax, the government reduced income tax (by increasing the tax-free threshold) and increased pensions and welfare payments slightly, while introducing compensation for some affected industries. On 17 July 2014, a report by the Australian National University estimated that the Australian scheme had cut carbon emissions by as much as 17 million tonnes. The tax notably helped reduce pollution from the electricity sector.
On 17 July 2014, the Abbott Government passed repeal legislation through the Senate, and Australia became the first nation to abolish a carbon tax. In its place, the government set up the Emission Reduction Fund.
In 2005, the Fifth Labour Government proposed a carbon tax to meet obligations under the Kyoto Protocol. The proposal would have set an emissions price of NZ$15 per tonne of CO2-equivalent. The planned tax was scheduled to take effect from April 2007 and apply across most economic sectors though with an exemption for methane emissions from farming and provisions for special exemptions from carbon-intensive businesses if they adopted best-practice standards.
After the 2005 election, some of the minor parties supporting the Fifth Labour Government (NZ First and United Future) opposed the proposed tax, and it was abandoned in December 2005. In 2008, the New Zealand Emissions Trading Scheme was enacted via the Climate Change Response (Emissions Trading) Amendment Act 2008.
In Europe, many countries have imposed energy taxes or energy taxes based partly on carbon content. These include Denmark, Finland, Germany, Ireland, Italy, the Netherlands, Norway, Slovenia, Sweden, Switzerland, and the UK. None of these countries has been able to introduce a uniform carbon tax for fuels in all sectors.
During the 1990s, a carbon/energy tax was proposed at the EU level but failed due to industrial lobbying. In 2010, the European Commission considered implementing a pan-European minimum tax on pollution permits purchased under the European Union Greenhouse Gas Emissions Trading Scheme (EU ETS), in which the proposed new tax would be calculated in terms of carbon content. The suggested rate was €4 to €30 per tonne of CO2.
As of 2002, the standard carbon tax rate in place since 1996 amounted to 100 DKK per tonne of CO2, equivalent to approximately €13 or US$18. The rate varies from 402 DKK per tonne of oil to 5.6 DKK per tonne of natural gas and 0 for non-combustible renewables. The rate for electricity is 1164 DKK per tonne, or 10 øre per kWh, equivalent to €0.013 or US$0.017 per kWh. The tax applies to all energy users. Industrial companies can be taxed differently according to the process the energy is used for, and whether or not the company has entered into a voluntary agreement to apply energy efficiency measures.
In 1992, Denmark introduced a carbon tax of about $14 per ton of CO2 for businesses and $7 for households. However, Denmark offers a tax refund for energy-efficient changes. Most of the money collected would be put into research for alternative energy resources.
Finland was the first country in the 1990s to introduce a CO2 tax, initially with exemptions for specific fuels or sectors. Energy taxation was changed many times; these changes were related to the opening of the Nordic electricity market. Other Nordic countries exempted energy-intensive industries, and Finnish industries felt disadvantaged by this. Finland placed a border tax on imported electricity, but this was found to be out of line with EU single market legislation. Changes were then made to the carbon tax to partially exclude energy-intensive firms. This had the effect of increasing the costs of reducing CO2 emissions.
Vourc'h and Jimenez proposed that arguments based on competitive losses be viewed with caution. For example, they suggested that carbon tax revenues could be used to reduce labour taxes, which would favour non-energy-intensive industries.
In 2009, France detailed a carbon tax with a levy on oil, gas, and coal consumption by households and businesses that was supposed to come into effect on 1 January 2010. The tax would have raised the cost of a litre of unleaded fuel by about four euro cents (25 US cents per gallon). The total estimated income from the carbon tax would have been between €3–4.5 billion annually, with 55 percent from households and 45 percent from businesses. The tax would not have applied to electricity, which in France comes mostly from nuclear power.
On 30 December 2009, the bill was blocked by the French Constitutional Council, which said it included too many exceptions. The exemptions, which covered agriculture, fishing, trucking, and farming, would have made the tax unequal and inefficient. French President Nicolas Sarkozy, although he vowed to "lead the fight to save the human race from global warming", was forced to back down after mass social protests led to strikes. He wanted support from the rest of the European Union before proceeding.
In 2014, a carbon tax was implemented. Prime Minister Jean-Marc Ayrault announced the new Climate Energy Contribution (CEC) on 21 September 2013. The tax would apply at a rate of €7/tonne CO2 in 2014 and €14.50 in 2015, rising to €22 in 2016. As of 2018, the carbon tax stood at €44.60/tonne and was due to increase every year to reach €65.40/tonne in 2020 and €86.20/tonne in 2022.
After weeks of protests by the "Gilets Jaunes" (yellow vests) against the rise of gas prices, French President Emmanuel Macron announced on 4 December 2018 that the tax would not be increased in 2019 as planned.
The German ecological tax reform was adopted in 1999, and the law was amended in 2000 and in 2003. It raised taxes on fuel and fossil fuels and laid the foundation for an energy tax. In December 2019, the German Government agreed on a carbon tax of 25 Euros per tonne of CO2 on oil and gas companies. The law will come into effect in January 2021, and the tax will rise to 55 Euros per tonne by 2025.
The Netherlands initiated a carbon tax in 1990. However, in 1992, it was replaced with a 50/50 carbon/energy tax called the Environmental Tax on Fuels. The taxes are assessed partly on carbon content and partly on energy content. The charge was transformed into a tax and became part of general tax revenues. The general fuel tax is collected on all hydrocarbon fuels. Fuels used as raw materials are not subject to the tax.
In 1996, the Regulatory Tax on Energy, another 50/50 carbon/energy tax, was implemented. The environmental tax and the regulatory tax are 5.16 Dutch guilders (NLG, ~$3.13) per tonne of CO2 and 27.00 NLG (~$16.40) per tonne of CO2, respectively. Under the general fuel tax, electricity is not taxed, though fuels used to produce electricity are taxable. Energy-intensive industries initially benefited from preferential rates under this tax, but the benefit was cancelled in January 1997. Since 1997, nuclear power has been taxed under the general fuel tax at the rate of NLG 31.95 per gram of uranium-235.
In 2007, the Netherlands introduced a Waste Fund that is funded by a carbon-based packaging tax. This tax was used both to finance government spending and to fund activities helping to reach the goal of recycling 65% of used packaging by 2012. The organization Nedvang (Nederland van afval naar grondstof, "the Netherlands from waste to resource") was set up in 2005. It supports producers and importers of packaged goods. The underlying decree was signed in 2005 and states that producers and importers of packaged goods are responsible for the collection and recycling of related waste and that at least 65% of that waste has to be recycled. Producers and importers can choose to reach the goals on an individual basis or by joining an organization like Nedvang.
The Carbon-Based Tax on Packaging was found to be ineffective by the Ministry of Infrastructure and the Environment. It was therefore abolished. Producer responsibility activities for packaging are now financed based on legally binding contracts.
Norway introduced a CO2 tax on fuels in 1991. The tax started at a rate of US$51 per tonne of CO2 on gasoline, with an average tax of US$21 per tonne. The tax applied to diesel, mineral oil, and oil and gas used in North Sea extraction activities. The International Energy Agency (IEA) stated in 2001 that "since 1991 a carbon dioxide tax has applied in addition to excise taxes on fuel." The rate is among the highest in the OECD, and the tax applies to offshore oil and gas production. IEA estimates for revenue generated by the tax in 2004 were 7,808 million NOK (about US$1.3 billion in 2010 dollars).
According to IEA's 2005 Review, Norway's CO2 tax is its most important climate policy instrument, and covers about 64% of Norwegian CO2 emissions and 52% of total greenhouse gas emissions. Some industry sectors were exempted to preserve their competitive position. Various studies in the 1990s, and an economic analysis by Statistics Norway, estimated the effect to be a reduction of 2.5–11% of Norwegian emissions compared to (untaxed) business-as-usual. However, Norway's per capita emissions still rose by 15% as of 2008.
In an attempt to reduce CO2 emissions by a larger amount, Norway implemented an emissions trading scheme in 2005 and joined the European Union Emissions Trading Scheme (EU ETS) in 2008. As of 2013, roughly 55% of CO2 emissions in Norway were taxed, and exempt emissions are included in the EU ETS. Certain CO2 taxes are applied to emissions that result from petroleum activities on the continental shelf. This tax is charged per liter of oil and natural gas liquids produced, as well as per standard cubic meter of gas flared or otherwise emitted. However, this carbon tax is a tax-deductible operating cost for petroleum production. In 2013, carbon tax rates were doubled to 0.96 NOK per liter/standard cubic meter of mineral oil and natural gas. As of 2016, the rate was increased to 1.02 NOK. The Norwegian Ministry of the Environment described CO2 taxes as the most important tool for reducing emissions.
Republic of Ireland
In 2004, following a policy review, the Irish Government rejected a carbon tax option. In 2007 a Fianna Fáil-Green Party coalition government was formed, and promised to reconsider the matter. In 2010 the country's carbon tax was introduced at €15 per tonne of CO2 emissions (approx. US$20 per tonne).
The tax applies to kerosene, marked gas oil, liquid petroleum gas, fuel oil, and natural gas. The tax does not apply to electricity because the cost of electricity is already included in pricing under the Single Electricity Market (SEM). Similarly, natural gas users are exempt if they can prove they are using the gas to "generate electricity, for chemical reduction, or for electrolytic or metallurgical processes". Partial relief is granted for natural gas covered by a greenhouse gas emissions permit issued by the Environmental Protection Agency; such gas is taxed at the minimum rate specified in the EU Energy Tax Directive, which is €0.54 per megawatt-hour at gross calorific value. Pure biofuels are also exempt. The Economic and Social Research Institute (ESRI) estimated costs between €2 and €3 a week per household; a survey from the Central Statistics Office reports that Ireland's average disposable income was almost €48,000 in 2007.
Activist group Active Retirement Ireland proposed a pensioner's allowance of €4 per week for the 30 weeks currently covered by the fuel allowance and that home heating oil be covered under the Household Benefit Package.
The NGO Irish Rural Link noted that according to ESRI a carbon tax would weigh more heavily on rural households. They claim that other countries have shown that carbon taxation succeeds only if it is part of a comprehensive package that includes reducing other taxes.
Carbon tax was introduced in Ireland in the 2010 budget by the Green Party/Fianna Fáil coalition government at a rate of €15/tonne CO2. It was applied to motor gasoline and diesel and to home heating oil (diesel).
In 2011, the coalition government of Fine Gael and Labour raised the tax to €20/tonne. Farmers were granted tax relief.
In January 1991, Sweden enacted a CO2 tax of SEK 250 per 1000 kg ($40 at the time, or EUR 27 at current rates) on the use of oil, coal, natural gas, liquefied petroleum gas, petrol, and aviation fuel used in domestic travel. Industrial users paid half the rate (between 1993 and 1997, 25%), and preferred industries such as commercial horticulture, mining, manufacturing, and pulp and paper were exempted entirely. As a result, the tax only covers around 40% of Sweden's carbon emissions. The rate was raised to SEK 365 ($60) in 1997 and SEK 930 in 2007.
According to a 2019 study, the tax was instrumental in substantially reducing Sweden's carbon dioxide emissions. The tax is also credited by Swedish Society for Nature Conservation climate change expert Emma Lindberg and University of Lund Professor Thomas Johansson with spurring a significant move from hydrocarbon fuels to biomass. Lindberg said, "It was the one major reason that steered society towards climate-friendly solutions. It made polluting more expensive and focused people on finding energy-efficient solutions."
In January 2008, Switzerland implemented a CO2 incentive tax on hydrocarbon fuels used for energy. Gasoline and diesel fuels are not affected. It is an incentive tax because it is designed to promote the economic use of hydrocarbon fuels. The tax amounts to CHF 12 per tonne of CO2, the equivalent of CHF 0.03 per litre of heating oil (US$0.108 per gallon) and CHF 0.025 per m3 of natural gas (US$0.024 per m3). Switzerland prefers to rely on voluntary actions and measures to reduce emissions, and the law mandated a CO2 tax only if voluntary measures proved to be insufficient. In 2005, the federal government decided that additional measures were needed to meet Kyoto Protocol commitments of an 8% reduction in emissions below 1990 levels between 2008 and 2012. In 2007, the CO2 tax was approved by the Swiss Federal Council, coming into effect in 2008. In 2010, the highest tax rate was to be CHF 36 per tonne of CO2 (US$34.20 per tonne).
Companies are allowed to escape the tax by participating in emissions trading where they voluntarily commit to legally binding reduction targets. Emission allowances are given to companies for free, and each year emission allowances equal to the amount of CO2 emitted must be surrendered by the company. Companies are allowed to sell or trade excess permits. However, a company that fails to surrender sufficient allowances must pay the tax retroactively for each tonne emitted since the exemption was granted. As of 2009 some 400 companies operated under this program. In 2008 and 2009 the companies returned enough credits to the Swiss government to cover their CO2 emissions. The companies emitted about 2.6 million tonnes, well below the limit of 3.1 million tonnes. Switzerland issued so many allowances that few emissions permits were traded.
The tax is revenue-neutral because revenues are redistributed to companies and to the Swiss population. For example, if the population bears 60% of the tax burden, it receives 60% of the rebate. Revenues are redistributed to all payers, except those who exempt themselves from the tax through the cap-and-trade program. The revenue is given to companies in proportion to payroll, while tax revenues paid by the population are redistributed equally to all residents. In June 2009, the Swiss Parliament allocated about one-third of the carbon tax revenue to a 10-year construction initiative. This program promotes building renovations, renewable energies, waste heat reuse, and building engineering.
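A minimal sketch of that redistribution logic, assuming each group's rebate share mirrors its share of the tax burden; the company payrolls, the 60% household share, and the population count below are illustrative, not from the source.

```python
# Sketch of the redistribution rule described above: the household share is
# returned per capita, the company share in proportion to payroll. All inputs
# below are made up for illustration.

def redistribute(total_revenue: float, household_share: float,
                 payrolls: dict, n_residents: int):
    household_pot = total_revenue * household_share   # returned equally to residents
    company_pot = total_revenue - household_pot       # returned in proportion to payroll
    total_payroll = sum(payrolls.values())
    company_rebates = {firm: company_pot * p / total_payroll for firm, p in payrolls.items()}
    return company_rebates, household_pot / n_residents

rebates, per_resident = redistribute(
    total_revenue=220e6,                       # CHF, roughly the 2008 revenue mentioned below
    household_share=0.60,                      # e.g. households bore 60% of the burden
    payrolls={"firm_a": 50e6, "firm_b": 150e6},
    n_residents=7_700_000,
)
print(rebates, round(per_resident, 2))
```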
Tax revenues from 2008–2010 were distributed in 2010. In 2008, the tax raised around CHF 220 million (US$209 million) in revenue. As of 16 June 2010, a total of around CHF 360 million (US$342 million) had become available for distribution. The 2010 revenue was about CHF 630 million (US$598 million), of which CHF 200 million (US$190 million) was to be allocated to the building program and the remaining CHF 430 million (US$409 million) redistributed to the population. The IEA commended the design of Switzerland's tax and the recycling of tax revenues as "sound fiscal practice".
Since 2005, transport fuels in Switzerland have been subject to the Climate Cent Initiative surcharge of CHF 0.015 per litre on gasoline and diesel (US$0.038 per gallon). This surcharge was to be supplemented with a CO2 tax on transport fuels if emissions reductions proved unsatisfactory. In its 2007 review, the IEA recommended that Switzerland implement a CO2 tax on transport fuels or increase the Climate Cent surcharge to better balance the costs of meeting emissions reduction targets across sectors.
The United Kingdom currently does not have a carbon tax. Instead, various fuel taxes and energy taxes have been implemented over the years, such as the fuel duty escalator (1993) and the Climate Change Levy (2001). The UK was also a member of the European Union Emission Trading Scheme until it left the EU. It has since implemented its own carbon trading scheme.
In 1997, Costa Rica imposed a 3.5 percent carbon tax on hydrocarbon fuels. A portion of the proceeds go to the "Payment for Environmental Services" (PSA) program which gives incentives to property owners to practice sustainable development and forest conservation. Approximately 11% of Costa Rica's national territory is protected by the plan. The program now pays out roughly $15 million a year to around 8,000 property owners.
In the 2008 Canadian federal election, a carbon tax proposed by Liberal Party leader Stéphane Dion, known as the Green Shift, became a central issue. It would have been revenue-neutral, balancing increased taxation on carbon with rebates. However, it proved to be unpopular and contributed to the Liberal Party's defeat, earning the lowest vote share since Confederation. The Conservative party won the election by promising to "develop and implement a North American-wide cap-and-trade system for greenhouse gases and air pollution, with implementation to occur between 2012 and 2015".
In 2018, Canada enacted a revenue-neutral carbon levy starting in 2019, fulfilling Prime Minister Justin Trudeau's campaign pledge. The Greenhouse Gas Pollution Pricing Act applies only to provinces without adequate carbon pricing of their own.
Quebec became the first province to introduce a carbon tax. The tax was to be imposed on energy producers starting 1 October 2007, with revenue collected used for energy-efficiency programs. The tax rate for gasoline is CDN$0.008 per litre, or about $3.50 per tonne of CO2.
On 19 February 2008, British Columbia announced its intention to implement a carbon tax of $10 per tonne of Carbon dioxide equivalent (CO2e) emissions (2.41 cents per litre on gasoline) beginning 1 July 2008, the first North American jurisdiction to implement such a tax. The tax was to increase until 2012, reaching a final price of $30 per tonne (7.2 cents per litre at the pumps). The tax was to be revenue neutral by reducing corporate and income taxes accordingly. The government was to reduce other taxes by $481 million over three years. In January 2010, the carbon tax was applied to biodiesel. Before the tax went into effect, the government of British Columbia sent out "rebate cheques" from expected revenues to all residents. In January 2013, the tax was collecting about $1 billion/year, which was rebated.
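The per-litre figures quoted above follow directly from the tax rate and gasoline's emission factor. A sketch of the conversion, assuming roughly 2.41 kg CO2e per litre of gasoline (slightly above the pure-CO2 factor in the fuel table above because CO2e also counts other greenhouse gases); the function name is illustrative.

```python
# Sketch: convert a carbon price in CAD per tonne of CO2e into a pump surcharge
# in cents per litre of gasoline, assuming ~2.41 kg CO2e per litre.

def pump_surcharge_cents_per_litre(tax_cad_per_tonne: float,
                                   kg_co2e_per_litre: float = 2.41) -> float:
    """Carbon price ($/tonne) x emission factor (kg/L) / 1000, expressed in cents."""
    return tax_cad_per_tonne * kg_co2e_per_litre / 1000.0 * 100.0

print(pump_surcharge_cents_per_litre(10))  # ~2.4 cents/L, the 2008 starting rate
print(pump_surcharge_cents_per_litre(30))  # ~7.2 cents/L, the 2012 rate
```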
The tax was based on the following principles:
- All revenue is recycled through tax reductions – the government was required to demonstrate how all carbon tax revenue would be returned to taxpayers through tax reductions.
- The tax rate increased gradually – to give individuals and businesses time to make adjustments, and to respect decisions made prior to the announcement of the tax.
- Protection of low-income individuals and families – a refundable Low Income Climate Action Tax Credit helps offset the tax paid by low-income individuals and families.
- Broad base – Virtually all emissions from fuel combustion are taxed, with no exemptions except those required for integration with other climate actions.
- The tax would not, on its own, meet B.C.'s emission-reduction targets.
Many Canadians concluded that the carbon tax generally benefitted the British Columbian economy, in large part because its revenue neutral feature reduced personal income taxes. However some industries complained loudly that the tax had harmed them, notably cement manufacturers and farmers. Nevertheless, the tax attracted attention in the United States and elsewhere from those seeking an economically efficient way of reducing the emission of greenhouse gases without hurting economic growth.
In July 2007, Alberta enacted the Specified Gas Emitters Regulation, Alta. Reg. 139/2007 (SGER). This tax exacts a $15/tonne contribution from companies that emit more than 100,000 tonnes of greenhouse gas annually and that neither reduce their CO2 emissions per barrel by 12 percent nor buy an offset. In January 2016, the contribution required from large emitters increased to $20/tonne. The tax fell heavily on oil companies and coal-fired electricity plants. It was intended to encourage companies to lower emissions while fostering new technology. The plan only covered the largest emitters, who produced 70% of Alberta's emissions. Critics charged that the smallest energy producers are often the most casual about emissions and pollution. The carbon tax is currently $20 per tonne. Because Alberta's economy is dependent on oil extraction, the majority of Albertans opposed a nationwide carbon tax. Alberta also opposed a national cap-and-trade system. The local tax retains the proceeds within Alberta.
On 23 November 2015, the Alberta government announced a carbon tax scheme similar to British Columbia's in that it would apply to the entire economy. All businesses and residents paid tax based upon equivalent emissions, including the burning of wood and biofuels. The tax came into force in 2017 at $20 per tonne.
On 4 June 2019 a carbon tax repeal bill was enacted.
A national carbon tax has been repeatedly proposed, but never enacted. On 23 July 2018 Representative Carlos Curbelo (R-FL) introduced H.R. 6463, the "Modernizing America with Rebuilding to Kick-start the Economy of the Twenty-first Century with a Historic Infrastructure-Centered Expansion (MARKET CHOICE) Act." The Citizens' Climate Lobby (CCL) attempted to create support for a tax. Americans for Carbon Dividends supports the Baker-Shultz Carbon Dividends Plan, and is supported by companies including Microsoft, First Solar, American Wind Energy Association, Exxon Mobil, BP, Royal Dutch Shell, and Total SA.
Internal price on carbon
Many corporations calculate an "internal price on carbon". Companies use this internal price to factor the risk of future carbon pricing into their investment decisions. Companies usually assume a higher internal price when the company a) emits large amounts of CO2, and b) projects further into the future. Oil companies have assets (factories, refineries) with a long lifespan that can be affected by future energy policies.
(A table here listed, for several companies, the internal carbon price used (US$) and the CO2 emitted in 2013 in million tonnes.)
In November 2006, voters in Boulder, Colorado passed what is said to be the first municipal carbon tax. It covers electricity consumption with deductions for using electricity from renewable sources (primarily Xcel's WindSource program). The goal is to reduce their emissions by 7% below 1990 levels by 2012. Tax revenues are collected by Xcel Energy and are directed to the city's Office of Environmental Affairs to fund programs to reduce emissions.
Boulder's Climate Action Plan (CAP) tax was expected to raise $1.6 million in 2010. The tax was increased to the maximum allowable rate by voters in 2009 to meet CAP goals. As of 2017, the tax was set at $0.0049/kWh for residential users (avg. $21 per year), $0.0009/kWh for commercial (avg. $94 per year), and $0.0003/kWh for industrial (avg. $9,600 per year). Tax revenues were expected to decrease over time as conservation and renewable energy expand. The tax was renewed by voters on 6 November 2012.
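A back-of-the-envelope check relating the per-kWh rates to the quoted average annual amounts; the implied consumption figures below are derived from those averages, not taken from the source.

```python
# Sketch: how much electricity use is implied by the average annual CAP tax bills
# quoted above, given the 2017 per-kWh rates.

rates_per_kwh = {"residential": 0.0049, "commercial": 0.0009, "industrial": 0.0003}  # $/kWh
avg_annual_tax = {"residential": 21, "commercial": 94, "industrial": 9600}           # $/year

for sector, rate in rates_per_kwh.items():
    implied_kwh = avg_annual_tax[sector] / rate
    print(f"{sector}: ~{implied_kwh:,.0f} kWh/year implied by the average bill")
# residential ~4,286 kWh/yr; commercial ~104,444 kWh/yr; industrial ~32,000,000 kWh/yr
```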
As of 2015, the Boulder carbon tax was estimated to reduce carbon output by over 100,000 tons per year and provided $1.8 million in revenue. This revenue is invested in bike lanes, energy-efficient solutions, rebates, and community programs. The surcharge has been generally well-received.
In May 2008, the Bay Area Air Quality Management District, which covers nine counties in the San Francisco Bay Area, passed a carbon tax on businesses of 4.4 cents per ton of CO2.
In 2006, the state of California passed AB-32 (Global Warming Solutions Act of 2006), which requires California to reduce greenhouse gas emissions. To implement AB-32, the California Air Resources Board proposed a carbon tax but this was not enacted.
In May 2010, Montgomery County, Maryland passed the nation's first county-level carbon tax. The legislation required payments of $5 per ton of CO2 emitted from any stationary source emitting more than a million tons of carbon dioxide per year. The only source of emissions fitting the criteria is an 850 megawatt coal-fired power plant then owned by Mirant Corporation. The tax was expected to raise between $10 million and $15 million for the county, which faced a nearly $1 billion budget gap. The law directed half of tax revenues toward low interest loans for county residents to invest in residential energy efficiency. The County's energy supplier buys its energy at auction, requiring the plant owner to sell its energy at market value, preventing any increase in energy costs. In June 2010, Mirant sued the county to stop the tax. In June 2011 the Federal Court of Appeals ruled that the tax was a fee imposed "for regulatory or punitive purposes" rather than a tax, and therefore could be challenged in court. The County Council repealed the fee in July 2012.
Economists and climate scientists
Greg Mankiw, head of the Council of Economic Advisers under the George W. Bush administration, economic adviser to Mitt Romney for his 2012 presidential campaign and economics professor at Harvard University since 1985, has been advocating for increased carbon/oil taxation since at least 1999. In 2006, he founded the Pigou Club of economists advocating for Pigovian taxes, a carbon tax among them. The club's manifesto states "[h]igher gasoline taxes, perhaps as part of a broader carbon tax, would be the most direct and least invasive policy to address environmental concerns."
In 1979, economist Milton Friedman expressed support for the idea of a carbon tax in an interview on The Phil Donahue Show, saying "...the best way to [deal with pollution] is to impose a tax on the cost of the pollutants emitted by a car and make an incentive for car manufacturers and for consumers to keep down the amount of pollution."
In 2001, environmental scientist Lester Brown, founder of the Worldwatch Institute and founder and president of the Earth Policy Institute, outlined a detailed "tax shifting" structure that would not lead to an overall higher tax level: "It means reducing income taxes and offsetting them with taxes on environmentally destructive activities such as carbon emissions, the generation of toxic waste, the use of virgin raw materials, the use of non-refillable beverage containers, mercury emissions, the generation of garbage, the use of pesticides, and the use of throwaway products... activities that should be discouraged by taxing."
Former US Federal Reserve chairman Paul Volcker, supporting a carbon tax, suggested on 6 February 2007 that "it would be wiser to impose a tax on oil, for example, than wait for the market to drive up oil prices. A tax would give the government 'some leverage that you can use for other things.'"
Citizens' Climate Lobby advocates for carbon tax legislation (specifically a progressive fee and dividend model). The organization has about 165 chapters in the United States, Canada, and several other countries including Bangladesh and Sweden.
Monica Prasad, a Northwestern University sociologist, wrote about Denmark's carbon tax in The New York Times in 2008. Prasad argued that a critical component for Denmark's success was that the revenues subsidized firms to switch to renewable energy.
According to economist Laura D'Andrea Tyson, "The beauty of a carbon tax is its market-based simplicity. Economists since Adam Smith have insisted that prices are by far the most efficient way to guide the decisions of producers and consumers. Carbon emissions have an 'unpriced' societal cost in terms of their deleterious effects on the earth's climate. A tax on carbon would reflect these costs and send a powerful price signal that would discourage carbon emissions."
The American Enterprise Institute, environmental economist Jack Pezzey, economist Jeffrey Sachs (director of the Earth Institute at Columbia University), and Yale economist William Nordhaus support carbon taxes.
In January 2019, economists published a statement in the Wall Street Journal calling for a carbon tax, describing it as "the most cost-effective lever to reduce carbon emissions at the scale and speed that is necessary." By February 2019, the statement had been signed by more than 3,000 U.S. economists, including 27 Nobel Laureates.
- Former US Vice President Al Gore backed a carbon tax in his book, Earth in the Balance.
- Former Representative Bob Inglis (R-South Carolina) heads the Energy and Enterprise Initiative at George Mason University and supports a carbon tax.
- Carl Pope, former executive director of the Sierra Club, supports a carbon tax over cap-and-trade because employers will know exactly what their emissions cost, and because cap-and-trade (with grandfathered permits) rewards those who have the highest emissions.
- In 2008, Rex Tillerson, then CEO of ExxonMobil, said a carbon tax is "a more direct, more transparent and more effective approach" than a cap-and-trade program, which he said "inevitably introduces unnecessary cost and complexity." He said that he hoped that the revenues from a carbon tax would be used to lower other taxes.
- In 2016 in Washington state, the Sierra Club, the Washington Environmental Council, Climate Solutions, and the Alliance for Jobs and Clean Energy opposed a proposed tax of $25 per tonne on fossil fuels arguing that the enactment would undermine state finances. In 2018, they instead supported a $15 per tonne tax in that state, along with many other environmental groups, in part because the proceeds would fund projects that would steer the state away from fossil fuels.
- In 2015, BG Group, BP, Eni, Royal Dutch Shell, Statoil, and Total sent an open letter to the UNFCCC calling for carbon pricing and for eventually linking it into a global system.
- A 2019 International Monetary Fund report stated that "a global tax of $75 per ton by the year 2030 could limit the planet's warming to 2 degrees Celsius."
- CEOs supporting carbon taxes include Fred Smith (FedEx), James Owens (Caterpillar), Paul Anderson (Duke Energy), and Elon Musk (Tesla and SpaceX).
- Companies that support carbon taxes include Unilever and Nestlé.
As of 2015, developing countries were responsible for 63% of carbon emissions. Various barriers stand in the way of developing countries adopting plans to slow carbon emissions, including a carbon tax. Developing countries often prioritize economic growth over lower emissions. Nuclear power is under development in multiple countries as an emissions-free energy source.
Cap and trade is another approach. Emission levels are limited and emission permits traded among emitters. The permits can be issued via government auctions or offered without charge based on existing emissions (grandfathering). Auctions raise revenues that can be used to reduce other taxes or to fund government programs. Variations include setting price floors and/or price ceilings for permits. A carbon tax can be combined with trading.
A cap with grandfathered permits can have an efficiency advantage since it applies to all industries. Cap and trade provides an equal incentive for all producers at the margin to reduce their emissions. This is an advantage over a tax that exempts or has reduced rates for certain sectors.
Both carbon taxes and trading systems aim to reduce emissions by creating a price for emitting CO2. In the absence of uncertainty, both systems will result in the efficient market quantity and price of CO2. When the environmental damage, and therefore the appropriate tax, of each unit of CO2 cannot be accurately calculated, a permit system may be more advantageous. In the case of uncertainty regarding the costs of CO2 abatement for firms, a tax is preferable.
Permit systems regulate total emissions. In practice the limit has often been set so high that permit prices are not significant. In the first phase of the European Union Emissions Trading System, firms reduced their emissions to their allotted quantity without purchasing any additional permits. This drove permit prices to nearly zero two years later, crashing the system and requiring reforms that would eventually appear in EU ETS Phase 3.
The distinction between carbon taxes and permit systems can become blurred when hybrid systems are allowed. A hybrid sets limits on price movements, potentially softening the cap. When the price gets too high, the issuing authority issues additional permits at that price. A price floor may be breached when emissions are so low that no one needs to buy a permit. Economist Gilbert Metcalf has proposed such a system, the Emissions Assurance Mechanism, and the idea, in principle, has been adopted by the Climate Leadership Council.
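A hedged sketch of that hybrid (price-collar) rule, with made-up numbers: the regulator sells extra permits at the ceiling, so the price cannot exceed it, while the floor only holds as long as someone still needs to buy a permit.

```python
# Simplified price-collar logic for a hybrid cap-and-trade system. The $5 floor
# and $50 ceiling are illustrative, not taken from any real scheme.

def permit_price(market_price: float, floor: float, ceiling: float,
                 any_buyers: bool = True) -> float:
    capped = min(market_price, ceiling)      # regulator issues additional permits at the ceiling
    return max(capped, floor) if any_buyers else capped

print(permit_price(80, floor=5, ceiling=50))                   # 50: ceiling binds
print(permit_price(2, floor=5, ceiling=50))                    # 5: floor binds
print(permit_price(0, floor=5, ceiling=50, any_buyers=False))  # 0: floor breached, no buyers
```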
A 2018 survey of leading economists found that 58% of the surveyed economists agreed with the assertion, "Carbon taxes are a better way to implement climate policy than cap-and-trade," 31% stated that they had no opinion or that it was uncertain, but none of the respondents disagreed.
In a review study, Fisher et al. concluded that the choice between an international quota (cap) system and an international carbon tax remained ambiguous. Lu et al. (2012) compared a carbon tax, emissions trading, and command-and-control regulation at the industry level, concluding that market-based mechanisms would perform better than emission standards in achieving emission targets without affecting industrial production.
James E. Hansen argued in Storms of My Grandchildren and in an open letter to then President Barack Obama that emissions trading would only make money for banks and hedge funds and allow 'business-as-usual' for the chief carbon-emitting industries.
- 4 Degrees and Beyond International Climate Conference
- Economics of global warming
- Cap and Share
- Carbon credit
- Carbon offset
- Congestion pricing
- Emissions Reduction Currency System
- Environmental economics
- Environmental impact of aviation
- Hypermobility (travel)
- Landfill Tax Credit Scheme (in the UK)
- Meat tax
- Pigou Club
- Polluter pays principle
- Tax horsepower
- World Bank Group (6 June 2019), State and Trends of Carbon Pricing 2019
- Akkaya, Sahin; Bakkal, Ufuk (1 June 2020). "Carbon Leakage Along with the Green Paradox Against Carbon Abatement? A Review Based on Carbon Tax". Folia Oeconomica Stetinensia. 20 (1): 25–44. doi:10.2478/foli-2020-0002. ISSN 1898-0198. S2CID 221372046.
- "Costs and Benefits to Agriculture from Climate Change Policy". www.card.iastate.edu.
- Bashmakov, I.; et al. (2001). "Collection Point and Tax Base". In B. Metz; et al. (eds.). Policies, Measures, and Instruments. Climate Change 2001: Mitigation. Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Print version: Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A. This version: GRID-Arendal website. Archived from the original on 28 December 2013. Retrieved 8 April 2011.
- "Effects of a Carbon Tax on the Economy and the Environment". Congressional Budget Office. 22 May 2013. Retrieved 29 September 2017.
- Kalkuhl, Matthias (September 2013). "Renewable energy subsidies: Second-best policy or fatal aberration for mitigation?" (PDF). Resource and Energy Economics. 35 (3): 217–234. doi:10.1016/j.reseneeco.2013.01.002. Retrieved 20 August 2018.
- Bashmakov, I.; et al. (2001). "Policies, Measures, and Instruments". In B. Metz; et al. (eds.). Climate Change 2001: Mitigation. Contribution of Working Group III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A. Retrieved 20 May 2009.
- Helm, D. (2005). "Economic Instruments and Environmental Policy". The Economic and Social Review. 36 (3): 4–5. Archived from the original on 1 May 2011. Retrieved 8 April 2011.
- "Carbon Taxes: What Can We Learn From International Experience?". Econofact. 3 May 2019. Retrieved 7 May 2019.
- Gupta, S.; et al. (2007). "Taxes and charges". Policies, instruments, and co-operative arrangements. Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (B. Metz et al. Eds.). Print version: Cambridge University Press, Cambridge, U.K., and New York, N.Y., U.S.A. This version: IPCC website. Archived from the original on 29 October 2010. Retrieved 18 March 2010.
- "Carbon Taxes II". igmchicago.org. Retrieved 6 July 2019.
- "Carbon Tax | IGM Forum". Retrieved 6 July 2019.
- "Climate Change Policies". igmchicago.org. Retrieved 6 July 2019.
- "ECONOMISTS' STATEMENT ON CARBON DIVIDENDS". clcouncil.org. 2019. Retrieved 18 February 2019.
- "77 Countries, 100+ Cities Commit to Net Zero Carbon Emissions by 2050 at Climate Summit". Retrieved 24 September 2019.
- World Bank Group (6 June 2019). "State and Trends of Carbon Pricing 2019". hdl:10986/31755. p. 24, Fig. 6.
- World Bank Group (6 June 2019). "State and Trends of Carbon Pricing 2019". hdl:10986/31755. p. 21.
- IPCC (2001). 7.34. In (section): Question 7. In (book): Climate Change 2001: Synthesis Report. A Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change (Watson, R.T. and the Core Writing Team (eds.)). Print version: Cambridge University Press, UK. This version: GRID-Arendal website. p. 122. Archived from the original on 1 May 2011. Retrieved 29 March 2011.
- Letter to U.S. Senators from 18 scientific organizations, by Alan I. Leshner (Executive Director, American Association for the Advancement of Science), Keith Sietter (Executive Director, American Meteorological Society), Douglas N. Arnold (President, Society for Industrial and Applied Mathematics), et al., 21 October 2009
- IPCC (2007). "Climate Change 2007: Synthesis Report" (PDF). International Panel Climate Change. p. 14.
- "Volcanic Gases and Their Effects", United States Geological Survey. Retrieved 10 August 2009
- Forster, P.; et al. (2007). "2.2 Concept of Radiative Forcing". In Solomon, S. D.; et al. (eds.). Changes in Atmospheric Constituents and in Radiative Forcing. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Climate Change 2007: The Physical Science Basis. Print version: Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A.. This version: IPCC website. Archived from the original on 29 October 2010. Retrieved 25 August 2010.
- Berdik, Chris (10 August 2014). "The unsung inventor of the carbon tax". The Boston Globe. Retrieved 11 August 2014.
- Article "The way forward. Second-best solutions", The Economist, special report on "Climate change", 28 November 2015, pages 15–16.
- Groosman, Britt. "2500 Pollution Tax" (PDF). Encyclopedia of Law and Economics. Edward Elgar and the University of Ghent. Archived from the original (PDF) on 1 December 2001. Retrieved 2 February 2014.
- Greenbaum, Allan (2010). Environmental Law and Policy in the Canadian Context. Concord, Ontario: Captus Press. pp. 240–241. ISBN 978-1-55322-171-5.
- Hepburn, C. (2006). "Regulation by prices, quantities or both: an update and an overview" (PDF). Oxford Review of Economic Policy. 22 (2): 226–247. doi:10.1093/oxrep/grj014. Retrieved 30 August 2009.
- Helm, D., ed. (2005). Climate change Policy: A Survey. In: "Climate Change Policy" (PDF). Oxford University Press. ISBN 978-0-19-928145-9. Archived from the original (PDF) on 1 May 2011. Retrieved 2 September 2009.
- Stern, N. (2007). 2.6 Non-marginal policy decisions. In: Stern Review on the Economics of Climate Change (pre-publication ed.). Print version: Cambridge University Press. Pre-publication version: HM Treasury website. pp. 34–35. Archived from the original on 9 March 2010. Retrieved 8 April 2011.
- Helm, D. (2008). "Climate-change policy: why has so little been achieved?". Oxford Review of Economic Policy. 24 (2): 211–238. doi:10.1093/oxrep/grn014. Retrieved 2 September 2009.
- Barker, T.; et al. (2007). "11.7.2 Carbon leakage. In (book chapter): Mitigation from a cross-sectoral perspective. In (book): Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (B. Metz et al. Eds.)". Print version: Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A.. This version: IPCC website. Archived from the original on 3 May 2010. Retrieved 5 April 2010.
- Barker, T.; et al. (2007). "Executive Summary. In (book chapter): Mitigation from a cross-sectoral perspective. In (book): Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (B. Metz et al. Eds.)". Print version: Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A.. This version: IPCC website. Archived from the original on 31 March 2010. Retrieved 5 April 2010.
- IPCC (2007). "Glossary A-D. In (section): Annex I. In (book): Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (B. Metz et al. Eds.)". Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A. Archived from the original on 3 May 2010. Retrieved 18 April 2010.
- Goldemberg, J.; et al. (1996). Introduction: scope of the assessment. In: Climate Change 1995: Economic and Social Dimensions of Climate Change. Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change (J.P. Bruce et al. Eds.). This version: Printed by Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A. PDF version: IPCC website. doi:10.2277/0521568544. ISBN 978-0-521-56854-8.
- Marcu, Andrei (December 2013). "Carbon Leakage: An overview" (PDF). Retrieved 21 May 2020.
- Gupta, S.; et al. (2007). "Coordination/harmonization of policies. In (book chapter): Policies, instruments, and co-operative arrangements. In: Climate Change 2007: Mitigation. Contribution of Working Group III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (B. Metz et al. Eds.)". Print version: Cambridge University Press, Cambridge, UK, and New York, N.Y., U.S.A. This version: IPCC website. Retrieved 2 April 2010.
- Farrahi Moghaddam, Reza; Farrahi Moghaddam, Fereydoun; Cheriet, Mohamed (2013). "A modified GHG intensity indicator: Toward a sustainable global economy based on a carbon border tax and emissions trading". Energy Policy. 57 (June): 363–380. arXiv:1110.1567. doi:10.1016/j.enpol.2013.02.012. S2CID 154257615.
- Condon, Madison; Ignaciuk, Ada (1 June 2013). "Border Carbon Adjustment and International Trade: A Literature Review". OECD Trade and Environment Working Papers.
- Pauwelyn, Joost (2012). "Carbon Leakage Measures and Border Tax Adjustments Under WTO Law". doi:10.2139/ssrn.2026879. S2CID 154066670. Cite journal requires
- Ireland, Robert. "Implications for Customs of climate change mitigation and adaptation policy options: a preliminary examination" (PDF). World Customs Journal. 4 (2): 21–36. Archived from the original (PDF) on 1 May 2011.
- Fisher 1996.
- Fisher 1996, p. 416.
- Alter, Lloyd (1 April 2019). "Let's rename "Embodied Carbon" to "Upfront Carbon Emissions"". TreeHugger. Retrieved 10 August 2019.
- "New Buildings: Embodied Carbon". Architecture 2030. Retrieved 10 August 2019.
- Webb, Steve (13 September 2019). "Time to stop tinkering with global warming". RIBA Journal. Retrieved 14 October 2019.
- Leddy, Bill. "Zero Carbon Architecture: the Business Case". arcCA Digest. Retrieved 10 August 2019.
- "The cost and effectiveness of policies to reduce vehicle emissions" (PDF). OECD ITF Joint Transport Research Centre. 1 February 2008. Archived from the original (PDF) on 5 March 2016.
- "Oil dependence : Is transport running out of affordable fuel?" (PDF). OECD ITF Joint Transport Research Centre. 16 November 2007. Archived from the original (PDF) on 23 November 2010.
- "Fuel and Energy Source Codes and Emission Coefficients". Voluntary Reporting of Greenhouse Gases Program. United States Department of Energy (DOE), Energy Information Administration (EIA). Archived from the original on 1 November 2004. Retrieved 15 April 2008.
- The calculation is: A lb CO
2/million BTU x (1 million BTU/ 1000000 BTU) x (10.3 BTU/Wh) x (1 tonne/2205 lb) x (,/tonne CO
2) = B $/kWh. See "Special topics relating to electricity" in the Energy Units document from the American Physical Society. Retrieved 16 July 2010
- "Energy Units". American Physical Society. 2011. Retrieved 5 May 2011.
- Larson, 17 June 2016 | Aaron (17 June 2016). "GE-Powered Combined Cycle Plant Sets New Efficiency Record". POWER Magazine.
- Andersson, Julius J. (November 2019). "Carbon Taxes and CO2 Emissions: Sweden as a Case Study". American Economic Journal: Economic Policy. 11 (4): 1–30. doi:10.1257/pol.20170144. ISSN 1945-7731.
- Murray, Brian; Rivers, Nicholas (1 November 2015). "British Columbia's revenue-neutral carbon tax: A review of the latest "grand experiment" in environmental policy". Energy Policy. 86: 674–683. doi:10.1016/j.enpol.2015.08.011. ISSN 0301-4215.
- Zhang, Kun; Wang, Qian; Liang, Qiao-Mei; Chen, Hao (1 May 2016). "A bibliometric analysis of research on carbon tax from 1989 to 2014". Renewable and Sustainable Energy Reviews. 58: 297–310. doi:10.1016/j.rser.2015.12.089. ISSN 1364-0321.
- Loewenstein, George; Ho, Emily H.; Hagmann, David (2019). "Nudging out support for a carbon tax". Nature Climate Change. 9 (6): 484–489. Bibcode:2019NatCC...9..484H. doi:10.1038/s41558-019-0474-0. ISSN 1758-6798. S2CID 182663891.
- Yamazaki, Akio (1 May 2017). "Jobs and climate policy: Evidence from British Columbia's revenue-neutral carbon tax". Journal of Environmental Economics and Management. 83: 197–216. doi:10.1016/j.jeem.2017.03.003. ISSN 0095-0696. S2CID 157293760.
- Driscoll, Daniel (January 2020). "Do Carbon Prices Limit Economic Growth?". Socius: Sociological Research for a Dynamic World. 6: 237802311989832. doi:10.1177/2378023119898326. ISSN 2378-0231.
- Callan, Tim; Lyons, Sean; Scott, Susan; Tol, Richard S. J.; Verde, Stefano (1 February 2009). "The distributional implications of a carbon tax in Ireland" (PDF). Energy Policy. 37 (2): 407–412. doi:10.1016/j.enpol.2008.08.034. hdl:10419/50117. ISSN 0301-4215.
- Berry, Audrey (1 January 2019). "The distributional effects of a carbon tax and its impact on fuel poverty: A microsimulation study in the French context". Energy Policy. 124: 81–94. doi:10.1016/j.enpol.2018.09.021. ISSN 0301-4215.
- Renner, Sebastian (1 January 2018). "Poverty and distributional effects of a carbon tax in Mexico" (PDF). Energy Policy. 112: 98–110. doi:10.1016/j.enpol.2017.10.011. ISSN 0301-4215.
- Mathur, Aparna; Morris, Adele C. (1 March 2014). "Distributional effects of a carbon tax in broader U.S. fiscal reform". Energy Policy. 66: 326–334. doi:10.1016/j.enpol.2013.11.047. ISSN 0301-4215. S2CID 13852853.
- Metcalf, Gilbert E. (1 June 2019). "The distributional impacts of U.S. energy policy". Energy Policy. 129: 926–929. doi:10.1016/j.enpol.2019.01.076. ISSN 0301-4215.
- "South Africa gears up for carbon tax". CPC News. 16 June 2010. Retrieved 16 June 2010.
- "South Africa: Fuel Emissions". allafrica.com. 4 August 2010. Retrieved 4 August 2010.
- Powels, David (8 August 2010). "What the new CO2 tax will mean". Moneyweb Network. Archived from the original on 1 May 2011. Retrieved 19 August 2010.
- "Carbon Tax – Zimbabwe Revenue Authority (ZIMRA)". zimra.co.zw.
- Jiawei, Zhang (11 May 2010). "China ministries propose carbon tax from 2012 -report". China Daily. Retrieved 10 July 2011.
- "China ministries propose carbon tax from 2012 -report". Alibaba News. Reuters. 10 May 2010. Retrieved 10 July 2011.
- Farah, Paolo Davide (2015). "China's Role and Contribution in The Global Governance of Climate Change: Institutional Adjustments for Carbon Tax Introduction, Collection and Management in China". Journal of World Energy Law and Business. 8 (6). SSRN 2695612.
- "China to Launch World's Largest Emissions Trading System". unfccc.int/.
- Dogra, Sapna (3 July 2010). "India sets $1/mt clean coal tax for domestic producers/importers". Platts International Coal Report.
- Pearson, Natalie Obiko (1 July 2010). "India to Raise $535 Million From Carbon Tax on Coal". Bloomberg Businessweek. Retrieved 14 May 2011.
- For more information about India's carbon tax and other efforts being taken to mitigate climate change refer to India's Climate Change Initiative for 2010
- "Japans Ministry of the Environment – Environmental taxation". Retrieved 1 February 2013.
- "Japan industry unites against carbon tax". Reuters. 7 December 2009. Retrieved 9 August 2010.
- "Budget 2017: Singapore to impose carbon tax on large direct emitters". CNA. 20 February 2017. Retrieved 7 June 2019.
- Tan, Audrey; Toh, Wen Li (19 February 2018). "Carbon tax Bill passed amid competitiveness concerns". The Straits Times. Retrieved 24 October 2019.
- "Carbon tax Bill passed amid competitiveness concerns". The Straits Times. 21 March 2018. Retrieved 4 June 2019.
- National Environment Agency, Carbon Tax. Retrieved 11 June 2020
- Chan, Yvonne (20 October 2009). "Taiwan plans taxes for energy and CO2 emissions by 2011". Businessgreen.
- "View from Taiwan – Environmental venting". Michael Turton. 29 October 2009. Archived from the original on 14 May 2011.
- "Taiwan plans energy tax starting in 2011". Earth Times. 19 October 2009.
- Peter, Hannam. "Carbon price helped curb emissions, ANU study finds". The Guardian. Archived from the original on 26 July 2014. Retrieved 17 July 2014.
- Cox, Lisa. "Carbon tax is gone: Repeal bills pass the Senate". The Guardian. Archived from the original on 26 July 2014. Retrieved 17 July 2014.
- Hannam, Peter (13 June 2014). "Fall in greenhouse gas emissions biggest in 24 years". The Sydney Morning Herald.
- Hodgson, Pete (4 May 2005). "Speech announcing carbon tax detail". Minister of Climate Change Issues, The Beehive, NZ Parliament. Retrieved 18 September 2009.
- NZPA (5 December 2005). "Carbon tax ditched". The New Zealand Herald. Archived from the original on 11 May 2011. Retrieved 24 September 2009.
- Parker, David (10 September 2008). "Historic climate change legislation passes". New Zealand Government Media Release. Archived from the original on 26 September 2008. Retrieved 10 September 2008.
- Andersen, Prof. Mikael Skou (2010). "Europe's experience with carbon-energy taxation". Sapiens. 3 (2). Retrieved 24 August 2011.
- Pearce, D. (2005). "The United Kingdom Climate Change Levy: A study in political economy" (PDF). OECD Environment Directorate, Centre for Tax Policy and Administration. Retrieved 30 August 2009.
- Kanter, James (22 June 2010). "Europe Considers New Taxes to Promote 'Clean' Energy". The New York Times.
- Kanter, James (22 June 2010). "Europe Considers New Taxes to Promote 'Clean' Energy". The New York Times.
- "Energy Policies of IEA Countries: Denmark Review" (PDF). Head of Publications Service, OECD/IEA 2, rue André-Pascal, 75775 Paris cedex 16, France. 2002. Archived from the original (PDF) on 6 March 2006. Retrieved 3 August 2010.
- Morris, David (1994). "Green Taxes". Institute for local self reliance. Archived from the original on 3 July 2010. Retrieved 12 August 2010.
- Vourc'h, A.; M. Jimenez (2000). "Enhancing Environmentally Sustainable Growth in Finland. Economics Department Working Papers No. 229" (PDF). OECD website. Retrieved 21 April 2010.
- Vourc'h, Ann; Jimenez, Miguel (19 January 2000). "Enhancing Environmentally Sustainable Growth in Finland". OECD Economics Department Working Papers. doi:10.1787/370821866730. Cite journal requires
- Saltmarsh, Matthew (23 March 2010). "France Abandons Plan for Carbon Tax". The New York Times. Retrieved 5 January 2011.
- Puljak, Nadeje (10 September 2009). "Sarkozy unveils new French carbon tax". The Sydney Morning Herald.
- Kanter, James (30 December 2009). "Council in France Blocks a Carbon Tax as Weak on Polluters". The New York Times.
- Décision n° 2009-599 DC du 29 décembre 2009 French Constitutional Council (in French)
- Evans-Pritchard, Ambrose (23 March 2010). "France Ditches Carbon Tax as Social Protests Mount". The Telegraph. London.
- Chrisafis, Angelique (10 September 2009). "Sarkozy Launches Carbon Tax to Help 'Save the Human Race'". The Guardian.
- Taxe Carbone: comment ça va marcher, The Tribune, 23 September 2013.
- "State and Trends of Carbon Pricing 2018" (PDF). World Bank. Retrieved 5 December 2018.
- Fiscalité des énergies Ministère de la transition écologique et solidaire, 24 January 2018.
- Macron scraps fuel tax rise in face of gilets Jaunes protests, The Guardian 5 December 20183.
- "Bund und Länder einigen sich im Streit über Klimapaket".
- Hoerner, J Andrew; Bosquet, Benoît. "Environmental Tax Reform: The European Experience" (PDF). Archived from the original (PDF) on 2 May 2013. Retrieved 13 August 2010.
- "188.8.131.52. Energy/carbon Taxes".
- "Climate Answers – Stephen Tindale » Blog Archive » Carbon and energy taxes in Europe". Climateanswers.info. Archived from the original on 13 August 2011. Retrieved 24 August 2016.
- "Carbon Tax on Packaging (Netherlands)", June 2009
- [email protected], Nest Design Letacek. "General Information Netherlands – PRO Europe". pro-e.org.
- "CE Delft", 2010
- "Afvalfonds Verpakkingen", 2014
- IEA (2005). "Energy Policies of IEA Countries – Norway- 2005 Review". International Energy Agency's website. p. 208. Archived from the original on 15 June 2010. Retrieved 21 April 2010.
- Bruvoll, Annegrete; Bodil Merethe Larsen (2002). "Greenhouse gas emissions in Norway Do carbon taxes work?" (PDF). Statistics Norway, Research Department. p. 28. Retrieved 15 September 2011.
- OECD (1998). "Economic/Fiscal Instruments: Taxation (i.e., carbon/energy) Annex I Expert Group on the United Nations Framework Convention on Climate Change" (PDF). Organisation for Economic Co-operation and Development website. p. 94. Retrieved 21 April 2010.
- IEA (2005). "Energy Policies of IEA Countries – Norway- 2005 Review". International Energy Agency's website. p. 204. Archived from the original on 22 October 2009. Retrieved 4 August 2010.
- Sumner, J, Bird, L, & Smith H (December 2009). "Carbon Taxes: A Review of Experience and Policy Design Considerations" (PDF). National Renewable Energy Laboratory. Retrieved 6 June 2011.CS1 maint: multiple names: authors list (link)
- "Norway: An Emissions Trading Case Study" (PDF). International Emissions Trading Association. International Emissions Trading Association. May 2015. Retrieved 14 November 2016.
- "Putting a Price on Carbon With A Tax" (PDF). World Bank. Retrieved 14 November 2016.
- "IEA – Norway". iea.org. Archived from the original on 15 November 2016. Retrieved 15 November 2016.
- "The government's revenues – Norwegianpetroleum.no – Norwegian Petroleum". Norwegian Petroleum. Retrieved 15 November 2016.
- "Norway's Fifth National Communication under the Framework Convention on Climate Change – Status report as of December 2009" (PDF). Norwegian Ministry of the Environment. Norwegian Ministry of the Environment. December 2009. Retrieved 14 November 2016.
- IEA (2007). "Energy Policies of IEA Countries – Ireland- 2007 Review". International Energy Agency's website. p. 154. Archived from the original on 22 October 2009. Retrieved 21 April 2010.
- The 2010 budget was delivered in December 2009
- "Carbon tax of €15 a tonne announced". Inside Ireland. 9 December 2009. Archived from the original on 28 March 2012. Retrieved 5 May 2011.
- €1 was equal to US$1.32 as of August 2010. See: 9 August 2010, Currency Calculator
- Bord Gáis Energy (2010). "Help & Questions – Home Gas – Carbon Tax". Bord Gáis Energy website. Retrieved 30 July 2010.
- Revenue. "Guide to Natural Gas Carbon Tax". Revenue: Irish tax and customs. Retrieved 6 January 2017.
- McGee, Harry (30 January 2010). "Producers of biofuels want changes to carbon tax". The Irish Times. Retrieved 12 August 2010.
- McGee, Harry (12 December 2009). "Carbon tax to drive up fuel costs". The Irish Times. Retrieved 2 August 2010.
- Carr, Aoife (2009). "Irish household income up 10% in 2007". The Irish Times. Retrieved 2 August 2010.
- "Multy woman among group arguing for aid for elderly". Westmeath Examiner. 28 July 2010. Archived from the original on 1 May 2011. Retrieved 3 August 2010.
- Revenue (2010). "Guide to Natural Gas Carbon Tax" (PDF). Revenue: Irish tax and customs. Retrieved 4 August 2010.
- "Irish Rural Link – Naisc Tuaithe na hÉireann". irishrurallink.ie.
- "A Carbon Tax for Ireland" (PDF). ESRI Working Paper. Archived from the original (PDF) on 16 October 2011.
- "Carbon Tax and Rural Ireland" (PDF). Irish Rural Link Carbon Tax Briefing Note. Archived from the original (PDF) on 14 May 2011.
- "Department of Finance briefing on the Irish Carbon Tax" (PDF). Archived from the original (PDF) on 14 November 2017. Retrieved 14 October 2013.
- "Budget 2012: The main points ... from mortgage relief to carbon tax". The Irish Independent. 6 December 2011.
- Jonsson, Samuel; Ydstedt, Anders; Asen, Elke (23 September 2020). "Looking Back on 30 Years of Carbon Taxes in Sweden". Tax Foundation. Retrieved 1 May 2021.
- "Carbon taxes raised to tackle climate change". The Local (Sweden's news in English). 17 September 2007. Retrieved 5 May 2011.
- IEA (2008). "Energy Policies of IEA Countries – Sweden- 2008 Review" (PDF). International Energy Agency website. p. 150. Archived from the original (PDF) on 14 July 2014. Retrieved 25 December 2014.
- Andersson, Julius J. (1 November 2019). "Carbon Taxes and CO2 Emissions: Sweden as a Case Study". American Economic Journal: Economic Policy. 11 (4): 1–30. doi:10.1257/pol.20170144. ISSN 1945-7731.
- Fouché, Gwladys (29 April 2008). "Sweden's carbon-tax solution to climate change puts it top of the green list". The Guardian. London.
- Carlgren, Fredrik (7 October 2015). "GDP – Gross Domestic Product – Ekonomifakta". Ekonomifakta.se. Retrieved 24 August 2016.
- Federal Office for the Environment (2010). "CO2 tax". Agency for the Environment FOEN. p. 1. Archived from the original on 26 August 2010. Retrieved 10 August 2010.
- IEA (2008). "CO2 Tax on Stationary Fuels". International Energy Agency's website. p. 1. Archived from the original on 1 May 2011. Retrieved 4 August 2010.
- IEA (2000). "Implementation of the Law on the Reduction of CO2 Emissions (CO2 Law)". International Energy Agency's website. p. 1. Archived from the original on 1 May 2011. Retrieved 4 August 2010.
- IEA (2007). "Energy Policies of IEA Countries – Switzerland" (PDF). International Energy Agency's website. p. 128. Archived from the original (PDF) on 1 May 2011. Retrieved 4 August 2010.
- Agency for the Environment FOEN (2010). "Redistribution of CO2 tax". Agency for the Environment FOE. p. 1. Archived from the original on 1 May 2011. Retrieved 9 August 2010.
- Parliament, European (2008). "Options and Implications of Linking the EU ETS with other Emissions Trading Schemes". European Parliament. p. 30. Archived from the original on 7 May 2009. Retrieved 4 August 2010.
- Agency for the Environment FOEN (2010). "Companies exceed CO2 targets in 2009". Federal Office for Environment. p. 1. Retrieved 9 August 2010.
- Broom, Giles (2009). "Swiss Favour Carbon Tax Over Emissions Trading". Swisster. p. 1. Archived from the original on 9 December 2009. Retrieved 4 August 2010.
- Agency for the Environment FOEN (2010). "Redistribution of the CO2 tax: The economy receives some 360 million francs". Federal Office for Environment. p. 1. Retrieved 9 August 2010.
- IEA (2007). "Energy Policies of IEA Countries – Switzerland" (PDF). International Energy Agency. p. 128. Archived from the original (PDF) on 1 May 2011. Retrieved 4 August 2010.
- Harvey, Fiona (24 February 2021). "Carbon tax would be popular with UK voters, poll suggests". The Guardian. Retrieved 28 April 2021.
- "Fuel duty - All you need to know". Politics.co.uk. Retrieved 28 April 2021.
- UK Climate Change Levy http://www.gov.uk/green-taxes-and-reliefs/climate-change-levy
- Hodgson, Camilla; Sheppard, David; Pickard, Jim (27 February 2021). "UK carbon trading system to launch in May". Financial Times. Retrieved 28 April 2021.
- Meyer, Peter (2010). "United States. Costa Rica: Background and U.S. Relations" (PDF). Archived from the original (PDF) on 7 August 2011.
- Costa Rica: Experts warn about the dangers of missing environmental targets (PDF), Agence France-Presse, 8 October 2009
- (PDF) https://web.archive.org/web/20110807121815/http://assets.opencrs.com/rpts/R40593_20100222.pdf. Archived from the original (PDF) on 7 August 2011. Retrieved 4 August 2010. Missing or empty
- "Costa Rica aims to win "carbon neutral" race". Reuters. 24 May 2007.
- Bryden, Joan. Liberals cast themselves in leader's light. Toronto Star. 20 October 2008.
- Bilton, Chris. Green shifting right? Archived 1 May 2011 at the Wayback Machine Eye Weekly. 7 January 2009
- Reid, Scott. The good, the (mostly) bad, and the faint signs of hope. The Globe and Mail. 26 December 2009.
- Findlay, Martha Hall. After the Green Shift. The Globe and Mail. 19 January 2009.
- "Case of the Conservatives' carbon amnesia". The Globe and Mail.
- Greenhouse Gas Pollution Pricing Act, in force since 21 June 2018 (page visited on 26 October 2018).
- Dana Nuccitelli, "Canada passed a carbon tax that will give most Canadians more money", The Guardian, 26 October 2018 (page visited on 26 October 2018).
- "Carbon Pricing in Canada (Updated 2020)". energyhub.org. 24 September 2020. Retrieved 27 September 2020.
- "Quebec to collect nation's 1st carbon tax".
- "Quebec Government to Implement Carbon Tax" (PDF). Archived from the original (PDF) on 12 August 2014.
- "Where Carbon Is Taxed". www.carbontax.org.
- "B.C. introduces carbon tax". CanWest MediaWorks Publications. 22 February 2008. Archived from the original on 10 November 2012. Retrieved 9 January 2013.
- "British Columbia Carbon Tax" (PDF). Ministry of Small Business and Revenue. February 2008. Archived from the original (PDF) on 13 May 2013.
- "B.C.'s Revenue-neutral Carbon Tax". Balanced Budget 2008 Backgrounder. Province of British Columbia. 1 July 2008. Retrieved 5 May 2011.
- "B.C. tax rebate cheques due out this week". CTV British Columbia News. 23 June 2008. Retrieved 9 January 2013.
- Ahearn, Ashley (7 January 2013). "Talk Of A Carbon Tax In The Northwest". EarthFix · Oregon Public Broadcasting. Archived from the original on 16 January 2013. Retrieved 9 January 2013.
"It makes sense, it's simple, it's well accepted," says Terry Lake, the minister of the environment of British Columbia.
- "What is a Carbon Tax?". Government of British Columbia. Retrieved 27 October 2014.
- Beaty, Ross; Lipsey, Richard; Elgie, Stewart (9 July 2014). "The shocking truth about B.C.'s carbon tax: It works". The Globe and Mail. Toronto, Ontario. Retrieved 10 December 2015.
- "British Columbia's carbon tax; The evidence mounts". The Economist. 31 July 2014. Retrieved 10 December 2015.
- Halstead, Ted (16 November 2015). "The Republican Solution for Climate Change; Republicans have the ability to offer a market-based solution to climate change, so why aren't they doing it?". The Atlantic. Washington, D.C. Retrieved 10 December 2015.
- "Specified Gas Emitters Regulation, Alta Reg 139/2007". Retrieved 11 December 2013.
- "Alberta's carbon-tax windfall dilemma". The Globe and Mail. 9 April 2013. Retrieved 12 December 2013.
- "The Tax Favored By Most Economists". Brookings. Retrieved 12 December 2013.
- "To Spur Innovation, What Price to Put on Oil Sands Carbon? – Tyee Solutions Society". tyeesolutions.org. Archived from the original on 15 November 2012.
- "Carbon tax proposal a non-starter in Alberta". CBC News. 8 January 2008. Retrieved 19 August 2010.
- "Go figure – a carbon tax crafted right here at home". The Calgary Herald. 9 March 2007. Archived from the original on 11 May 2011. Retrieved 19 August 2010.
- "Alberta boosts carbon tax to $20 a tonne starting in 2016 as part of climate change plan". 25 June 2015.
- "Alberta Extends Climate Change Rules, Including $15 Tonne Carbon Levy". Huffpost Alberta. Retrieved 27 November 2015.
- Simpson, Jeffery (22 January 2010). "Many Albertans agree: A carbon tax was the best solution". The Globe and Mail. Retrieved 19 August 2010.
- H.R. 6463 https://www.congress.gov/bill/115th-congress/house-bill/6463
- Roberts, David (22 June 2018). "Energy lobbyists have a new PAC to push for a carbon tax. Wait, what?". Vox.
- "Carbon copy". The Economist. Retrieved 1 May 2017.
- Climate Action Plan Tax, City of Boulder, Colorado https://web.archive.org/web/20110227063524/http://www.bouldercolorado.gov/index.php?option=com_content&task=view&id=7698&Itemid=2844. Archived from the original on 27 February 2011. Retrieved 20 March 2010. Missing or empty
|title=(help) 30 June 2010 accessed 5 August 2010
- Kelley, Katie (18 November 2006). "City Approves 'Carbon Tax' in Effort to Reduce Gas Emissions". The New York Times. Retrieved 28 January 2010.
- Sadasivam, Naveena (2 November 2015). "How Boulder Taxed its Way to a Climate-Friendlier Future".
- Air quality board to fine Bay Area polluters Archived 15 March 2011 at the Wayback Machine, San Francisco Chronicle, 22 May 2008
- "California and AB32". carbonshare.org. Archived from the original on 1 May 2011. Retrieved 8 August 2010.
- "Natural Sciences Repository Index 130". Itsgettinghotinhere.org. Archived from the original on 18 October 2014. Retrieved 24 August 2016.
- https://web.archive.org/web/20100717055202/http://solveclimate.com/blog/20100525/maryland-county-carbon-tax-law-could-set-example-rest-country. Archived from the original on 17 July 2010. Retrieved 4 August 2010. Missing or empty
- "Archive Template". Washingtonexaminer.com. 13 June 2013. Retrieved 24 August 2016.
- "Archive Template". Washingtonexaminer.com. 13 June 2013. Retrieved 24 August 2016.
- United States Court of Appeals for the Fourth Circuit (20 June 2011). "Genon Mid-Atlantic LLC v. Montgomery County, Maryland" (PDF). Retrieved 19 August 2019.
- County Council for Montgomery County, Maryland (10 July 2012). "Resolution No. 17-484. Repeal of Department of Finance Regulation 12-10, Excise Tax: Major Emitters of Carbon Dioxide" (PDF). Retrieved 19 August 2019.
- Gas Tax Now! First Principles, Greg Mankiw, Fortune, 24 May 1999.
- The Pigou Club Manifesto, Greg Mankiw, 20 October 2006.
- McMahon, Jeff (12 October 2014). "What Would Milton Friedman Do About Climate Change? Tax Carbon". Forbes. Retrieved 20 March 2016.
- Brown, Lester. Eco-Economy: Building an Economy for the Earth, Chapter 11. Tools for Restructuring the Economy: Tax Shifting, Earth Policy Institute (2001)
- "Economist Paul Volcker says steps to curb global warming would not devastate an economy". Associated Press. 6 February 2007. Archived from the original on 10 February 2009. Retrieved 15 April 2008.
- Bone, James (3 December 2009). "Climate scientist James Hansen hopes the summit will fail". The Times. London. Retrieved 10 December 2009.
- Randerson, James (2 January 2009). "Nasa climate expert makes personal appeal to Obama". The Guardian. London. Retrieved 10 December 2009.
- "Climate group forming in Oklahoma City". The Oklahoman. 11 December 2012. Retrieved 20 December 2012.
- Prasad, Monica (25 March 2008). "On Carbon, Tax and Don't Spend". The New York Times. Retrieved 4 August 2010.
- Tyson, Laura (28 June 2013). "The Myriad Benefits of a Carbon Tax". The New York Times. Retrieved 28 June 2013.
- "Exploring a Carbon Tax for Australia" Archived 14 May 2011 at the Wayback Machine, John Humphreys, The Centre for Independent Studies
- "Experts divided on carbon tax" Archived 9 June 2009 at the Wayback Machine, Matthew Warren, The Australian, 17 July 2008
- Tickell, Oliver (12 March 2009). "Replace Kyoto protocol with global carbon tax, says Yale economist". The Guardian. London. Retrieved 27 January 2010.
- Noah, Timothy (9 November 2006). The GOP Triangulates. Slate.
- "Energy and Enterprise Initiative". Retrieved 20 December 2012.
- "Tax on Carbon Emissions Gains Support", Juliet Eilperin and Steven Mufson, The Washington Post, 1 April 2007, Page A05
- Exxon supports carbon tax Archived 22 January 2009 at the Wayback Machine, Herald News Services, 9 January 2009
- "WEC Statement on I-732". Washington Environmental Council. 14 October 2016. Archived from the original on 6 March 2017. Retrieved 6 March 2017.
- "Best Way to Fight Climate Change? Put an Honest Price on Carbon". The New York Times. 29 October 2018. Retrieved 31 October 2018.
- unfccc.com. "Six Oil Majors Say: We Will Act Faster with Stronger Carbon Pricing Open Letter to UN and Governments". Archived from the original on 30 March 2018. Retrieved 3 December 2018.
- Mooney, Chris; Freedman, Andrew (10 October 2019). "The world needs a massive carbon tax in just 10 years to limit climate change, IMF says. The international organization suggests a cost of $75 per ton by 2030". The Washington Post. Archived from the original on 10 October 2019. Retrieved 14 October 2019.
- "Fred Smith Addresses the Topic of Carbon Tax". FedEx Multimedia Center. 27 April 2009. Retrieved 28 January 2010.
- Bittle, Scott; Johnson, Jean. "The Energy Debate We Should Be Having". Forbes. Retrieved 28 December 2015.
- Makower, Joel (8 April 2005). "Climate Change: Keeping Up with the Andersons". Two Steps forward. Retrieved 15 April 2008.
- DeBord, Matthew (2 December 2015). "Elon Musk Just Demanded a Carbon Tax in Paris". Business Insider. Retrieved 8 February 2017.
- "Companies Join Investors to Pledge Climate Action". 22 September 2014 – via www.bloomberg.com.
- "De CO2-taks: onethisch en gevaarlijk ondoeltreffend". MO*.
- "Developing Countries Are Responsible for 63 Percent of Current Carbon Emissions". Center For Global Development. Retrieved 12 December 2020.
- "How Developing Countries Can Reduce Emissions Without Compromising Growth". Earth.Org - Past | Present | Future. 16 December 2019. Retrieved 12 December 2020.
- Kugelmass, Bret. "Want to stop climate change? Embrace the nuclear option". USA TODAY. Retrieved 12 December 2020.
- Smith, S. (11 June 2008). "Environmentally Related Taxes and Tradable Permit Systems in Practice" (PDF). OECD, Environment Directorate, Centre for Tax Policy and Administration. Retrieved 26 August 2009.
- Jacobsen, Mark. "Environmental Economics Lecture 6–7." UCSD Econ 131. Econ 131 Lecture, 20 October 2016, San Diego, UCSD .
- "Which Is Better: Carbon Tax or Cap-and-Trade?" Grantham Research Institute on Climate Change and the Environment, London School of Economics, 21 March 2014, London School of Economics/GranthamInstitute/faqs/which-is-better-carbon-tax-or-cap-and-trade/.
- "Cap and Trade vs. Taxes." Center for Climate and Energy Solutions, Center for Climate and Energy Solutions, 24 October 2017, www.c2es.org/document/cap-and-trade-vs-taxes/.
- "EU Emissions Trading System (EU ETS)." Icapcarbonaction.com, International Carbon Action Partnership, 10 October 2017, icapcarbonaction.com/en/?option=com_etsmap&task=export&format=pdf&layout=list&systems=43
- "EU Emissions Trading System (EU ETS)." Icapcarbonaction.com, International Carbon Action Partnership, 10 October 2017, icapcarbonaction.com/en/?option=com_etsmap&task=export&format=pdf&layout=list&systems=43.
- Burtraw, Dallas; Palmer, Karen; Kahn, Danny (1 September 2010). "A symmetric safety valve". Energy Policy. Special Section on Carbon Emissions and Carbon Management in Cities with Regular Papers. 38 (9): 4921–4932. doi:10.1016/j.enpol.2010.03.068. ISSN 0301-4215.
- "An Emissions Assurance Mechanism: Adding Environmental Certainty to a Carbon Tax". Resources for the Future. Retrieved 11 October 2019.
- "The Four Pillars of Our Carbon Dividends Plan". clcouncil.org. Retrieved 11 October 2019.
- Fisher 1996, p. 430.
- Lu, Yujie; Zhu, Xinyuan; Cui, Qingbin (2012). "Effectiveness and equity implications of carbon policies in the United States construction industry". Building and Environment. Elsevier Ltd. 49: 259–269. doi:10.1016/j.buildenv.2011.10.002.
- James Hansen. Storms of My Grandchildren Bloomsbury, London 2009 ISBN 978-1-4088-0744-6 p. 241.
- Aldy, J. (9 August 2007). "Cap-and-Trade vs. Emission Tax: An Introduction". climatepolicy.org. ClimatePolicy. Retrieved 30 August 2009.
- Cuervo, J.; V.P. Gandhi (1 May 1998). "Carbon Taxes – Their Macroeconomic Effects and Prospects for Global Adoption – A Survey of the Literature. Working Paper No. 98/73". International Monetary Fund, Fiscal Affairs Department. Retrieved 12 May 2010.
- Dower, R.C.; M.B. Zimmerman (August 1992). "The right climate for carbon taxes: Creating economic incentives to protect the environment". World Resources Institute website. Retrieved 12 May 2010.
- Fisher, B.S.; et al. (1996). An Economic Assessment of Policy Instruments for Combating Climate Change. In: Climate Change 1995: Economic and Social Dimensions of Climate Change. Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change (J.P. Bruce et al. Eds.). This version: Printed by Cambridge University Press, Cambridge, UK, and New York. Web version: IPCC website. doi:10.2277/0521568544. ISBN 978-0-521-56854-8. | https://library.kiwix.org/wikipedia_en_top_maxi/A/Carbon_tax | 21 |
Thyroid cancer begins in the thyroid gland. The thyroid gland is located in the front of the neck, just below the larynx, which is called the voice box. The thyroid gland is part of the endocrine system, which regulates hormones in the body. The thyroid gland absorbs iodine from the bloodstream to produce thyroid hormones, which regulate a person’s metabolism.
A normal thyroid gland has 2 lobes, 1 on each side of the windpipe, joined by a narrow strip of tissue called the isthmus. A healthy thyroid gland is barely palpable, which means it is hard to find by touch. If a tumor develops in the thyroid, it is felt as a lump in the neck.
A swollen or enlarged thyroid gland is called a goiter, which may be caused when a person does not get enough iodine. However, most Americans receive enough iodine from salt, and a goiter under these circumstances has other causes.
Thyroid cancer starts when healthy cells in the thyroid change and grow out of control, forming a mass called a tumor. The thyroid gland contains 2 types of cells:
1. Follicular Cells
These cells are responsible for the production of thyroid hormone. Thyroid hormone is needed to live. The hormone controls the basic metabolism of the body. It controls how quickly calories are burned.
This can affect weight loss and weight gain, slow down or speed up the heartbeat, raise or lower body temperature, influence how quickly food moves through the digestive tract, control the way muscles contract, and control how quickly dying cells are replaced.
2. C Cells
These special cells of the thyroid make calcitonin, a hormone that participates in calcium metabolism.
A tumor can be cancerous or benign. A cancerous tumor is malignant, meaning it can grow and spread to other parts of the body. A benign tumor can grow but will not spread.
Thyroid tumors can also be called nodules, and about 90% of all thyroid nodules are benign.
Types of Thyroid Cancer
There are 5 main types of thyroid cancer:
1. Papillary Thyroid Cancer
Papillary thyroid cancer develops from follicular cells and usually grows slowly. It is the most common type of thyroid cancer. It is usually found in 1 lobe. Only 10% to 20% of papillary thyroid cancer appears in both lobes.
It is a differentiated thyroid cancer, meaning that the tumor looks similar to normal thyroid tissue under a microscope. Papillary thyroid cancer can often spread to lymph nodes.
2. Follicular Thyroid Cancer
Follicular thyroid cancer also develops from follicular cells and usually grows slowly. Follicular thyroid cancer is also a differentiated thyroid cancer, but it is far less common than papillary thyroid cancer. Follicular thyroid cancer rarely spreads to lymph nodes.
Follicular thyroid cancer and papillary thyroid cancer are the most common differentiated thyroid cancers. They are very often curable, especially when found early and in people younger than 50. Together, follicular and papillary thyroid cancers make up about 95% of all thyroid cancer.
3. Hurthle Cell Cancer
Hurthle cell cancer, also called Hurthle cell carcinoma, is cancer that arises from a certain type of follicular cell. Hurthle cell cancers are much more likely to spread to lymph nodes than other follicular thyroid cancers.
4. Medullary Thyroid Cancer (MTC)
MTC develops in the C cells and is sometimes the result of a genetic syndrome called multiple endocrine neoplasia type 2 (MEN2). This tumor has very little, if any, similarity to normal thyroid tissue. MTC can often be controlled if it is diagnosed and treated before it spreads to other parts of the body.
MTC accounts for about 3% of all thyroid cancers. About 25% of all MTC is familial. This means that family members of the patient may be at risk of a similar diagnosis. The RET proto-oncogene test can confirm whether family members also have familial MTC (FMTC).
5. Anaplastic Thyroid Cancer
This type is rare, accounting for about 1% of thyroid cancer. It is a fast-growing, poorly differentiated thyroid cancer that may start from differentiated thyroid cancer or a benign thyroid tumor.
Anaplastic thyroid cancer can be subtyped into giant cell classifications. Because this type of thyroid cancer grows so quickly, it is more difficult to treat successfully.
In addition, other types of cancer may start in or around the thyroid gland.
A risk factor is anything that increases a person’s chance of developing cancer. Although risk factors often influence the development of cancer, most do not directly cause cancer.
Some people with several risk factors never develop cancer, while others with no known risk factors do. Knowing your risk factors and talking about them with your doctor may help you make more informed lifestyle and health care choices.
The following factors may raise a person’s risk of developing thyroid cancer:
1. Gender
Women are diagnosed with 3 of every 4 thyroid cancers.
2. Age
Thyroid cancer can occur at any age, but about two-thirds of all cases are found in people between the ages of 20 and 55. Anaplastic thyroid cancer is usually diagnosed after age 60. Older infants (10 months and older) and adolescents can develop MTC, especially if they carry the RET proto-oncogene mutation (see below).
3. Genetics and Family History
Some types of thyroid cancer are associated with genetics. Below are some key facts about this disease, genes, and family history.
An abnormal RET oncogene, which can be passed from parent to child, may cause MTC. Not everyone with an altered RET oncogene will develop cancer. Blood tests and genetic tests can detect the gene.
Once the altered RET oncogene is identified, a doctor may recommend surgery to remove the thyroid gland before cancer develops.
People with MTC are encouraged to have genetic testing to determine if a mutation of the RET proto-oncogene is present. If so, genetic testing of parents, siblings, and children will be recommended.
- A family history of MTC increases a person’s risk. People with MEN2 syndrome are also at risk for developing other types of cancers.
- A family history of precancerous polyps in the colon, also called the large intestine, increases the risk of developing papillary thyroid cancer.
4. Radiation Exposure
Exposure to moderate levels of radiation to the head and neck may increase the risk of papillary and follicular thyroid cancers. Such sources of exposure include:
- Low-dose to moderate-dose x-ray treatments used before 1950 to treat children with acne, tonsillitis, and other head and neck problems.
- Radiation therapy for Hodgkin lymphoma or other forms of lymphoma in the head and neck.
- Exposure to radioactive iodine, also called I-131 or RAI, especially in childhood.
- Exposure to ionizing radiation, including radioactive fallout from atomic weapons testing during the 1950s and 1960s and nuclear power plant fallout. Examples include the 1986 Chernobyl nuclear power plant accident and the 2011 earthquake that damaged nuclear power plants in Fukushima, Japan. Another source of I-131 is environmental releases from atomic weapon production plants.
5. Diet Low in Iodine
Iodine is needed for normal thyroid function. In the United States, iodine is added to salt to help prevent thyroid problems.
6. Race and Ethnicity
White people and Asian people are more likely to develop thyroid cancer, but this disease can affect a person of any race or ethnicity.
7. Breast Cancer
A recent study showed that breast cancer survivors may have a higher risk of thyroid cancer, particularly in the first 5 years after diagnosis and for those diagnosed with breast cancer at a younger age. This finding continues to be examined by researchers.
Symptoms and Signs
It is common for people with thyroid cancer to have few or no symptoms. Thyroid cancers are often diagnosed by routine examination of the neck during a general physical exam.
They are also found incidentally on x-rays or other imaging scans performed for other reasons. People with thyroid cancer may experience the following symptoms or signs.
Sometimes, people with thyroid cancer do not have any of these changes. Or, the cause of a symptom may be a different medical condition that is not cancer.
- A lump in the front of the neck, near the Adam’s apple
- Swollen glands in the neck
- Difficulty swallowing
- Difficulty breathing
- Pain in the throat or neck
- A cough that persists and is not caused by a cold
If you are concerned about any changes you experience, please talk with your doctor. Your doctor will ask how long and how often you’ve been experiencing the symptom(s), in addition to other questions. This is to help figure out the cause of the problem, called a diagnosis.
These symptoms may be caused by thyroid cancer; other thyroid problems, such as a goiter; or a condition not related to the thyroid, such as an infection.
If cancer is diagnosed, relieving symptoms remains an important part of cancer care and treatment. This may be called palliative care or supportive care.
It is often started soon after diagnosis and continued throughout treatment. Be sure to talk with your health care team about the symptoms you experience, including any new symptoms or a change in symptoms.
Doctors use many tests to find, or diagnose, cancer. They also do tests to learn if cancer has spread to another part of the body from where it started. If this happens, it is called metastasis.
For example, imaging tests can show if the cancer has spread. Imaging tests show pictures of the inside of the body. Doctors may also do tests to learn which treatments could work best.
For most types of cancer, a biopsy is the only sure way for the doctor to know if an area of the body has cancer. In a biopsy, the doctor takes a small sample of tissue for testing in a laboratory. If a biopsy is not possible, the doctor may suggest other tests that will help make a diagnosis.
Your doctor may consider these factors when choosing a diagnostic test:
- The type of cancer suspected
- Your signs and symptoms
- Your age and general health
- The results of earlier medical tests
This section describes options for diagnosing thyroid cancer. Not all tests listed below will be used for every person.
1. Physical Examination
The doctor will feel the neck, thyroid gland, throat, and lymph nodes (the tiny, bean-shaped organs that help fight infection) in the neck for unusual growths or swelling. If surgery is recommended, the larynx may be examined at the same time with a laryngoscope, which is a thin, flexible tube with a light.
2. Blood Tests
There are several types of blood tests that may be done during diagnosis and to monitor the patient during and after treatment. This includes tests called tumor marker tests. Tumor markers are substances found at higher-than-normal levels in the blood, urine, or body tissues of some people with cancer.
a. Thyroid Hormone Levels
As explained in the Introduction, thyroid hormones regulate a person’s metabolism. The doctor will use this test to find out the current levels of the thyroid hormones triiodothyronine (T3) and thyroxine (T4) in the body.
b. Thyroid-Stimulating Hormone (TSH)
This blood test measures the level of TSH, a hormone produced by the pituitary gland near the brain. If the body is in need of thyroid hormone, the pituitary gland releases TSH to stimulate production.
c. Tg and TgAb
Thyroglobulin (Tg) is a protein made naturally by the thyroid as well as by differentiated thyroid cancer. After treatment, there should be very low levels of thyroglobulin in the blood since the goal of treatment is to remove all thyroid cells.
If Tg is rising after surgery and/or radioactive iodine, it may be a sign of more cancer. A tumor marker test may be done to measure the body’s Tg level before, during, and/or after treatment.
There is also a test for thyroglobulin antibodies (TgAb), proteins that some patients’ bodies produce against thyroglobulin. If TgAb is present, it can interfere with the results of the Tg level test.
d. Medullary Type-Specific Tests
If MTC is a possibility, the doctor will order tumor marker tests to check for high calcitonin and carcinoembryonic antigen (CEA) levels. The doctor should also recommend a blood test to detect the presence of a RET proto-oncogene mutation, particularly if there is a family history of MTC.
3. Ultrasound
An ultrasound uses sound waves to create a picture of the internal organs. An ultrasound wand or probe is guided over the skin of the neck area.
High-frequency sound waves create a pattern of echoes that show the doctor the size of the thyroid gland and specific information about any nodules, including whether a nodule is solid or a fluid-filled sac called a cyst.
4. Molecular Testing of the Nodule Sample
Your doctor may recommend running laboratory tests on a tumor sample to identify specific genes, proteins, and other factors unique to the tumor.
Genetic analysis of your thyroid nodule may allow you to understand the risk of the thyroid nodule being cancerous. Other genetic, protein, and molecular analysis of thyroid cancers can help determine your treatment options, including types of treatments called targeted therapy.
5. Radionuclide Scanning
This test may also be called a whole-body scan. The scan is done using a very small, harmless amount of radioactive iodine, either I-131 or I-123, called a tracer. It is used most often to learn more about a thyroid nodule. In this test, the patient swallows the tracer, which is absorbed by thyroid cells.
This makes the thyroid cells appear on the scan image, allowing the doctor to see differences between those cells and other body structures.
6. X-Ray
An x-ray is a way to create a picture of the structures inside of the body, using a small amount of radiation. For instance, a chest x-ray can help doctors determine if the cancer has spread to the lungs.
7. Computed Tomography (CT or CAT) Scan
A CT scan creates a 3-dimensional picture of the inside of the body using x-rays taken from different angles. A computer combines these pictures into a detailed, cross-sectional view that shows any abnormalities or tumors.
A CT scan can be used to measure the tumor’s size. Sometimes, a special dye called a contrast medium is given before the scan to provide better detail on the image. This dye can be injected into a patient’s vein or given as a pill to swallow.
CT scans are often used in people with thyroid cancer to examine parts of the neck that cannot be seen with ultrasound (see above). Also, CT scans of the chest may be needed to look to see if thyroid cancer has spread to that area of the body.
CT scans of the abdomen may be used to see if thyroid cancer has spread to the liver or other sites. Patients with the hereditary form of medullary thyroid cancers may be at risk for developing other types of endocrine tumors in the abdomen; these patients may also have a CT scan of the abdomen.
8. Positron Emission Tomography (PET) or PET-CT Scan
A PET scan is usually combined with a CT scan (see above), called a PET-CT scan. But you may hear your doctor refer to this procedure just as a PET scan. A PET scan is a way to create pictures of organs and tissues inside the body.
A small amount of a radioactive sugar substance is injected into the patient’s body. This sugar substance is taken up by cells that use the most energy. Because cancer tends to use energy actively, it absorbs more of the radioactive substance. A scanner then detects this substance to produce images of the inside of the body.
9. Biopsy
A biopsy is the removal of a small amount of tissue for examination under a microscope. Other tests can suggest that cancer is present, but only a biopsy can make a definite diagnosis. The way to determine whether a nodule is cancerous or benign is through a biopsy.
During this procedure, the doctor removes cells from the nodule that are then examined by a cytopathologist. A cytopathologist is a doctor who specializes in analyzing cells and tissue to diagnose disease. This test is often done with the help of ultrasound.
A biopsy for thyroid nodules will be done in 1 of 2 ways:
a. Fine Needle Aspiration
This procedure is usually performed in a doctor’s office or clinic. It is an important diagnostic step to find out if a thyroid nodule is benign or cancerous. A local anesthetic may be injected into the skin to numb the area before the biopsy.
The doctor inserts a thin needle into the nodule and removes cells and some fluid. The procedure may be repeated 2 or 3 times to get samples from different areas of the nodule. A report of the results of this test is created by the cytopathologist.
A pathologist is a doctor who specializes in interpreting laboratory tests and evaluating cells, tissues, and organs to diagnose disease. The test can be positive, meaning there are cancerous cells, or negative, meaning there are no cancerous cells. The test can also be undetermined, meaning it is not clear whether cancer is there.
b. Surgical Biopsy
If the needle aspiration biopsy is not clear, the doctor may suggest a biopsy in which the nodule and possibly the affected lobe of the thyroid will be removed using surgery.
Removal of the nodule alone is usually not recommended because the potential malignancy may be removed incompletely, without adequate margins (the area of tissue around the nodule).
This procedure is usually done under general anesthesia. It may also require a hospital stay.
Parties try to influence the laws passed by Congress and the president after they are in office.
Their political power is huge.
Americans have long been suspicious of parties in the electorate and of party rule.
The separation of powers into different branches was meant to blunt any attempts by a "faction" or party to gain control of government.
In some elections, one party wins the presidency but fails to take control of Congress.
In some elections, voters give control of Congress to the party that is against the president in order to rein in the executive.
Our political system makes it difficult for any party to gain complete control of American government, and when one does, unified government is often short-lived.
The separation of powers in American government has not put the parties out of business.
The operations of the legislature and the conduct of elections are carried out by the Democratic and Republican parties.
It is difficult to imagine how the electoral system would work without political parties.
There was a campaign to get the United States to approve the Constitution.
In a democracy, parties form to solve problems of rationality and collective action.
They offer clear choices to voters, lowering the costs of collecting information about the candidates, and making it easier for voters to hold government accountable.
The transition from elections to government is made easier by the parties.
They pay the costs of bringing together representatives of different groups into a coalition that can act together in government.
Parties link elections to governing.
In this chapter we highlight some of the general functions of parties in a democracy, but we are especially attentive to party politics in the United States.
The United States has just two major parties, the Democrats and the Republicans.
The American two-party system is durable and flexible.
Since the 1850s, third parties have not been able to compete with the Democrats and Republicans.
Political parties are teams of politicians, activists, and interest groups.
Democrats and Republicans compete for most offices in the United States, which has a two-party system.
The two-party system is a result of the form of government and the use of single-member legislative districts.
Parties have different views on how government should operate and what laws should be enacted.
They serve different interests and communities.
The parties simplify the choices that voters must make and reduce the costs of gathering information about how to vote by offering distinctive "brands."
The legislative and executive branches of the U.S. government are organized by the parties, with the party that won a majority of seats controlling most of the key positions and levers of power.
Occasionally, a governor or legislator runs as a third-party candidate, but those independent candidacies usually fail unless the person eventually ties himself or herself to one of the parties.
Donald Trump, a reality-TV star and businessman, and Bernie Sanders, a self-proclaimed democratic socialist, both candidates for the presidency of the United States, gained widespread support in the 2016 election by attaching themselves to the Republican Party and the Democratic Party, respectively.
Hillary Clinton faced a serious challenge from Sanders on the left in the Democratic primaries.
Trump defeated a field of Republican politicians.
Donald Trump was not a traditional Republican.
During the 2016 presidential election, many Republican leaders and politicians, including the party's 2012 nominee, refused to endorse Trump as the GOP's nominee; some even crossed party lines to endorse Clinton.
Many Americans have a lack of trust in the establishment.
Without affiliating with one of the two major parties, neither candidate would have been as successful.
When a group breaks away from one of the parties, it usually rejoins the fold or moves into another party.
The parties are not static.
The Democrats have moved from a southern base to a northern one, while the Republicans have moved from the Northeast and Midwest to the South.
The two parties have adapted to the changes in American society.
The institutions of a two-party system mean that the choice between the parties translates directly into control of the government.
The majority of seats in the House and Senate are up for election.
The presidency is decided by the majority of electoral votes.
We choose between those who are in power and those who want to be.
Voting against the party that holds the presidency and Congress means changing the government.
Most parliamentary systems that allocate seats to parties based on the number of votes won nationwide have more than two parties.
Governments are usually made up of coalitions of several parties in such systems.
It is hard to assign blame to any one party in a coalition government, and it is difficult to anticipate which coalitions might form.
A two-party system has great simplicity.
The Democratic and Republican parties have presented distinct visions for governing.
They keep successful third parties at bay by capturing a range of ideological views.
There is very little chance for a third party to enter races and win seats in Congress.
This logic reflects the rationality principle and is known as Duverger's law.
Any third-party movement that attempts to enter the American party system would likely fail to win, or worse, would improve the electoral fortunes of the party it most dislikes.
Voters don't want to waste a vote on a losing cause.
There are two successful parties.
U.S. government institutions and electoral rules create strong pressures to maintain just two parties, distinctive in their plans for governing but expansive in the interests that they encompass.
The presidency or one or both chambers of Congress will be controlled by one party, which simplifies politics inside governing institutions.
Voters can identify with one of the two major parties or use the party labels to figure out the most effective way to vote.
Parties are not purely good, however.
They do not solve the problems of rationality and collective action simply to make democracy work better.
These problems are opportunities for party members to get elected, influence public policy, and make a profit.
The nominating system and the party organization give politicians a clear path to power.
The party could be used to pull government policies in a direction more favorable to the party's views.
For interest groups, the parties offer the potential benefit of being closer to power.
The influence of activists, party leaders, organized interests, and local bosses becomes too great at times.
Efforts to prevent party bosses and interest groups from taking advantage of their power have resulted in regulations on campaign contributions and government contracting.
The reforms have weakened the party organizations, but the parties have always found new resources.
American history has seen eras of strong party organization and periods of relative party weakness as a result of these actions and reactions.
American politics is characterized by strong party organizations and disciplined legislative parties.
They offer the American voter a simple strategy for changing the direction of government.
They can be distinguished from interest groups by their orientation toward government.
A party seeks to control the entire government.
Interest groups are concerned with electing politicians who are inclined toward their policy positions.
Interest groups don't sponsor candidates directly, and between elections they usually accept the government and its personnel as given and try to influence government policies through them.
Three problems are solved by political parties.
The first is the collective action problem of elections: a candidate for office must attract campaign funds, assemble a group of activists and workers, mobilize prospective voters, and persuade them to vote for her.
The second is collective action in government: the give-and-take between the legislature and the executive can make or break policy success.
Competition among politicians is the third problem.
Politicians seek success for themselves and the organization at the same time.
In furthering their own ambitions, politicians can act in ways that serve their own interests but that can undermine the collective ambitions of fellow partisans.
In the following sections, we look at each of the problems.
Each is a problem faced by politicians.
Parties form to serve politicians' interests.
The political parties in the United States were formed by politicians.
Running for office, organizing one's supporters, and forming a government are some of the basic tasks of political life.
Political parties can only be understood in the context of elections.
American parties take their structure from the electoral process.
For every district where an election is held, there should be a party unit.
The brand name, resources, buzz, and link to the larger national organization help arouse interest in the party's candidates and stimulates commitment by voters.
These activities allow voters to understand the choice of candidates and overcome the free riding that reduces turnout in general elections.
They suggest that parties in the legislature are electoral machines that serve to preserve and enhance party reputation, thereby giving meaning to the party labels when elections arecontested.
Parties keep order in their ranks so that individual actions by members don't undermine the party label.
This is an especially challenging task for party leaders when there is diversity within each party, as has been the case in American political history.
The electoral competition by groups is enabled by the party organization.
The Chamber of Commerce and the National Association of Manufacturers are two organizations that have long been members of the Republican Party.
Labor unions and reformers have been aligned with the Democratic Party since the 1930s.
Large groups that don't have a lot of money find their voice in the party system.
Women's organizations worked closely with the Progressives inside the Republican Party in order to gain the right to vote.
Changing social issues and party strategies led many newer women's groups, such as the National Organization for Women, to align with the Democratic Party.
Immigrant groups have aligned with the parties in the past.
Irish immigrants attached themselves to the Democrats, whose urban political organizations helped them find jobs and negotiate the immigration system, Italian immigrants tended toward Republicans, and Cubans have historically aligned with the Republicans because of the party's immigration policies.
Disaffection with liberal policies concerning school prayer, funding of religious schools, abortion, and other social issues led fundamentalist Christians to align with the Republican Party.
Collective action by groups and party electoral strategy are two different things.
Electoral resources, including a reliable voting bloc, money, personnel, and even candidates, are provided by groups that align with a party.
The interests gain influence over public policy when their party wins.
An organized interest may suffer if the party it supports loses the election.
The process of making policy involves political parties.
Parties are coalitions of people with similar interests who support one another's programs and initiatives.
A common party label gives members a reason to work together.
Parties facilitate the policy-making process because they are permanent coalitions.
The business of government would slow to a crawl if alliances had to be formed from scratch for each proposal.
The time and effort needed to advance a legislative proposal is greatly reduced by parties creating a basis for coalition.
Even if the party is in the minority, every president works with his party's leadership in the House and Senate to ensure that the executive's agenda is supported.
Without party support, the president would have to form a completely new coalition for every policy proposal.
There is a need for action on a crucial issue.
In 2008, when the financial industry was collapsing, President George W. Bush proposed a $700 billion intervention.
The majority of Democratic votes in the House were in favor of the Republican president's plan, but Republican leadership in the House did not have enough votes to pass it.
The failure of the Republicans to come to agreement on this plan in advance doomed the president's proposal.
Republican efforts to repeal theAffordable Care Act were defeated in the Senate by a few "no" votes.
The road to policy can be difficult even when party coalitions agree on an issue.
Individual politicians are helped by parties.
The brand names that parties give are important electoral assets.
Once their candidates are elected, parties give these politicians a basis for coordination, common cause, cooperation, and joint enterprise.
Individual ambition can undermine any base for cooperation.
Political parties regulate career advancement, provide for orderly resolution of ambitious competition, and attend to the post-career care of party officials in order to rescue coordination and cooperation.
Simple devices such as primaries give a context in which to resolve clashing electoral ambitions.
The Democratic Committee on Committees in the House and similar bodies for the Republicans and the Senate resolve competing claims for power positions.
Incentives for politicians to conduct their campaigns and vote in line with their party are provided by centralized fund-raising by organizations.
Politics is not about foot soldiers walking in lockstep but about ambitious and self-sufficient individuals seeking power.
The burnishing of individual careers is a formula for destructive competition in which the dividends of cooperation are rarely reaped.
Political parties try to capture some of those dividends by providing a structure in which ambition is not suppressed altogether but is not so destructive.
Political action involves merging people's individual preferences.
Collective action is easy for interest groups, people who share a common policy goal work together to achieve it.
Collective action is more complicated for political parties.
Parties want to achieve policy goals and capture public offices.
Party leaders sometimes have to compromise on policy goals in order to improve their electoral chances.
In 2016 there were two major anti-abortion protests in the US.
On the other hand, the Republican Party's opposition to abortion is reflected in the party's platform every four years.
Despite electing three presidents since 1980 and abortion, policy action can be complex.
The first presidential between principles and practices was in 1976 after the Court's decision on the American party system.
Republicans rhetoric is anti against abortion but it's a moral and personal action that runs counter to the views of many people.
The 2012 and 2016 Republican Party platforms are called more difficult because of the ate.
Parties are involved in nominations and elections to make it easier for citizens to choose their leaders.
They help solve the problems of collective action and ambition.
They influence the institutions of government, providing leadership as well as organization of the various congressional committees and activities on the floor in each chamber.
Problems of collective choice are solved by them.
The recruitment of candidates for office is one of the most important party activities.
Thousands of state and local offices as well as for congressional seats must be found for candidates each election year.
When an incumbent is not seeking reelection or when an incumbent in the opposing party is vulnerable, party leaders look for strong candidates and interest them in entering the campaign.
In some states, the dates for candidates to file for office for the November election come as early as January.
The parties' message and fortunes in the general election are shaped by candidate recruitment in the spring.
The Republicans have done a better job of electing candidates to state legislative seats over the past decade.
Republicans held more than 4,100 state legislative seats after the 2016 election.
The parties use their recruitment efforts to target segments of the electorate where they want to strengthen their appeal.
An ideal candidate will be charismatic, organized, knowledgeable, and an excellent debater, as well as have the ability to raise enough money to mount a serious campaign.
Party leaders don't give financial backing to candidates who can't raise enough money on their own.
Senate seat, several million dollars; and upward of $1 billion for the presidency.
In recent years, many potential congressional candidates declined to run, saying they were reluctant to leave their homes and families for the busy life of a member of Congress.
There are only a few provisions for elections in the Constitution.
The power to set the "Times, Places and Manner of holding Elections" is given to the states.
Congress has the power to make such laws if it chooses to do so.
Congress has occasionally passed laws regulating elections, congressional districting, and campaign practices, as well as amending the Constitution to expand the right to participate in elections.
The Constitution and the laws only set citizenship and age requirements for candidates.
The president must be at least 35 years of age, a natural born citizen, and a resident of the United States for at least 14 years.
A senator needs to be at least 30 years old, a U.S. citizen for at least 9 years, and a resident of the state he or she represents.
A member of the House must be at least 25 years old, a U.S. citizen for 7 years, and a resident of the state he or she represents.
The most difficult business of the parties is nomination.
The process by which a political party selects their candidate for a public office is the same as it is when a presidential candidate is eliminated.
A nominating convention is a formal meeting of members of a political party that is bound by rules.
Conventions are meetings of delegates elected by party members from the relevant county.
Each party's national convention delegates are chosen by party members on a state- by-state basis, and there is no single national delegate selection process.
The nominees are selected in a primary election.
It is possible to participate in an election.
They just go to the polling place and ask for the ballot of a particular party.
The open primary allows voters to consider candidates and issues before making a decision on which party to vote for.
It's open on the day of the primary.
Only a small number of states, including Connecticut, Delaware, and Utah, provide for state conventions to nominate candidates for statewide offices, and even those states also use primaries whenever a substantial minority of delegates has voted for one of the defeated candidates.
In either case, primaries are more open than convention or caucuses to new issues.
In several states, the presidential nominating process begins with caucuses.
The caucuses are open to registered voters, but the nomination process consists of lengthy discussions among those present, and the meetings can last several hours.
The county convention selects delegates to go to the state party convention at the local caucuses.
The state party convention is where delegates are elected to the national convention.
Candidates must win both the primary and the general election in order to hold office because of the shift from party conventions to primary elections and caucuses.
The introduction of primary elections may have contributed to the rise of candidate-centered politics by creating advantages for politicians who are particularly strong campaigners but who may be less effective at governing.
The institution principle suggests that institutions matter because they encourage or discourage particular types of candidates.
Immediately after the nominations, the election period begins.
This has been a time of glory for the political parties, with their popular base of support fully displayed.
The local party workforces are activated in the form of all the paraphernalia of the party committees.
The first step in the electoral process is voter registration.
At one time party workers were responsible for this activity, but they have been replaced by civic groups such as the League of Women Voters, unions, and chambers of commerce.
On Election Day, those who have registered must decide if they want to go to the polling place, stand in line, or vote.
It is possible for political parties, candidates, and campaigning to make a difference in persuading eligible voters to vote.
Parties help overcome the free-rider problem by getting the voters to support the candidates.
In recent years, the parties and not-for-profit groups have raised millions of dollars for election organizing and advertising.
Legions of workers have used new technologies to build and communicate with their supporters.
The "netroots" organizations of politics are what they are.
To comply with federal election and tax law, groups must maintain independence from the political parties, even though they have the same objectives as the parties.
The shadow appendages of the two parties mobilize supporters for one or the other.
The netroots have become an important part of campaign organizations, and these new forms of direct campaigning have increased voter turnout.
It is easier for voters to choose a party.
It's argued that we should vote for the best person regardless of party affiliation.
Only a few candidates for president, U.S. Senate, U.S. House, and governor are well known to voters.
Voters' familiarity with the candidates decreases as one moves down the ballot.
Voters might have difficulty making informed decisions without party labels.
Voters benefit from candidates' party affiliations.
Voters can infer from party labels how the candidate will behave once elected, without knowing much about the candidate.
The Republican Party favors a limited government role in the economy and reduced government spending, while the Democratic Party favors more government regulation of the economy and a larger public sector.
Civil rights for women, sexual and racial minorities, and a secular approach to religion are all favored by the Democrats.
Government participation in expanding the role of religious organizations in civil society is supported by the Republicans.
In the 1930s, the parties' positions on the economy were solidified, while their division on social issues emerged in the 1960s and 1970s.
The Democrats and Republicans are both liberal and conservative.
Most Americans identify with one of the two parties and are likely to vote with that party, but even those who do not can derive value from the party labels.
If a group coordinates its activities with a political party, it is subject to additional reporting requirements and contribution limits, and political action can violate the conditions for tax-exempt status of nonprofits.
The voter's choice to evaluate two competing policy positions or the performance of those in office can be simplified by party labels.
She will vote against the president's party on the ballot.
She will vote to keep the president and his party in power.
The independent voter needs parties to simplify their choice.
The parties make it easier to hold government accountable.
In good times and bad times, people can vote against the party in power.
If unpopular legislation is enacted, they can vote against the party in power.
The problem of collective responsibility is one of the most important collective action problems facing American democracy.
Each race for legislator or executive would become an isolated event if every politician ran on her own.
It would be hard for voters to send a message to government that they want it to go in a different direction.
Politicians benefit from party labels because they lend meaning to elections.
Most districts and states don't have to educate voters about what they stand for if they have recognizable labels.
Democrats and Republicans are usually enough.
Like minded people identify with the organizations on the labels.
People who broadly share the principles of a party and who wish to participate on a high level will attend party meetings, run for leadership positions in local and state party organizations, attend state and national conventions, and even run for elected office.
The division between the parties is reinforced by successive elections.
The two major parties attract a lot of groups and ideas.
They are positioning themselves as broad coalitions so that the Democrats and Republicans can win control of Congress.
The studies of delegates were done by Walter J.
Similar sorting is found in surveys of candidates.
The platforms of the Democratic and Republican parties are shaped by the coalitions that come together.
What interests and social groups align with the parties are determined by the political coalitions that party leaders assemble.
The Democratic Party believes that regulation is necessary to ensure orderly economic growth, to prevent the emergence of monopolies, and to address certain costs of economic activity, such as pollution, poverty, and unemployment.
The Democratic Party wants to protect civil rights for women and minorities.
The Republican Party believes in laissez-faire economics and a minimal government role in the economy.
Ronald Reagan's vision of limited intervention in the economy with an expanded role for religion in society and opposition to immigration, affirmative action, and abortion, views that continue to represent the party today, were part of the coalition that Ronald Reagan built in the late 1970s.
The American parties are thought to be odd amalgams of conflicting ideas.
Liberalism is a political philosophy that pairs laissez-faire economics with liberal views on civil rights.
Conservatism, which maintains a respect for social and political order, prefers a stronger role for religions, a greater respect for social hierarchies, and government power in the economy.
These traditional views have been scrambled by the American parties.
Conservative views on civil rights and religion are part of the Republican Party's laissez-faire economics.
Liberal views on civil rights are tied to an expansive view of government in the economy.
The American parties have different ideas in response to evolving issues.
The coalition that President Franklin Delano Roosevelt formed in the 1930s consisted of Progressives Republicans, old-line Democrats, and urban political machines in northern and midwestern cities.
The political philosophy and public policies pursued under the New Deal were influenced by this peculiar coalition.
Roosevelt could not do things on some issues.
He couldn't push for civil rights for blacks without losing the support of southerners.
The history of the parties in the United States made a big difference in the meaning of Democratic liberalism. | https://knowt.io/note/cb77bc3e-7d6b-4835-9c71-26606c127beb/12----Part-1-Political-Parties | 21 |
This OpenLearn course provides a sample of level 2 study in Environment & Development
After studying this course, you should be able to:
describe the principal differences between a eutrophic and an oligotrophic ecosystem
explain the mechanisms by which species diversity is reduced as a result of eutrophication (Questions 2.1 and 2.2)
contrast the anthropogenic sources that supply nitrogen and phosphorus to the wider environment, and describe how these sources can be controlled (Question 3.1)
describe how living organisms can be used as monitors of the trophic status of ecosystems (Question 4.1)
compare the advantages and disadvantages of three different methods for combating anthropogenic eutrophication (Question 4.2).
The levels of nutrients present determine the trophic state of a water body, where trophic means 'feeding'.
Give another example of the adjective trophic being used in a scientific context.
Trophic levels, as applied to a food chain.
The adjective eutrophe (literally 'well fed') was first used by the German botanist Weber in 1907, to describe the initially high nutrient conditions that occur in some types of ecosystem at the start of secondary succession. Scientists studying lakes at the beginning of the 20th century identified stages in plant community succession that appeared to be directly related to trophic state or nutrient status. They described a series of stages:
'oligotrophic — mesotrophic — eutrophic — hypertrophic'
where oligotrophic meant 'low in nutrients', mesotrophic 'with intermediate nutrient concentration', eutrophic 'high in nutrients' and hypertrophic 'very high in nutrients'. At the time, these definitions were derived from comparative estimates between water bodies with different nutrient status, judged according to their phytoplankton communities. Phytoplankton is a collective term for the free-floating photosynthetic organisms within the water column. It encompasses both algae (from the kingdom Protoctista) and photosynthetic members of the kingdom Bacteria. Thus an oligotrophic lake would have clear water with little phytoplankton, whereas a eutrophic lake would be more turbid and green from dense phytoplankton growth, and a mesotrophic lake would be intermediate between the two. Table 1.1 summarizes some of the general characteristics of oligotrophic and eutrophic lakes. A further definition, dystrophic, describes 'brown-water lakes', which have heavily stained water due to large amounts of organic matter usually leached from peat soils. The presence of these organic compounds can reduce the availability of nutrients to organisms, making the water body even less productive than an oligotrophic one.
| Characteristic | Oligotrophic | Eutrophic |
| --- | --- | --- |
| diversity of primary producers | high species diversity, with low population densities | low species diversity, with high population densities |
| light penetration into water column | high | low |
| plant nutrient availability | low | high |
| oxygen status of surface water | high | low |
| fish | salmonid fish (e.g. trout, char) often dominant | coarse fish (e.g. perch, roach, carp) often dominant |
Why is light penetration poor in eutrophic lakes?
The high density of phytoplankton absorbs light for photosynthesis and prevents it penetrating deeper into the water.
More recently, trophic bands have been defined in relation to levels of nutrients measured by chemical analysis. Table 1.2 shows trophic bands as defined in relation to concentrations of total phosphorus.
| Trophic band | Total phosphorus/mg l−1 |
The trophic state of water bodies and rivers varies depending on a number of factors, including position in the landscape and management of surrounding land. In general, upland areas are more likely to have nutrient-poor (oligotrophic) water, characterized by relatively fast-flowing rivers (Figure 1.1) and lakes that have clear water with limited higher plant communities.
By contrast, lowland waters in more fertile river catchments tend to be nutrient-rich (eutrophic), and lakes in lowland areas are more likely to be turbid with lush fringing vegetation. Lowland rivers have slower flow and are likely to be more nutrient rich as a result of soluble compounds having been washed into them. They are likely to have fringing vegetation and some floating and submerged aquatic plants (Figures 1.2 and 1.3). In aquatic systems, the term macrophyte is used to describe any large plant (macro, large; phyte, plant). The term is used to distinguish angiosperms (whether emergent, floating or submerged) from small algae such as diatoms (which are strictly not plants at all, but are often lumped together with plants when considering the productivity of ecosystems).
What is the process by which nutrient elements are lost from the soil profile by the action of excess rainfall draining through it, which may eventually deliver them to a surface water body?
Leaching.
The term 'eutrophication' came into common usage from the 1940s onwards, when it was realized that, over a period of years, plant nutrients derived from industrial activity and agriculture had caused changes in water quality and the biological character of water bodies. In England and Wales, eutrophication has been a particular concern since the late 1980s, when public awareness of the problem was heightened by widespread toxic blue-green bacterial blooms (commonly, but incorrectly, referred to as algal blooms) in standing and slow-flowing freshwaters. Figure 1.4 shows blue-green bacteria (cyanobacteria) growing at the margins of a lake. Cyanobacteria are not typical bacteria, not only because some of them are photosynthetic, but also because some of them can be multicellular, forming long chains of cells. Nonetheless, cyanobacteria clearly belong to the kingdom Bacteria because of their internal cellular structure.
Why are cyanobacteria so productive in eutrophic water bodies (Figure 1.4) compared with oligotrophic ones?
The ready availability of nutrients allows rapid growth. In oligotrophic water the rate of growth is limited by the nutrient supply, but in eutrophic water it is often only the availability of light which regulates primary production.
A wide range of ecosystems has been studied in terms of their species diversity and the availability of resources. Each produces an individual relationship between these two variables, but a common pattern emerges from most of them, especially when plant diversity is being considered. This pattern has been named the humped-back relationship and suggests diversity is greatest at intermediate levels of productivity in many systems (Figure 1.5).
How does species diversity differ from species richness?
Species diversity includes a measure of how evenly spread the biomass is between species (equitability) rather than a simple count of the species present.
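To make the distinction concrete, here is a minimal sketch (in Python, which is not part of this course) computing a simple species count alongside the Shannon diversity index, one commonly used measure that weights species by how evenly abundance is spread. The index choice and the community data are illustrative assumptions, not material from the course.

```python
import math

def species_richness(counts):
    """Number of species present (a simple count)."""
    return sum(1 for n in counts if n > 0)

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln p_i), where p_i is the
    proportion of total abundance (or biomass) contributed by species i."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

# Two hypothetical communities with identical richness but different equitability.
even_community = [25, 25, 25, 25]    # biomass spread evenly across 4 species
uneven_community = [97, 1, 1, 1]     # 4 species, but one strongly dominant

print(species_richness(even_community), round(shannon_diversity(even_community), 2))      # 4 1.39
print(species_richness(uneven_community), round(shannon_diversity(uneven_community), 2))  # 4 0.17
```

Both communities contain four species, but the evenly spread one scores much higher on the diversity index, which is exactly the distinction between diversity and simple richness.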
An explanation for this relationship is that at very low resource availability, and hence ecosystem productivity, only a limited number of species are suitably adapted to survive. As the limiting resource becomes more readily available, then more species are able to grow. However, once resources are readily available, then the more competitive species within a community are able to dominate it and exclude less vigorous species.
In most ecosystems it is the availability of mineral nutrients (especially nitrogen and phosphorus) that limits productivity. In eutrophic environments these nutrients are readily available by definition, so species diversity can be expected to be lower than in a more mesotrophic situation. It is for this reason that eutrophication is regarded as a threat to biodiversity. Eutrophication of the environment by human-mediated processes can have far reaching effects, because the nutrients released are often quite mobile. Together with habitat destruction, it probably represents one of the greatest threats to the sustainability of biodiversity over most of the Earth.
Eutrophication of habitat can occur without human interference. Nutrient enrichment may affect habitats of any initial trophic state, causing distinctive changes to plant and animal communities. The process of primary succession is normally associated with a gradual eutrophication of a site as nutrients are acquired and stored by vegetation both as living tissue and organic matter in the soil.
There is a long-standing theory that most water bodies go through a gradual process of nutrient enrichment as they age: a process referred to as natural eutrophication. All lakes, ponds and reservoirs have a limited lifespan, varying from a few years for shallow water bodies to millions of years for deep crater lakes created by movements of the Earth's crust. They fill in gradually with sediment and eventually become shallow enough for plants rooted in the bed sediment to dominate, at which point they develop into a closed swamp or fen and are eventually colonized by terrestrial vegetation (Figures 1.6 and 1.7).
Nutrient enrichment occurs through addition of sediment, rainfall and the decay of resident animals and plants and their excreta. Starting from an oligotrophic state with low productivity, a typical temperate lake increases in productivity fairly quickly as nutrients accumulate, before reaching a steady state of eutrophy which might last for a very long time (perhaps thousands of years). However, it is possible for the nutrient status of a water body to fluctuate over time and for trophic state to alter accordingly. Study of sediments in an ancient lake in Japan, Lake Biwa (believed to be around four million years old) suggests that it has passed through two oligotrophic phases in the last half million years, interspersed with two mesotrophic phases and one eutrophic phase. Evidence such as this has led to the suggestion that the nutrient status of lakes reflects contemporary nutrient supply, and can increase or decrease in response to this. The processes by which nutrients are washed downstream or locked away in sediments help to ensure that reversal of natural eutrophication can occur.
Rivers vary in trophic state between source and sea, and generally become increasingly eutrophic as they approach sea-level.
While eutrophication does occur independently of human activity, increasingly it is caused, or amplified, by human inputs. Human activities are causing pollution of water bodies and soils to occur to an unprecedented degree, resulting in an array of symptomatic changes in water quality and in species and communities of associated organisms. In 1848 W. Gardiner produced a flora of Forfarshire, in which he described the plants growing in Balgavies Loch. He talked of 'potamogetons [pondweeds] flourishing at a great depth amid the transparent waters, animated by numerous members of the insect and finny races'. These 'present a delightful spectacle, and the long stems of the white and yellow water lilies may be traced from their floating flowers to the root'. By 1980, the same loch had very low transparency and dense growths of planktonic algae throughout the summer. The submerged plants grew no deeper than 2 m, and in the 1970s included just three species of Potamogeton, where previously there were 17.
For any ecosystem, whether aquatic or terrestrial, nutrient status plays a major part in determining the range of organisms likely to occur. Characteristic assemblages of plant and associated animal species are found in water with different trophic states. Table 1.3 shows some of the aquatic macrophyte species associated with different concentrations of phosphorus in Britain.
| Phosphorus present as soluble reactive phosphorus (SRP)*/mg P l−1 | Plant species (see Figure 1.8 for illustrations) |
| --- | --- |
| <0.1 | bog pondweed, Potamogeton polygonifolius; river water-crowfoot, Ranunculus fluitans |
| 0.1-0.4 | fennel-leaved pondweed, Potamogeton pectinatus |
| 0.4-1.0 | yellow water-lily, Nuphar lutea; arrowhead, Sagittaria sagittifolia |
| >1.0 | spiked water-milfoil, Myriophyllum spicatum |
What impression would you gain from an observation that a population of river water-crowfoot in a particular stretch of river had been largely replaced by fennel-leaved pondweed over a three-year period?
The phosphorus concentration of the water may have increased.
Figure 1.9 illustrates the relationship between levels of total phosphorus in standing water and the nutrient status of lakes. Above a level of 0.1 mg phosphorus per litre, biodiversity often declines. Using the trophic bands defined in Table 1.2, this is the concentration at which lakes are considered to become hypertrophic. This is way below the standard of 50 mg l−1 set as the acceptable limit for phosphorus in drinking water. Nutrient loadings this high are generally caused by human activities. Extremely high levels of eutrophication are often associated with other forms of pollution, such as the release of toxic heavy metals, resulting in ecosystems that may no longer support life (Figure 1.10).
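As a rough illustration of how such trophic bands can be applied, the sketch below classifies a lake from its total phosphorus concentration. Only the hypertrophic boundary (0.1 mg P l−1) is stated in the text; the lower boundaries used here are assumed, illustrative values in the spirit of commonly used lake classification schemes, not figures taken from Table 1.2.

```python
def trophic_band(total_p):
    """Classify a lake from total phosphorus (mg P per litre).

    The 0.1 mg P per litre hypertrophic boundary comes from the text;
    the other thresholds are illustrative assumptions only.
    """
    if total_p >= 0.1:       # stated in the text as the hypertrophic boundary
        return "hypertrophic"
    elif total_p >= 0.035:   # assumed boundary
        return "eutrophic"
    elif total_p >= 0.01:    # assumed boundary
        return "mesotrophic"
    else:
        return "oligotrophic"

print(trophic_band(0.15))   # hypertrophic
print(trophic_band(0.05))   # eutrophic (under the assumed boundaries)
print(trophic_band(0.005))  # oligotrophic
```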
For lakes with no written historical records, the diatom record of sediments can be used to study earlier periods of natural change in water quality, and to provide a baseline against which to evaluate trends in artificial or human-induced eutrophication. Diatoms are microscopic photosynthetic organisms (algae of the kingdom Protoctista), which live either free-floating in lakes or attached to the surface of rocks and aquatic vegetation. It is well established that some species of diatom can tolerate oligotrophic conditions whereas others flourish only in more eutrophic waters. When they die, their tiny (< 1 mm) silica capsules (frustules), which can be identified to species level, sink to the bed and may be preserved for thousands of years. A historical record of which species have lived within a water body can therefore be constructed from an analysis of a core sample taken from its underlying sediment.
Studies of diatom remains have demonstrated that current levels of eutrophication far exceed those found historically. In the English Lake District, productivity and sediment input increased in some lakes when vegetation was cleared by Neolithic humans around 5000 years ago, and again when widespread deforestation occurred 2000 years ago. However the greatest increases in productivity, sediment levels and levels of carbon, nitrogen and phosphorus, have occurred since 1930. Figure 1.11 shows the general pattern of changes in productivity in Cumbrian lakes through history as the type and intensity of human activities has changed.
In the Norfolk Broads, the waters of the River Ant had a diverse macrophyte flora during the 19th century. The submerged species known as water soldier (Stratiotes aloides, Figure 1.12) was common, but by 1968 the only macrophytes remaining were those with permanently floating leaves, such as water-lilies. During that period, throughout the Broads, there was a general trend away from clear-water habitats, typified by, for example, the diminutive angiosperm known as the holly-leaved naiad (Najas marina), towards habitats containing more productive species, such as pondweeds (Potamogeton spp.) and hornworts (Ceratophyllum spp.). In some cases, they eventually became eutrophic habitats with turbid water, typified by free-floating green algae and cyanobacteria, with very few macrophytes at all. For example, hornwort (Ceratophyllum demersum) was almost choking Alderfen Broad in 1963, but had almost disappeared by 1968 to be replaced eventually by algal blooms in the 1990s.
Sediment cores from the River Ant and neighbouring broads suggest that observed changes in plant community composition were linked to rising levels of total phosphorus: mean levels in the area rose dramatically between 1900 and 1975 (Figure 1.13), but have since fallen as a result of actions taken to remove phosphorus from the system.
Using the trophic bands in Table 1.2, describe the change in the River Ant broads between 1800 and 1975.
In 1800 the water was at the upper end of the oligotrophic range; it had moved through the mesotrophic range to become eutrophic by 1900, and by 1940 would be classed as hypertrophic. Between 1940 and 1975 there was a further threefold increase in the concentration of total phosphorus.
Eutrophication has damaged a large number of sites of special scientific interest (SSSIs) designated in the UK under the Wildlife and Countryside Act of 1981: English Nature has identified a total of 90 lake SSSIs and 12 river SSSIs that have been adversely affected. Artificial eutrophication in rivers is even more widespread than in lakes and reservoirs. Human activities worldwide have caused the nitrogen and phosphorus content of many rivers to double and, in some countries, local increases of up to 50 times have been recorded.
Eutrophication has also become a problem for terrestrial wildlife. Deposition of atmospheric nitrogen and the use of nitrogen-rich and phosphorus-rich fertilizers in agriculture has resulted in nutrient enrichment of soils and has caused associated alteration of terrestrial plant and animal communities.
Some of the effects of large-scale eutrophication have adverse consequences for people, and efforts to manage or reduce eutrophication in different countries now cost substantial sums of money. Removing nitrates from water supplies in England and Wales cost £20 million in 1995. Higher frequency of algal blooms increases the costs of filtration for domestic water supply and may cause detectable tastes and odours due to the secretion of organic compounds. If the bloom is large, these compounds can accumulate to concentrations that are toxic to mammals and sometimes fish. Furthermore, the high productivity of the blooms means that although oxygen is released by photosynthesis during the day, the effect of billions of cells respiring overnight can deplete the water of oxygen, resulting in fish dying through suffocation even if they tolerate the toxins.
Are fish most at risk from suffocation in warm or cool water?
Warm water, because oxygen is less soluble at warmer temperatures and is therefore more rapidly depleted by respiring organisms, especially as respiration rate also increases with temperature.
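The approximate saturation values below (rounded textbook figures for fresh water at sea-level pressure, not taken from this course) illustrate the point numerically.

```python
# Approximate dissolved-oxygen saturation of fresh water at sea-level pressure.
# These are rounded textbook values used purely for illustration.
approx_o2_saturation = {0: 14.6, 10: 11.3, 20: 9.1, 30: 7.6}  # °C -> mg O2 per litre

for temp_c, o2_mg_per_l in approx_o2_saturation.items():
    print(f"{temp_c:>2} °C: roughly {o2_mg_per_l} mg O2 per litre at saturation")
```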
Another problem caused to the water industry by algal blooms is the production of large quantities of fine organic detritus, which, when collected within waterworks' filters, may support clogging communities of aquatic organisms such as nematode worms, sponges and various insects. These may subsequently find their way into water distribution pipes and on occasion appear in tap water!
A number of biological changes may occur as a result of eutrophication. Some of these are direct (e.g. stimulation of algal growth in water bodies), while others are indirect (e.g. changes in fish community composition due to reduced oxygen concentrations). This section summarizes some of the typical changes observed in aquatic, marine and terrestrial ecosystems following eutrophication.
Some typical changes observed in lakes following artificial eutrophication are summarized in Table 2.1. Similar characteristic changes are observed in other freshwater systems.
• Turbidity increases, reducing the amount of light reaching submerged plants.
• Rate of sedimentation increases, shortening the lifespan of open water bodies such as lakes.
• Primary productivity usually becomes much higher than in unpolluted water and may be manifest as extensive algal or bacterial blooms.
• Dissolved oxygen in water decreases, as organisms decomposing the increased biomass consume oxygen.
• Diversity of primary producers tends to decrease and the dominant species change. Initially the number of species of green algae increases, causing a temporary increase in diversity of primary producers. However, as eutrophication proceeds, blue-green bacteria become dominant, displacing many algal species. Similarly some macrophytes (e.g. bulrushes) respond well initially, but due to increased turbidity and anoxia (reduced oxygen) they decline in diversity as eutrophication proceeds.
• Fish populations are adversely affected by reduced oxygen availability, and the fish community becomes dominated by surface-dwelling coarse fish, such as pike (Esox lucius, see Figure 2.1) and perch (Perca fluviatilis).
• Zooplankton (e.g. Daphnia spp.), which eat phytoplankton, are disadvantaged due to the loss of submerged macrophytes, which provide their cover, thereby exposing them to predation.
• Increased abundance of competitive macrophytes (e.g. bulrushes) may impede water flow, increasing rates of silt deposition.
• Drinking water quality may decline. Water may be difficult to treat for human consumption, for example due to blockage of filtering systems. Water may have unacceptable taste or odour due to the secretion of organic compounds by microbes.
• Water may cause human health problems, due to toxins secreted by the abundant microbes, causing symptoms that range from skin irritations to pneumonia.
In oligotrophic systems, even quite small increases in nutrient load can have relatively large impacts on plant and animal communities.
Plant species differ in their ability to compete as nutrient availability increases. Some floating and submerged macrophyte species are restricted to nutrient-poor waters, while others are typical of nutrient-rich sites (see Table 2.2). Figure 2.2 shows turbid water in a polluted drainage ditch associated with localized growth of algae. There are no aquatic plants present.
| Trophic state | Associated macrophyte species |
| --- | --- |
| oligotrophic | alternate water-milfoil (Myriophyllum alterniflorum); bog pondweed (Potamogeton polygonifolius) |
| oligo-mesotrophic | bladderwort (Utricularia vulgaris) |
| eutrophic | hairlike pondweed (Potamogeton trichoides) |
| tending towards hypertrophic | spiked water-milfoil (Myriophyllum spicatum); fennel-leaved pondweed (Potamogeton pectinatus) |
In rivers, the presence of plant species such as the yellow water-lily (Nuphar lutea) and the arrowhead (Sagittaria sagittifolia, Figure 2.3) are likely to indicate eutrophic conditions. In some rivers, the fennel-leaved pondweed (Potamogeton pectinatus) is tolerant of both sewage and industrial pollution.
Whereas some species can occur in waters with quite a wide range of nutrient levels, others are largely restricted to specific trophic bands and are unable to survive if nutrient levels alter significantly from those to which they are adapted. In 1989, Michael Jeffries derived ranges of tolerance for a number of macrophyte species by studying literature on their occurrence and distribution in relation to different aspects of water quality. He also reviewed results of scientific studies reported in the literature to determine what concentrations of nitrate, ammonia, phosphorus, suspended solids and biological oxygen demand (BOD) appeared to be associated with severe or total loss of macrophyte species due to eutrophication (see Table 2.3). Research has suggested that changes to certain macrophyte communities can occur at soluble reactive phosphorus concentrations as low as 20 μg l−1 (0.02 mg l−1). Soluble reactive phosphorus (SRP) is the term commonly used to describe phosphorus that is readily available for uptake by organisms. It is used in contrast to measures of total phosphorus, which include forms of the element that are bound to sediment particles or locked up in large organic molecules. These forms are unavailable for immediate uptake, but they may become available over time.
Water samples from two lowland rivers, A and B, are found to contain the following concentrations of plant nutrients.
|River A||River B|
By reference to Table 2.3, what conclusions can you draw about the probable diversity of aquatic macrophytes in each of the rivers?
| Condition | SRP/mg P l−1 | Nitrate/mg N l−1 | Ammonia/mg N l−1 | Suspended solids | BOD |
| --- | --- | --- | --- | --- | --- |
| degraded (partial loss of species found under 'natural' conditions) | 0.1-0.2 | 3.0-10 | 0.2-5.0 | 30-100 | 2.0-6.0 |
| severe loss of species | >0.2 | >10 | >5.0 | >100 | >6.0 |
BOD, biological oxygen demand; SRP, soluble reactive phosphorus.
River A has a soluble reactive phosphorus concentration in the range 0.1-0.2 mg l−1, which corresponds to the 'degraded' category in Table 2.3. This suggests that the diversity of macrophytes would be less than in the pristine natural state, due to a limited eutrophication effect. Neither form of nitrogen is present at concentrations above the natural range, so primary productivity may become limited by nitrogen rather than phosphorus, limiting the impact of the elevated phosphorus concentration.
River B has a similar concentration of SRP to river A, which would again place it in the 'degraded' category, but a much higher concentration of nitrogen in both its forms, especially nitrate at 12.1 mg l−1, taking it into the 'severe loss of species' category. The elevated availability of both P and N would boost primary production in the watercourse, favouring algal communities and leading to a decline in macrophyte populations and diversity. The more competitive macrophytes may benefit from the increased nutrient availability, but their increased growth would further exclude less competitive species, resulting in lower diversity.
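A minimal sketch of the same reasoning, applying the Table 2.3 thresholds in Python; the sample values passed in at the end are assumptions chosen to mimic rivers A and B (whose exact figures are not reproduced above), not data from the course.

```python
def macrophyte_condition(srp, nitrate, ammonia=None, suspended_solids=None, bod=None):
    """Classify expected macrophyte loss using the Table 2.3 bands.

    srp and nitrate are required (mg P l-1 and mg N l-1); the other
    determinands are optional. The worst condition triggered by any
    supplied value is returned.
    """
    severe = [srp > 0.2, nitrate > 10,
              ammonia is not None and ammonia > 5.0,
              suspended_solids is not None and suspended_solids > 100,
              bod is not None and bod > 6.0]
    degraded = [srp >= 0.1, nitrate >= 3.0,
                ammonia is not None and ammonia >= 0.2,
                suspended_solids is not None and suspended_solids >= 30,
                bod is not None and bod >= 2.0]
    if any(severe):
        return "severe loss of species"
    if any(degraded):
        return "degraded (partial loss of species)"
    return "within the natural range"

# Hypothetical values consistent with the worked answer above:
print(macrophyte_condition(srp=0.15, nitrate=2.0))   # River A-like -> degraded
print(macrophyte_condition(srp=0.15, nitrate=12.1))  # River B-like -> severe loss of species
```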
One of the symptoms of extreme eutrophication in shallow waters is often a substantial or complete loss of submerged plant communities and their replacement by dense phytoplankton communities (algal blooms). This results not only in the loss of characteristic plant species (macrophytes) but also in reduced habitat structure within the water body. Submerged plants provide refuges for invertebrate species against predation by fish. Some of these invertebrate species are phytoplankton-grazers and play an important part in balancing relative proportions of macrophytes and phytoplankton. Submerged macrophytes also stabilize sediments and the banks of slow-flowing rivers or lakes. Bodies of water used for recreation (boating for example) become more vulnerable to bank destabilization and erosion in the absence of well-developed plant communities, making artificial bank stabilization necessary (Figure 2.4). Submerged plants also have a role in the oxygenation of lower water layers and in the maintenance of aquatic pH.
Name three species of submerged macrophytes that are tolerant of eutrophic water.
Spiked water-milfoil (Myriophyllum spicatum), fennel-leaved pondweed (Potamogeton pectinatus) and arrowhead (Sagittaria sagittifolia).
The enrichment of water bodies by eutrophication may be followed by population explosions or 'blooms' of planktonic organisms.
Bursts of primary production in an aquatic ecosystem in response to an increased nutrient supply are commonly referred to as 'algal blooms'. Can you explain why this term is taxonomically incorrect?
The organisms responsible may be either algae or bacteria, or a mixture of the two. It is incorrect to refer to bacteria as algae as they belong to a completely different taxonomic kingdom.
'Algal blooms' are a well-publicized problem associated with increased nutrient levels in surface waters. The higher the concentration of nutrients, the greater the primary production that can be supported. Opportunistic species like some algae are able to respond quickly, showing rapid increases in biomass. Decomposition of these algae by aerobic bacteria depletes oxygen levels, often very quickly. This can deprive fish and other aquatic organisms of their oxygen supply and cause high levels of mortality, resulting in systems with low diversity. The odours associated with algal decay taint the water and may make drinking water unpalatable. Species of cyanobacteria that flourish in nutrient-rich waters can produce powerful toxins that are a health hazard to animals. Such problems are well documented for a number of famous lakes. The Zurichsee in Switzerland has been subject to seasonal blooms of the cyanobacterium Oscillatoria rubescens due to increased sewage discharge from new building developments on its shores. For lakes in Wisconsin, USA, 'nuisance' blooms of algae or bacteria occur whenever concentrations of phosphate and nitrate rise.
Increased productivity tends to increase rates of deoxygenation in the surface layer of lakes. Although phytoplankton release oxygen to the water as a byproduct of photosynthesis during the day, water has a limited ability to store oxygen and much of it bubbles off as oxygen gas. At night, the phytoplankton themselves, the zooplankton and the decomposer organisms living on dead organic matter are all respiring and consuming oxygen. The store of dissolved oxygen thus becomes depleted and diffusion of atmospheric oxygen into the water is very slow if the water is not moving.
What is the relative rate of oxygen diffusion in water compared with its rate in air?
Oxygen diffuses through water at approximately one ten-thousandth of its rate through air.
Still waters with high productivity are therefore likely to become anoxic.
Figures 2.5 and 2.6 give an example of the change in aquatic invertebrate species following eutrophication. In unpolluted water, mayfly larvae may be found. In polluted water, these species cannot survive due to reduced oxygen availability and are likely to be replaced by species, such as the bloodworm, which can tolerate lower oxygen concentrations.
Many species of coarse fish, such as roach (Rutilus rutilus, a cyprinid fish, Figure 2.7), can also tolerate low oxygen concentrations in the water, sometimes gulping air, and yields of fish may indeed increase due to the high net primary production (NPP) of the system. However these species are generally less desirable for commercial fishing than others such as salmon (Salmo salar, a salmonid fish, Figure 2.8), which depend on cool, well-oxygenated surface water. Populations of such species usually decline in waters that become eutrophic (Figure 2.9); they may be unable to live in a deoxygenated lake at all, resulting in fish kills (Figure 2.10). They may also be unable to migrate through deoxygenated waters to reach spawning grounds, resulting in longer-term population depressions.
Lake Victoria is one of the world's largest lakes and used to support diverse communities of species endemic to the lake (i.e. species that are found only there), but it now suffers from frequent fish kills caused by episodes of deoxygenation. In the 1960s deoxygenation was limited to certain areas of the lake, but it is now widespread. It is usually associated with at least a tenfold increase in the algal biomass and a fivefold increase in primary productivity.
When eutrophication reaches a stage where dense algal growth outcompetes marginal aquatic plants, even relatively tolerant fish species suffer from the consequent loss of vegetation structure, especially young fish (Figure 2.11). Spawning is reduced for fish species that attach their eggs to aquatic plants or their detritus, and fish that feed on large plant-eating invertebrates, such as snails and insect nymphs, suffer a reduced food supply.
In a southeast Asian village where cyprinid fish from the local pond are an important source of protein, eutrophication of the water by domestic sewage is seen as advantageous. Why?
The cyprinid fish are tolerant of deoxygenation, and the increased NPP boosts their food supply; therefore the yield of fish improves.
Why do you think nitrogen is becoming increasingly available to terrestrial ecosystems in many parts of the world?
Emissions of nitrogen oxides from burning fossil fuels and of ammonia from intensive agriculture result in nitrogen compounds being transported and deposited by atmospheric processes.
Some suggested global scenarios for the year 2100 identify nitrogen deposition (together with land use and climate change) as one of the most significant 'drivers' of biodiversity change in terrestrial ecosystems.
Atmospheric deposition of nitrogen, together with the deposition of phosphorus-rich sediments by floods, can alter competitive relationships between plant species within a terrestrial community. This can cause significant changes in community composition, as species differ in their relative responses to elevated nutrient levels. As is the case with aquatic vegetation, terrestrial species that are able to respond to extra nitrogen and phosphorus with elevated rates of photosynthesis will achieve higher rates of biomass production, and are likely to become increasingly dominant in the vegetation. Atmospheric deposition of nutrients can reduce, or even eliminate, populations of species that have become adapted to low nutrient conditions and are unable to respond to increased nutrient availability. Some vegetation communities of conservation interest are directly threatened by atmospheric pollution.
In Britain, rare bryophytes are found associated with snowbeds (Figure 2.12). Most of the late-lying snowbeds in Britain are in the Central Highlands of Scotland, which are also areas of very high deposition of nitrogenous air pollutants. Snow is a very efficient scavenger of atmospheric pollution and melting snowbeds release their pollution load at high concentrations in episodes known as 'acid flushes'. The flush of nitrogen is received by the underlying vegetation when it has been exposed following snowmelt. Concentrations of nutrients in the meltwater of Scottish snowbeds have already been shown to damage underlying bryophytes, including a rare species called Kiaeria starkei. Recovery from damage is slow, and sometimes plants show no signs of recovery even four weeks after exposure to polluted meltwater. Given the very short growing season, this persistent damage can greatly reduce the viability and survival of the plants. Tissue nitrogen concentrations in Kiaeria starkei have been shown to be up to 50% greater than those recorded in other upland bryophytes. This example emphasizes the potential threat of atmospheric pollution to snowbed species, and suggests that some mountain plant communities may receive much higher pollution loadings than was previously realized.
The deposition of atmospheric nitrogen can be enhanced at high altitude sites as a consequence of cloud droplet deposition on hills. Sampling of upland plant species at sites in northern Britain has shown marked increases in nitrogen concentration in leaves with increasing nitrogen deposition, which is, in turn, correlated with increasing altitude. The productivity of the species was also found to increase in line with the amount of nitrogen deposited. Plant species can therefore respond directly to elevated levels of nitrogen. In the longer term, the relative dominance of species is likely to alter depending on their ability to convert elevated levels of deposited nitrogen into biomass.
What will be the effect on species diversity of increasing biomass?
As biomass increases beyond an optimal value, species diversity will decline.
Atmospheric pollution can also affect plant-insect interactions. Unusual episodes of damage to heather moorland in Scotland have been caused by the winter moth (Operophtera brumata) in recent years. It has been suggested that this may be due to the effects of increased nitrogen supply on heather plants, including increased shoot growth and a decrease in the carbon : nitrogen ratio in plant tissues. Winter moth larvae have been shown to grow faster on nitrogen-treated heather plants, so it is possible that increased atmospheric deposition of nitrogen may have a role in winter moth outbreaks and the associated degradation of heather moorland in upland Britain.
Although uplands are more susceptible to atmospheric deposition of nitrogen, the effects can be seen in lowland areas too. Nitrogen deposition and the consequent eutrophication of ecosystems is now regarded as one of the most important causes of decline in plant species in the Netherlands. Figure 2.13 shows how the number of grassland species of conservation interest in south Holland declines as the nitrogen load increases. The maximum percentage of species (approximately 95%) is possible at a nitrogen load of about 6 kg N ha−1 yr−1. At loads higher than 10 kg N ha−1 yr−1 the number of species declines due to eutrophication effects, and below 5 kg ha−1 yr−1nitrogen may be too limiting for a few species.
A significant proportion of important nature conservation sites in Britain are subject to nitrogen and/or sulfur deposition rates that may disturb their biological communities. Lowland heath ecosystems, for example, have a high profile for conservation action in Britain. They typically have low soil nutrient levels and a vegetation characterized by heather (Calluna vulgaris). Under elevated atmospheric deposition of nitrogen, they tend to be invaded by taller species, including birch (Betula spp., Figure 2.14), bracken (Pteridium aquilinum) and the exotic invader, rhododendron (Rhododendron ponticum).
A large number of SSSIs in the UK, designated as such on account of their terrestrial plant communities, are considered to have been damaged by eutrophication. This has been identified as a factor in the decline of some important UK habitats, including some identified for priority action under the UK's Biodiversity Action Plan (BAP). Wet woodlands, for example, occur on poorly drained soils, usually with alder, birch and willow as the predominant tree species, but sometimes oak, ash or pine occur in slightly drier locations. These woodlands are found on floodplains, usually as a successional habitat on fens, mires and bogs, along streams and in peaty hollows. They provide an important habitat for a variety of species, including the otter (Lutra lutra, Figure 2.15), some very rare beetles and craneflies. They also provide damp microclimates, which are particularly suitable for bryophytes, and have some unusual habitat features not commonly found elsewhere, such as log jams in streams which support a rare fly, Lipsothrix nigristigma. Wet woodlands occur on a range of soil types, including relatively nutrient-rich mineral soils as well as acid, nutrient-poor ones. Nevertheless, many have been adversely affected by eutrophication, resulting in altered ground flora composition and changes in the composition of invertebrate communities.
Nutrient enrichment can also affect habitats found in drier sites. Eutrophication caused by runoff from adjacent agricultural land has been identified as a cause of altered ground flora composition in upland mixed ash woods for example. These woods are notable for bright displays of flowers such as bluebell (Hyacinthoides non-scripta), primrose (Primula vulgaris) and wild garlic (Allium ursinum, Figure 2.16a). They also support some very rare woodland flowers which are largely restricted to upland ash woods, such as dark red helleborine (Epipactis atrorubens, Figure 2.16b) and Jacob's ladder (Polemonium caeruleum).
Other terrestrial UK BAP habitats that may be adversely affected by nutrient enrichment from agricultural fertilizers or atmospheric deposition include lowland wood pasture, lowland calcareous grassland, upland hay meadows and lowland meadows; again the result can be altered plant species composition.
Coastal marshes and wetlands in many parts of the world have been affected by invasion of 'weed' or 'alien' species. Eutrophication can accelerate invasion of aggressive, competitive species at the expense of slower growing native species. In the USA, many coastal marshes have been invaded by the common reed (Phragmites australis, Figure 2.17). Phragmites is a fierce competitor and can outcompete and entirely displace native marsh plant communities, causing local extinction of plants and the insects and birds that feed on them. Phragmites can spread by underground rhizomes and can rapidly colonize large areas. However, it is the target of conservation effort in some areas, including Britain, because the reedbeds it produces provide an ideal habitat for rare bird species such as the bittern (Botaurus stellaris). But its spread is not always beneficial for nature conservation, as it often results in the drying of marsh soils, making them less suitable for typical wetland species and more suitable for terrestrial species. This is because Phragmites is very productive and can cause ground levels to rise due to deposition of litter and the entrapment of sediment. Thus eutrophication can also play an indirect part in the loss of wetland habitats.
In the marine environment, nutrient enrichment is suspected when surface phytoplankton blooms are seen to occur more frequently and for longer periods. Some species of phytoplankton release toxic compounds and can cause mass mortality of other marine life in the vicinity of the bloom. Changes in the relative abundance of phytoplankton species may also occur, with knock-on effects throughout the food web, as many zooplankton grazers have distinct feeding preferences. In sheltered estuarine areas, high nutrient levels appear to favour the growth of green macroalgae ('seaweeds') belonging to such genera as Enteromorpha and Ulva (Figure 2.18).
Nutrient runoff from the land is a major source of nutrients in estuarine habitats. Shallow-water estuaries are some of the most nutrient-rich ecosystems on Earth, due to coastal development and the effects of urbanization on nutrient runoff. Figure 2.19 shows some typical nitrogen pathways. Nitrogen loadings in rainfall are typically assimilated by plants or denitrified, but septic tanks tend to add nitrogen below the reach of plant roots, and if situated near the coast or rivers can lead to high concentrations entering coastal water. Freshwater plumes from estuaries can extend hundreds of kilometres offshore (Figure 2.20) and the nutrients within them have a marked effect on patterns of primary productivity. Localized effects of eutrophication can be dramatic. For example, increased nitrogen supplies lead to the replacement of seagrass beds (e.g. Zostera marina) by free-floating rafts of ephemeral seaweeds such as Ulva and Cladophora, whose detritus may cover the bottom in a dense layer up to 50 cm thick.
Estuarine waters enriched by nitrogen from fertilizers and sewage have been responsible for the decline of a number of estuarine invertebrate species, often by causing oxygen depletion of bottom water. Intertidal oyster beds have declined considerably as a result of both over-harvesting and reduced water quality. Harvesting tends to remove oysters selectively from shallow-water habitats, reducing the height of oyster beds and making remaining oysters more vulnerable to the damaging effects of eutrophication. In estuaries, elevated rates of microbial respiration deplete oxygen, and periods of anoxia occur more frequently, especially in summer when water temperatures are high and there is slow water circulation. Oysters in deeper water are more likely to be exposed to anoxic conditions, being further removed from atmospheric oxygen inputs, and to die as a result.
Seagrass distributions are very sensitive to variation in light. Seagrasses, like any other plant, cannot survive in the long term if their rate of photosynthesis is so limited by light that it cannot match their rate of respiration. Light transmission is a function of water column turbidity (cloudiness), which in turn is a combination of the abundance of planktonic organisms and the concentration of suspended sediment. Seagrass distributions are therefore strongly affected by eutrophication and effects on water clarity. In Chesapeake Bay, eastern USA, seagrass beds historically occurred at depths of over 10 m. Today they are restricted to a depth of less than 1 m. Runoff from organic fertilizers has increased plankton production in the water column, limiting light transmission. In addition, large areas of oyster bed have been lost (also partly as a result of eutrophication) with an associated reduction in natural filtration of bay water. Oysters, like many shellfish, clean the water while filtering out microbes, which they then consume. Seagrass beds now cover less than 10% of the area they covered a century ago (Figure 2.21). Similar patterns have occurred elsewhere.
Seagrass beds play an important role in reducing the turbidity of coastal waters by reducing the quantity of sediment suspended in the water. Seagrasses slow down currents near the bottom, which increases the deposition of small sediment particles and decreases their erosion and resuspension. Seagrass roots also play an important part in stabilizing sediments and limiting disturbance caused by burrowing deposit-feeders.
Many species depend on seagrass beds for food or nursery grounds. Seagrass increases the structural complexity of habitat near the sea floor, and provides a greater surface area for epiphytic organisms. Seagrass leaves support rich communities of organisms on their surface, including microalgae, stationary invertebrates (such as sponges and barnacles) and grazers (such as limpets and whelks). The plants also provide refuges from predatory fishes and crabs. Predation on seagrass-associated prey such as the grass shrimp is much higher outside seagrass beds than within them, where the shrimps can hide and predator foraging is inhibited by grass cover. Without seagrasses, soft-substrate communities on the sea bed are simpler, less heterogeneous and less diverse. By reducing the health of seagrasses, eutrophication contributes directly to biotic impoverishment (Figure 2.22).
Seagrass (Zostera marina) is a key species for maintaining the biodiversity of estuaries. Why is its abundance reduced following eutrophication of estuarine waters? Can you identify a mechanism in which the decline of seagrass promotes its further decline?
Seagrass needs sufficient light to photosynthesize effectively. An increase in nutrient levels leads to a greater abundance of phytoplankton in the water column, which increases the turbidity of the water and blocks out the light. As the seagrass beds recede, the exposed sediment may be re-suspended and further increase the turbidity of the water, thereby exacerbating the problem in a classic positive feedback response. Another possible positive feedback loop is the loss of habitat for filter-feeding animals, which lived in the shelter of seagrass beds and helped keep the water clear by consuming microbes.
Marsh plant primary production is generally nitrogen limited, so saltmarsh vegetation responds readily to the artificial eutrophication that is now so common in nearshore waters. Eutrophication causes marked changes in plant communities in saltmarshes, just as it does in freshwater aquatic and terrestrial systems. Biomass production increases markedly as levels of eutrophication increase. Increases in the nitrogen content of plants cause dramatic changes in populations of marsh plant consumers: insect herbivores tend to increase (Figure 2.23) and so do numbers of carnivorous insects. Thus, increasing the nitrogen supply to saltmarshes has a dramatic bottom-up effect on marsh food webs. Eutrophication can also alter the outcome of competition among marsh plants, by changing the factor limiting growth. At low levels of nitrogen, plants that exploit below-ground resources most effectively, such as the saltmarsh rush (Juncus gerardii) are competitively dominant, but at higher nutrient levels dominance switches to plants that are good above-ground competitors, such as the common cord grass (Spartina anglica, Figure 2.24). In other words, as nitrogen availability increases, competition for light becomes relatively more important.
Light availability, water availability, temperature and the supply of plant nutrients are the four most important factors determining NPP. Altered availability of nutrients affects the rate of primary production in all ecosystems, which in turn changes the biomass and the species composition of communities.
Which two elements most often limit NPP?
Phosphorus and nitrogen are the main limiting nutrients.
Compounds containing these elements are therefore the causal agents of eutrophication in both aquatic and terrestrial systems. Let us consider them in turn.
Phosphorus has a number of indispensable biochemical roles and is an essential element for growth in all organisms, being a component of nucleic acids such as DNA, which hold the code for life. However, phosphorus is a scarce element in the Earth's crust and natural mobilization of phosphorus from rocks is slow. Its compounds are relatively insoluble, there is no reservoir of gaseous phosphorus compounds available in the atmosphere (as there is for carbon and nitrogen), and phosphorus is also readily and rapidly transformed into insoluble forms that are unavailable to plants. This tends to make phosphorus generally unavailable for plant growth. In natural systems, phosphorus is more likely to be the growth-limiting nutrient than is nitrogen, which has a relatively rapid global cycle and whose compounds tend to be highly soluble.
Human activities, notably the mining of phosphate-rich rocks and their chemical transformation into fertilizer, have increased rates of mobilization of phosphorus enormously. A total of about 12 × 10¹² g of phosphorus (12 million tonnes) is mined from rock deposits each year, roughly six times the estimated rate at which phosphorus is locked up in the ocean sediments from which the rocks are formed. The global phosphorus cycle is therefore being unbalanced by human activities, with soils and water bodies becoming increasingly phosphorus-rich. Eutrophication produces changes in the concentrations of phosphorus in all compartments of the phosphorus cycle.
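The scale of this imbalance can be illustrated with a minimal calculation. The sketch below assumes only the two figures quoted above: the annual mining rate, and a burial rate one-sixth of it.

```python
# Rough illustration of the global phosphorus imbalance described above.
# Both figures come from the text: mining runs at six times the rate at which
# phosphorus is locked up in ocean sediments.

mined_P = 12e12          # g P mined from rock deposits each year
buried_P = mined_P / 6   # g P locked up in ocean sediments each year

surplus = mined_P - buried_P
print(f"Annual surplus of mobilized phosphorus: {surplus:.1e} g "
      f"(about {surplus / 1e12:.0f} million tonnes)")
```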
The mechanisms of eutrophication caused by phosphorus vary for terrestrial and aquatic systems. In soils, some phosphorus comes out of solution to form insoluble iron and aluminium compounds, which are then immobilized until the soil itself is moved by erosion. Eroded soil entering watercourses may release its phosphorus, especially under anoxic conditions.
What changes occur to iron(III) compounds (Fe3+) as a result of bacterial respiration in anoxic environments, and how is their solubility affected?
Bacterial respiration can reduce Fe3+ to Fe2+, increasing the solubility of iron salts, including phosphates of iron.
Once in rivers, retention times for phosphorus may be short, as it is carried downstream either in soluble form or as suspended sediment. Algal blooms are therefore less likely to occur in moving waters than in still systems. In the latter, there is more time for the phosphorus in enriched sediments to be released in an 'available' form, increasing the concentration of soluble reactive phosphorus (SRP), and thus affecting primary production.
Phosphorus is generally acknowledged to be the nutrient most likely to limit phytoplankton biomass, and therefore also the one most likely to cause phytoplankton blooms if levels increase. However, there do appear to be some systems that are 'naturally eutrophic', with high phosphorus loadings. In these systems, nitrogen concentrations may then become limiting and play a dominant role in determining phytoplankton biomass.
Nearly 80% of the atmosphere is nitrogen. Despite the huge supply potentially available, nitrogen gas is directly available as a nutrient to only a few organisms.
Why cannot the majority of organisms utilize gaseous nitrogen?
Nitrogen gas is very unreactive and only a limited number of bacterial species have evolved an enzyme capable of cleaving the molecule.
Once 'fixed' by these bacteria into an organic form, the nitrogen enters the active part of the nitrogen cycle. As the bacteria or the tissues of their mutualistic hosts die, the nitrogen is released in an available form such as nitrate or ammonium ions - a result of the decay process. Alternatively, the high temperatures generated during electrical storms can 'fix' atmospheric nitrogen as nitric oxide (NO). Further oxidation to nitric acid within the atmosphere, and scavenging by rainfall, provides an additional natural source of nitrate to terrestrial ecosystems. Nitrates and ammonium compounds are very soluble and are hence readily transported into waterways.
Nitrogen is only likely to become the main growth-limiting nutrient in aquatic systems where rocks are particularly phosphate-rich or where artificial phosphate enrichment has occurred. However, nitrogen is more likely to be the limiting nutrient in terrestrial ecosystems, where soils can typically retain phosphorus while nitrogen is leached away.
In addition to the natural sources of nutrients referred to above, nitrogen and phosphorus enter the environment from a number of anthropogenic sources. These are considered below.
Pollution of the atmosphere has increased rates of nitrogen deposition considerably. Nitrogen has long been recognized as the most commonly limiting nutrient for terrestrial plant production throughout the world, but air pollution has now created a 'chemical climate' that often results in an excess supply of nitrogen through atmospheric deposition.
The main anthropogenic source of this enhanced nitrogen deposition is the NOx (mainly NO) released during the combustion of fossil fuels, principally in vehicles and power plants. Like the nitrogen fixed naturally by lightning, this fixed nitrogen returns to the ground as nitrate dissolved in rainwater.
Patterns and rates of deposition vary regionally, and between urban and rural areas. Concentrations and fluxes of nitrogen oxides tend to decline with distance from cities: deposition of inorganic nitrogen has been found to be twice as high at urban recording sites in New York City as at suburban or rural sites. Some natural ecosystems, particularly those near industrialized areas, now receive atmospheric nitrogen inputs that are an order of magnitude greater than those of pre-industrial times. Figure 3.2 shows intensive industrial land use adjacent to the River Tees and its estuary in Teesside, UK. The estuary is still important for wildlife, including seals and a variety of birds, but its quality has declined markedly due to atmospheric and water pollution. In the UK, atmospheric deposition can add up to 150 kg N ha−1 yr−1. For comparison, the amount thought to trigger changes in the composition of species-rich grassland is 20-30 kg N ha−1 yr−1, and a typical dose applied by farmers as inorganic fertilizer to an intensively managed grassland is 100 kg N ha−1 yr−1.
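To put these UK figures side by side, here is a small sketch using only the three numbers quoted in the paragraph above:

```python
# All values in kg N per hectare per year, as quoted above.
max_uk_deposition = 150        # upper end of atmospheric deposition in the UK
change_threshold = (20, 30)    # load thought to trigger change in species-rich grassland
typical_fertilizer_dose = 100  # typical inorganic fertilizer dose on intensive grassland

low, high = change_threshold
print(f"Maximum deposition is {max_uk_deposition / high:.1f}-{max_uk_deposition / low:.1f} "
      "times the threshold for change in species-rich grassland")
print(f"Maximum deposition is {max_uk_deposition / typical_fertilizer_dose:.1f} "
      "times a typical fertilizer dose")
```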
Domestic detergents are a major source of phosphorus in sewage effluents. Phosphates are used as a 'builder' in washing powders to enhance the efficiency of surfactants by removing calcium and magnesium to make the water 'softer'. In 1992, the UK used 845 600 tonnes of detergent of various types, all of which have different effects on the environment. Estimates of the relative contribution of domestic detergents to phosphorus build-up in Britain's watercourses vary from 20% to 60%. The UK's Royal Commission on Environmental Pollution (RCEP) reviewed the impacts of phosphate-based detergents on water quality in 1992, focusing on the effects on freshwater. The RCEP concluded that eutrophication was widespread over large parts of the country, and recommended a considerable investment in stripping phosphates from sewage as well as efforts to reduce phosphate use in soft-water areas. The main problem is that many of the ingredients of detergents are not removed by conventional sewage treatment and degrade only slowly.
Why did the RCEP recommend that phosphate use be reduced particularly in 'soft' water areas?
The reason for including phosphates in detergents is to soften the water, so in areas with naturally soft water they provide no benefit yet still cause pollution.
Other compounds added to detergents may also contribute to eutrophication. Silicates, for example, particularly if used as a partial replacement for phosphates in detergents, can lead to increased growth of diatoms. These algae require silicates to build their 'skeleton' and their growth can be limited by silicate availability. When silicates are readily available, diatoms characteristically have 'spring blooms' of rapid growth, and can smother the surfaces of submerged macrophytes, depriving them of light. A loss of submerged macrophytes is a problem because it results in the loss of habitat for organisms feeding on phytoplankton, and therefore the risk of blooms by other species is enhanced.
Runoff from intensively farmed land often contains high concentrations of inorganic fertilizer. Nutrients applied to farmland may spread to the wider environment by:
drainage water percolating through the soil, leaching soluble plant nutrients;
washing of excreta, applied to the land as fertilizer, into watercourses; and
the erosion of surface soils or the movement of fine soil particles into subsoil drainage systems.
Some water bodies have been monitored for long periods, and the impact of agricultural runoff can be demonstrated clearly. In the 50 years between 1904 and 1954, for example, in Loch Leven, Scotland, there were major changes in the species composition of the community of photosynthetic organisms. The species composition of the green alga community changed and the numbers of cyanobacteria rose considerably. Increasingly since then, large blooms of filamentous cyanobacteria have been produced in the loch. These changes have been linked with trends in the use of agricultural fertilizers and other agrochemicals.
In Europe, large quantities of slurry from intensively reared and housed livestock are spread on the fields (Figure 3.3). Animal excreta are very rich in both nitrogen and phosphorus and therefore their application to land can contribute to problems from polluted runoff. Land use policies have concentrated livestock production into purpose-built units, increasing the pollution risks associated with handling the resultant slurry or manures.
European agricultural policies that subsidize agriculture on the basis of productivity have also encouraged the use of fertilizers. Use of fertilizers has undergone a massive increase since 1950. In the USA, by 1975, total use of inorganic fertilizer had reached a level equivalent to about 40 kg per person per year. A recent European Environment Agency report estimated that the groundwater beneath more than 85% of Europe's farmland exceeds guideline levels for nitrogen concentration (25 mg l−1), with agricultural fertilizers being the main source of the problem. Pollution of surface waters also occurs on a large scale. A survey by the UK's Environment Agency in 1994 found that over 50% of the 314 water bodies surveyed in England and Wales had algal blooms caused by fertilizer runoff (Figure 3.4).
Patterns of fertilizer use do differ considerably between countries. In those with poorly developed economies, the costs of artificial fertilizers may be prohibitive. In hotter climates, irrigation may be used, resulting in higher nutrient runoff than for equivalent crops that are not irrigated. The high solubility of nitrate means that agriculture is a major contributor to nitrogen loadings in freshwater. Agriculture accounts for 71% of the mass flow of nitrogen in the River Great Ouse in the Midlands, UK, compared with only 6% for phosphorus.
What are the main sources of phosphorus and nitrogen that enrich rivers in a developed country?
Phosphorus comes primarily from domestic waste water, whilst nitrogen comes primarily from intensive agriculture.
Studies evaluating the effects of nutrient loading on receiving water bodies must take account of the range of land uses found within a catchment.
As shown in Table 3.1, phosphate exports increase considerably as forests are converted to agricultural land and as agricultural land is urbanized. Agricultural runoff is known to be a potential source of nutrients for eutrophication, but the degree of mechanization may also be important. In catchments where agriculture is heavily mechanized, higher levels of sedimentation are likely. Most sediments arise as a result of soil erosion, which is promoted by tilling the land intensively. This destroys the soil's natural structure as well as removing vegetation which helps to stabilize soil.
To cite just one example, high sediment input in the latter half of the 20th century has caused shrinkage of the area of open water in the Mogan Lake system near Ankara, Turkey. Undoubtedly, mechanization and intensification of agriculture have played their part, but so too has the drainage of adjacent wetlands. The drained wetlands no longer trapped sediments, and themselves became vulnerable to erosion. This further increased sediment loadings in the lake. Levels of phosphorus have also risen. Draining the wetlands exposed the organic matter in their soils to oxidation, 'mobilizing' the phosphorus that had accumulated there over many years. This was then carried into the lake in drainage water.
|Land use||Total phosphorus||Total nitrogen|
|losses from land to watercourses: urban||0.1||0.5|
(Table 3.1 also lists losses for other land uses and additions to land from atmospheric and other sources, but those rows are not legible in this extract.)
Sediments play a variable and complex role in nutrient cycling in most aquatic systems, and are a potential 'internal' source of pollutants. Release of phosphorus from lake sediment is a complex function of physical, biological and chemical processes and is not easy to predict for different systems. Nitrogen is not stored and released from sediments in the same way, so its turnover time within aquatic systems is quite rapid. Nitrogen concentrations tend to fall relatively quickly following a reduction in external nitrogen loading, whereas this is not true for phosphorus: the sediments can hold such a large reservoir of this nutrient that input and output rates may become decoupled.
In some shallow coastal areas, tidal mixing is the dominant nutrient regeneration process, as the sediments are regularly disturbed and redistributed by changing water currents, making nutrient exchange with the water much more rapid.
Why is a lake in a catchment dominated by arable agriculture much more prone to eutrophication than one in a forested catchment?
First the arable catchment is likely to be receiving much more nutrient input in the form of fertilizers. Second and equally importantly, the soil structure is much less stable under arable systems and therefore more likely to erode and carry nutrients to the lake as suspended sediment.
Direct effects of eutrophication occur when growth of organisms (usually the primary producers) is released from nutrient limitation. The resulting increased NPP becomes available for consumers, either as living biomass for herbivores or as detritus for detritivores. Associated indirect effects occur as eutrophication alters the food supply for other consumers. Changes in the amount, relative abundance, size or nutritional content of the food supply influence competitive relationships between consumers, and hence the relative success and survival of different species. Nutrient-induced changes in plant community composition and productivity can therefore result in associated changes in the competitive balance between herbivores, detritivores and predators. Consumers may also be affected by changes in environmental conditions caused indirectly by eutrophication, for example reduced oxygen concentrations caused by bacterial decay of biomass.
In freshwater aquatic systems, a major effect of eutrophication is the loss of the submerged macrophyte community (Section 2.1.1). Macrophytes are thought to disappear because they lose their energy supply in the form of sunlight penetrating the water. Following eutrophication, the sunlight is intercepted by the increased biomass of phytoplankton exploiting the high availability of nutrients. In principle, the submerged macrophytes could also benefit from increased nutrient availability, but they have no opportunity to do so because they are shaded by the free-floating microscopic organisms. Research in the Norfolk Broads has supported the view that the rapid replacement of diverse macrophyte communities by algal communities is attributable to light attenuation, caused by raised turbidity, but has also suggested that there may be more complex mechanisms operating, which must be understood if practical measures are to be undertaken to tackle eutrophication problems. There is evidence to suggest that either a plant-dominated state or an algal-dominated state can exist under high-nutrient conditions (Figure 3.5). Once either state becomes established, a number of mechanisms come into play which buffer the ecosystem against externally applied change. For example, a well-established submerged plant community may secrete substances that inhibit algal growth, and may provide refuges for animals that graze large quantities of algae. On the other hand, once an algal community becomes well established, especially early in the year, it can shade out the new growth of any aquatic plants on the bottom and compete with them for carbon dioxide in the water.
Research in the Norfolk Broads into possible trigger factors for switches from communities dominated by submerged macrophytes to those dominated by algae suggests that pesticides could play a role. Some herbivores are thought to be susceptible to pesticide leaching from surrounding arable land. Pesticide residues in sediments were found at concentrations high enough to cause at least sub-lethal effects, which could reduce the herbivore population for long enough to reduce algal consumption. This could help to explain the observation that most of the Norfolk Broads that have lost their plants are directly connected with main rivers draining intensive arable catchments, whereas those that have retained plant dominance are in catchments where livestock grazing predominates.
Clear relationships can be seen between human population density and total phosphorus and nitrate concentrations in watercourses (Figure 3.6). In 1968 the anthropogenic contribution amounted to some 10.8 g N per capita per day and 2.18 g P per capita per day. Outputs have continued to rise since then. Worldwide, human activities have intensified releases of phosphorus considerably. Increased soil erosion, agricultural runoff, recycling of crop residues and manures, discharges of domestic and industrial wastes and, above all, applications of inorganic fertilizers, are the major causes of this increase. Global food production is now highly dependent on the continuing use of supplementary phosphates, which account for 50-60% of total phosphorus supply.
Studies of nutrient runoff have shown a mixture of inputs into most river and lake catchments: both point source (such as sewage treatment works) and diffuse source (such as agriculture). Point sources are usually most important in the supply of phosphorus, whereas nitrogen is more likely to be derived from diffuse sources.
Using the data presented in Figure 1.13 and Table 2.3, comment on whether the remediation activities on the broads neighbouring the River Ant were likely to have resulted in a recovery of plant species diversity by 2000. Assume that 80% of the total phosphorus in the water is in the form of SRP.
Figure 1.13 shows that total phosphorus concentrations had fallen to 0.2 mg l−1 in 2000, compared with their peak concentration of 0.36 mg l−1 in 1975. In terms of SRP (the form of phosphorus that affects ecosystems most directly), we assume levels fell from 0.29 mg l−1 to 0.16 mg l−1. Comparing these figures with those in Table 2.3, we see that the SRP concentration put the system in the 'severe loss of species' category in 1975, but only the 'degraded' category in 2000. This suggests that some recovery of macrophyte species would be possible. Actual re-colonization may be a slow process, however. Ecosystems can take many years to return to equilibrium after a perturbation, and if an algal-dominated state has become established, it will inhibit macrophyte recovery.
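The arithmetic behind this answer is straightforward. The following is a minimal sketch, assuming (as the question states) that SRP makes up 80% of total phosphorus and taking the total phosphorus values from Figure 1.13:

```python
# Estimate SRP from total phosphorus for the two years discussed above.
# Assumes SRP = 80% of total P, as stated in the question.

srp_fraction = 0.8
total_P = {"1975 (peak)": 0.36, "2000": 0.20}   # mg P per litre, from Figure 1.13

for year, tp in total_P.items():
    srp = tp * srp_fraction
    print(f"{year}: total P = {tp:.2f} mg/l, estimated SRP = {srp:.2f} mg/l")

# The resulting SRP values (about 0.29 and 0.16 mg/l) are then compared with
# the categories in Table 2.3 (not reproduced here) to judge the degree of recovery.
```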
The degree to which eutrophication is considered a problem depends on the place and people concerned. A small lake in South-East Asia, heavily fertilized by village sewage, can provide valuable protein from fish. In other parts of the world, a similar level of nutrients would be regarded as damaging, making water undrinkable and unable to support characteristic wildlife. In Europe, nitrates in drinking water are regarded as a potentially serious threat to health. Eutrophication has also damaged important fisheries and caused significant loss of biodiversity. Worldwide, efforts to reduce the causes and symptoms of eutrophication cost huge sums of money.
There is no single piece of existing legislation dealing comprehensively with the problem of eutrophication in the UK. However, one aim of the European Community's Urban Wastewater Treatment Directive (EC UWWTD) is to protect the environment from the adverse effects of sewage. This should help to reduce the problem of eutrophication in coastal waters where large discharges contribute significant nutrient loads. In the UK, 62 rivers and canals (totalling 2500 km), 13 lakes and reservoirs and five estuaries have been designated as sensitive areas (eutrophic) under this directive, and there are requirements for reducing nutrient loads from sewage treatment works in these areas (Figure 4.1).
Under the EC UWWTD, areas designated as 'eutrophic sensitive' must have phosphorus-stripping equipment installed at sewage treatment works (STWs, Figure 4.2) that serve populations of 10 000 or more. However, the majority of nature conservation sites classified as sensitive are affected by smaller, rural STWs for which such equipment is not yet required. Phosphorus stripping involves the use of chemicals such as aluminium sulfate, which react with dissolved phosphates, causing them to precipitate out of solution.
Another piece of European legislation that has some bearing on problems of nutrient enrichment is the EC Nitrates Directive. This is intended to reduce nitrate loadings to agricultural land, particularly in areas where drinking water supplies have high dissolved nitrate levels. The directive requires member states to monitor nitrate levels in water, set up 'nitrate vulnerable zones' (NVZ), and produce and promote a 'code of good agricultural practice' throughout the countryside. This should include measures to control the storage, handling and disposal of slurry, for example (Figure 4.3). However, the legislation designed to curb nutrient inputs from agricultural sources is primarily directed towards reducing nitrate levels in drinking water rather than protecting nature conservation sites. The EC Nitrates Directive defines eutrophication only in terms of nitrogen compounds, and therefore does nothing to help protect the majority of aquatic sites where many eutrophication problems are attributable to phosphorus loading.
The Declaration of the Third North Sea Conference in 1990 specified that nutrient inputs entering areas of the marine environment that are, or are likely to become, eutrophic, must be reduced to 50% of their 1985 levels by 1995. The Fifth Conference in 2002 went further, aiming to eliminate eutrophication and create a healthy marine environment by 2010. Fine words.
The UK Environment Agency has developed a eutrophication strategy that promotes a coordinated framework for action, and a partnership approach at both national and local levels. The management of eutrophication requires targets and objectives to be agreed for different water bodies. Analysis of preserved plant and animal remains in sediments can be used to estimate the levels of nutrients that occurred in the past, when the water bodies concerned were less affected by eutrophication. These reference conditions can then be used to determine which waters are most at risk, or have already been damaged by eutrophication, and to prioritize sites for restorative action. The ability to measure and monitor levels of eutrophication has therefore become increasingly important.
During the 1990s there was increased demand in the UK for effective methods of monitoring eutrophication. There was also considerable interest in the development of monitoring systems based on biotic indices. Several 'quality indices' based on a variety of organisms were explored. For monitoring tools to have practical application, they must satisfy certain requirements:
sampling must be quick and easy;
monitoring must be based on a finite number of easily identified groups; and
indices for evaluation must be straightforward to calculate.
Within-year variability in nutrient concentrations can be high, particularly for enriched waters. A high sampling frequency may therefore be required to provide representative annual mean data. In nutrient-enriched lakes, annual means are more likely to provide appropriate estimates of phosphorus than winter-spring means, due to the importance of internal cycling of nutrients in summer. This is an important consideration when designing sampling strategies for use in predictive models of trophic status.
Diatoms, a large group of algal species, have been used as indicators of eutrophication in European rivers. Individual species of diatom vary in their tolerance of nutrient enrichment: some are able to increase their growth rates as nutrients become more available, whilst others are outcompeted and disappear. As diatoms derive their nutrients directly from the water column, and have generation times measured in days rather than months or years, the species composition of the diatom community should be a good indicator for assessing eutrophication. Convincing correlations have been demonstrated between aqueous nutrient concentrations and diatom community composition, but a number of other physical and chemical factors, such as water pH, salinity and temperature, also affect diatom distribution and need to be taken into account.
The UK Environment Agency has assessed the extent of eutrophication on the basis of concentrations of key nutrients (primarily nitrogen and phosphorus) in water, and the occurrence of obvious biological responses, such as algal blooms. There is an intention to rely more heavily in future on biological assessment schemes. One such system is based on surveys of the aquatic plant populations in rivers. Known as the mean trophic rank (MTR) approach, this uses a scoring system based on species and their recorded abundances at river sites. Each species is allocated a score (its species trophic rank, STR) dependent on its tolerance to eutrophication (Table 4.1); then, for a given site, the mean score for all species present is calculated. Tolerant species have a low score, so a low MTR tends to indicate a nutrient-rich river. In Britain, rivers in the north and west tend to have the highest MTR scores, whereas rivers in the south and east of England have the lowest. These scores reflect the influence of numerous factors, such as differences in river flow, patterns of agricultural intensification and variations in population density.
|Algae||(a) Broadleaved species||(b) Grassleaved species|
|Batrachospermum spp.||6|
|Hildenbrandia rivularis||6||Apium inundatum||9||Acorus calamus||2|
|Lemanea fluviatilis||7||A. nodiflorum||4||Alisma plantago-aquatica||3|
|Vaucheria spp.||1||Berula erecta||5||A. lanceolatum||3|
|Cladophora spp.||1||Callitriche hamulata||9||Butomus umbellatus||5|
|Enteromorpha spp.||1||C. obtusangula||5||Carex acuta||5|
|Hydrodictyum reticulatum||3||Ceratophyllum demersum||2||C. acutiformis||3|
|Stigeoclonium tenue||1||Hippurus vulgaris||4||C. riparia||4|
|Littorella uniflora||8||C. rostrata||7|
|Liverworts||Lotus pedunculatus||8||C. vesicaria||6|
|Chiloscyphus polyanthos||8||Menyanthes trifoliata||9||Catabrosa aquatica||5|
|Jungermannia atrovirens||8||Montia fontana||8||Eleocharis palustris||6|
|Marsupella emarginata||10||Myriophyllum alterniflorum||8||Eleogiton fluitans||10|
|Nardia compressa||10||M. spicatum||3||Elodea canadensis||5|
|Pellia endiviifolia||6||Myriophyllum spp.*||6||E. nuttallii||3|
|P. epiphylla||7||Nuphar lutea||3||Glyceria maxima||3|
|Scapania undulata||9||Nymphaea alba||6||Groenlandia densa||3|
|Nymphoides peltata||2||Hydrocharis morsus-ranae||6|
|Mosses||Oenanthe crocata||7||Iris pseudacorus||5|
|Amblystegium fluviatilis||5||O. fluviatilis||5||Juncus bulbosus||10|
|A. riparium||1||Polygonum amphibium||4||Lemna gibba||2|
|Blindia acuta||10||Potentilla erecta||9||L. minor||4|
|Brachythecium plumosum||9||Ranunculus aquatilis||5||L. minuta/miniscula||3|
|B. rivulare||8||R. circinatus||4||L. trisulca||4|
|B. rutabulum||3||R. flammula||7||Phragmites australis||4|
|Bryum pseudotriquetrum||9||R. fluitans||7||Potamogeton alpinus||7|
|Calliergon cuspidatum||8||R. omiophyllus||8||P. berchtoldii||4|
|Cinclidotus fontinaloides||5||R. peltatus||4||P. crispus||3|
|Dichodontium flavescens||9||R. penicillatus pseudofluitans||5||P. friesii||3|
|D. pellucidum||9||R. penicillatus penicillatus||6||P. gramineus||7|
|Dicranella palustris||10||R. penicillatus vertumnus||5||P. lucens||3|
|Fontinalis antipyretica||5||R. trichophyllus||6||P. natans||5|
|F. squamosa||8||R. hederaceus||6||P. obtusifolia||5|
|Hygrohypnum luridum||9||R. sceleratus||2||P. pectinatus||1|
|H. ochraceum||9||Ranunculus spp.*||6||P. perfoliatus||4|
|Hyocomium armoricum||10||Rorippa amphibia||3||P. polygonifolius||10|
|Philonotis fontana||9||R. nasturtium-aquaticum||5||P. praelongus||6|
|Polytrichum commune||10||Rumex hydrolapathum||3||P. pusillus||4|
|Racomitrium aciculare||10||Veronica anagallis-aquatica||4||P. trichoides||2|
|Rhynchostegium riparioides||5||V. catenata||5||Sagittaria sagittifolia||3|
|Sphagnum spp.||10||V. scutellata||1||Schoenoplectus lacustris||3|
|Thamnobryum alopecurum||7||Viola palustris||9||Scirpus maritimus||3|
|Azolla filiculoides||3||Spirodela polyrhiza||2|
|Equisetum fluviatile||5||Typha latifolia||2|
|E. palustre||5||T. angustifolia||2|
In Britain, water supply companies have tended to regard eutrophication as a serious problem only when it becomes impossible to treat drinking water supplies in an economic way. Threshold concentrations at which action is taken to reduce nutrient loadings thus depend on economic factors, as well as wildlife conservation objectives.
There are two possible approaches to reducing eutrophication:
Reduce the source of nutrients (e.g. by phosphate stripping at sewage treatment works, reducing fertilizer inputs, introducing buffer strips of vegetation adjacent to water bodies to trap eroding soil particles).
Reduce the availability of nutrients currently in the system (e.g. by removing plant material, removing enriched sediments, chemical treatment of water).
Europe is the continent that has suffered most from eutrophication, and increasing efforts are being made to restore European water bodies damaged by nutrient enrichment. If the ultimate goal is to restore sites where nature conservation interest has been damaged by eutrophication, techniques are required for reducing external loadings of nutrients into ecosystems.
Although algal production requires both nitrogen and phosphorus supplies, it is usually sufficient to reduce only one major nutrient. An analogy can be drawn with motor cars, which require lubricating oil, fuel and coolant to keep them moving and are likely to stop if they run short of any one of these, even if the other two are in plentiful supply. As phosphorus is the limiting nutrient in most freshwater systems, phosphorus has been the focus of particular attention in attempts to reduce inputs. In addition, nitrogen is less easily controlled: its compounds are highly soluble and can enter waterways from many diffuse sources. It can also be 'fixed' directly from the atmosphere. Phosphorus, on the other hand, is readily precipitated, usually enters water bodies from relatively few point sources (e.g. large livestock units or waste-water treatment works) and has no atmospheric reserve. However, efforts to reduce phosphorus loadings in some lakes have failed due to ongoing release of phosphorus from sediments. In situations where phosphorus has accumulated naturally (e.g. in areas with phosphate-rich rocks) and nitrogen increases have driven eutrophication, it may be necessary to control nitrogen instead.
In some circumstances it may be possible to divert sewage effluent away from a water body in order to reduce nutrient loads. This was achieved at Lake Washington, near Seattle, USA, which is close to the sea. Lake Washington is surrounded by Seattle and its suburbs, and in 1955 a cyanobacterium, Oscillatoria rubescens, became dominant in the lake. The lake was receiving sewage effluent from about 70 000 people; this input represented about 56% of the total phosphorus load to the lake. The sewerage system was redesigned to divert effluent away from the lake, for discharge instead into the nearby sea inlet of Puget Sound. The transparency of the water in the lake, as measured by the depth at which a white disc could be seen, quickly increased from about 1 m to 3 m, and chlorophyll concentrations decreased markedly as the cyanobacterial population declined.
Diversion of effluent should be considered only if the effluent to be diverted does not constitute a major part of the water supply for the water body. Otherwise, residence times of water and nutrients will be increased and the benefits of diversion may be counteracted.
It has been estimated that up to 45% of total phosphorus loadings to freshwater in the UK comes from sewage treatment works. This input can be reduced significantly (by 90% or more) by carrying out phosphate stripping. The effluent is run into a tank and dosed with a product known as a precipitant, which combines with phosphate in solution to create a solid, which then settles out and can be removed. It is possible to use aluminium salts as a precipitant, but the resulting sludge contains toxic aluminium compounds that preclude its secondary use as an agricultural fertilizer. There are no such problems with iron salts, so Fe(II) ammonium sulfate is frequently chosen as a precipitant. The chemicals required as precipitants constitute the major cost, rather than installations or infrastructure, and the process is very effective: up to 95% of the phosphate can be removed easily, and it is possible to remove more. Despite its effectiveness, however, phosphate stripping is not yet used universally in sewage treatment.
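Combining the two figures in this paragraph gives a rough sense of what universal phosphate stripping could achieve. This is only a back-of-envelope sketch, since the 45% share and the 90% removal efficiency are both approximate:

```python
# Back-of-envelope estimate of the effect of universal phosphate stripping
# on total phosphorus loadings to UK freshwater, using the figures quoted above.

stw_share = 0.45             # up to 45% of total P loadings come from sewage treatment works
stripping_efficiency = 0.90  # stripping removes 90% or more of this input

overall_reduction = stw_share * stripping_efficiency
print(f"Universal stripping could cut total P loadings by roughly {overall_reduction:.1%}")
# Diffuse sources (mainly agriculture) would be unaffected.
```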
The interface between aquatic ecosystems and the land is an ecotone that has a profound influence on the movement of water and water-borne contaminants. Vegetation adjacent to streams and water bodies can help to safeguard water quality, particularly in agricultural landscapes. Buffer strips are used to reduce the amounts of nutrients reaching water bodies from runoff or leaching. They usually take the form of vegetated strips of land alongside water bodies: grassland, woodland and wetlands have been shown to be effective in different situations. The vegetation often performs a dual role, reducing nutrient inputs to the aquatic habitat while also providing wildlife habitat. A riparian buffer zone of between 20 and 30 m width can remove up to 100% of incoming nitrate. The plants take up nitrogen directly, provide a source of carbon for denitrifying bacteria and create oxidized rhizospheres in which ammonium can be nitrified; the resulting nitrate is then denitrified in adjacent anoxic zones. Uptake of nitrogen by vegetation is often seasonal and is usually greater in forested areas with sub-surface water flow than in grassland with predominantly surface flow. The balance between surface flow and sub-surface flow, and the redox conditions that result, are critical in determining rates of nitrate removal in buffer strips (Figure 4.4).
The dynamics of nitrogen and phosphorus retention by soil and vegetation can alter during succession. In newly constructed wetlands, nitrogen retention commences as soon as emergent vegetation becomes established and soil organic matter starts to accumulate, usually within the first 1-3 years. Accumulation of organic carbon in the soil sets the stage for denitrification. After approximately 5-10 years, denitrification removes approximately the same amount of nitrogen as accumulates in organic matter (about 5-10 g m−2 yr−1 under conditions of low nitrogen loading). Under higher nitrogen loading, the amount of nitrogen stored in accumulating organic matter may double, and nitrogen removal by denitrification may increase by an order of magnitude or more. Accumulation of organic nitrogen and denitrification can therefore provide reliable long-term removal of nitrogen regardless of nitrogen loading.
Phosphorus removal, on the other hand, tends to be greater during the first 1-3 years of succession, when sediment deposition, sorption (absorption and adsorption) and precipitation of phosphorus are greatest. During the early stages of succession, wetlands may retain around 3 g P m−2 yr−1 under low phosphorus loadings, and as much as 30 g P m−2 yr−1 under high loadings. However, as sedimentation decreases and sorption sites become saturated, further phosphorus retention relies upon either its accumulation as organic phosphate in plants and their litter, or its precipitation with incoming dissolved and particulate cations such as iron, aluminium and calcium.
Nevertheless, in general, retention of phosphorus tends to be largely regulated by geochemical processes (sorption and precipitation) which operate independently of succession, whereas retention of nitrogen is more likely to be controlled by biological processes (e.g. organic matter accumulation, denitrification) that change in relative significance as succession proceeds.
Surface retention of sediment by vegetated buffer strips is a function of slope length and gradient, vegetation density and flow rates. Construction of effective buffer strips therefore requires detailed knowledge of an area's hydrology and ecology. Overall, restoration of riparian zones in order to improve water quality may have greater economic benefits than allocation of the same land to cultivation of crops.
Wetlands can be used in a similar way to buffer strips as a pollution control mechanism. They often present a relatively cost-effective and practical option for treatment, particularly in environmentally sensitive areas where large waste-water treatment plants are not acceptable. For example, Lake Manzala in Egypt has been suffering from severe pollution problems for several years. This lake is located on the northeastern edge of the Nile Delta, between Damietta and Port Said. Land reclamation projects have reduced the size of the lake from an estimated 1698 km2 to 770 km2. The lake is shallow, with an average depth of around 1.3 m.
Five major surface water drains discharge polluted waters into the lake. These waters contain municipal, industrial and agricultural pollutants, which are causing water quality to deteriorate and fish stocks to decline. Recently, efforts have been made to improve water quality in the most polluted of the five drains. This carries waste water from numerous sources, including sewage effluent from Cairo, waste water from industries, agricultural discharges from farms, and discharges and spills from boat traffic. Several methods for drain water treatment have been proposed, including conventional waste-water treatment plants and other chemical and mechanical methods for aerating the drain water. There are also proposals for construction of a wetland to treat approximately 25 000 m3 per day of drain water and discharge the treated effluent back to the drain.
The treatment process involves passing the drain water through basins and ponds, designed to have specific retention times. The pumped water first passes through sedimentation basins to allow suspended solids to settle out (primary treatment), followed by a number of wetland ponds (secondary treatment). The ponds are cultivated with different types of aquatic plants, such as emergent macrophytes (e.g. Phragmites) with well-developed aerenchyma systems to oxygenate the rhizosphere, allowing the oxidation of ammonium ions to nitrate. Subsequent denitrification removes the nitrogen to the atmosphere.
The waste-water treatment mechanism depends on a wide diversity of highly productive organisms, which produce the biological activity required for treatment. These include decomposers (bacteria and fungi), which break down particulate and dissolved organic material into carbon dioxide and water, and aquatic plants. Some of the latter are able to convey atmospheric oxygen to submerged roots and stems, and some of this oxygen is available to microbial decomposers. Aquatic plants also sequester nitrogen and phosphorus. Species such as common reed (Phragmites australis, Figure 4.5) yield a large quantity of biomass, which has a range of commercial uses in the region. Another highly productive species is the water hyacinth (Eichhornia crassipes, Figure 4.6). This species is regarded as a serious weed on the lake and is regularly harvested to reduce eutrophication. However, it has a potential role in water treatment due to its high productivity and rapid rates of growth. The resultant biomass could possibly be harvested and used for the production of nutrient-rich animal feed, or for composting and the production of fertilizer. Further research is required to develop practical options.
The passage of water through emergent plants reduces turbidity because the large surface area of stems and leaves acts as a filter for particulate matter. Transmission of light through the water column is improved, enhancing photosynthesis in attached algae. These contribute further to nutrient reduction in through-flowing water. The mixture of floating plants and emergent macrophytes contributes to removal of suspended solids, improved light penetration, increased photosynthesis and the removal of toxic chemicals and heavy metals.
Estimates for the removal of total suspended solids (TSS), biological oxygen demand (BOD), total phosphorus and total nitrogen by the different wetland components are provided in Table 4.2. These suggest that wetlands, combined with sedimentation and ancillary water treatment systems, could play an important part in reducing nutrient loadings.
|Parameter||Sedimentation pond||Wetland treatment system|
|influent conc./mg l−1||effluent conc./mg l−1||removal efficiency/%||influent conc./mg l−1||effluent conc./mg l−1||removal efficiency/%|
An important aspect of efforts to reduce nutrient inputs to water bodies is the modification of domestic behaviour. Public campaigns in Australia have encouraged people to:
wash vehicles on porous surfaces away from drains or gutters
reduce use of fertilizers on lawns and gardens
compost garden and food waste
use zero- or low-phosphorus detergents
wash only full loads in washing machines
collect and bury pet faeces.
These campaigns have combined local lobbying with national strategies to tackle pollution from other sources.
Once nutrients are in an ecosystem, it is usually much harder and more expensive to remove them than to tackle the eutrophication at source. The main methods available are:
precipitation (e.g. treatment with a solution of aluminium or ferrous salt to precipitate phosphates);
removal of nutrient-enriched sediments, for example by mud pumping; and
removal of biomass (e.g. harvesting of common reed) and using it for thatching or fuel.
In severe cases of eutrophication, efforts have been made to remove nutrient-enriched sediments from lakes. Lake Trummen in Sweden accumulated thick black sulfurous mud after years of receiving sewage effluent. Even when external loadings of phosphorus were reduced to 3 kg P yr−1, there was still an internal load (i.e. that derived from the lake's own sediment) of 177 kg P yr−1! Drastic action was needed. Eventually nutrient-rich sediment was sucked from the lake and used as fertilizer. The water that was extracted with the sediment was treated with aluminium salts and run back into the lake. This action reduced phosphorus concentrations and improved the clarity and oxygenation of the water. However, removal or sealing of sediments is an expensive measure, and is only a sensible option in severely polluted systems, such as the Norfolk Broads, England.
Removal of fish can allow species of primary consumers, such as the water-flea, Daphnia, to recover and control algae. Once water quality has improved, fish can be re-introduced.
Mechanical removal of plants from aquatic systems is a common method for mitigating the effects of eutrophication (Figure 4.7). Efforts may be focused on removal of existing aquatic 'weeds' such as water hyacinth that tend to colonize eutrophic water. Each tonne of wet biomass harvested removes approximately 3 kg N and 0.2 kg P from the system.
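A simple sketch of the nutrient budget implied by these figures follows; the 500-tonne harvest used in the example is hypothetical.

```python
# Nutrients removed by harvesting wet biomass, using the rates quoted above:
# roughly 3 kg N and 0.2 kg P per tonne of wet biomass.

N_PER_TONNE = 3.0   # kg nitrogen removed per tonne of wet biomass
P_PER_TONNE = 0.2   # kg phosphorus removed per tonne of wet biomass

def nutrients_removed(wet_biomass_tonnes):
    """Return (kg N, kg P) removed by a given wet-biomass harvest."""
    return wet_biomass_tonnes * N_PER_TONNE, wet_biomass_tonnes * P_PER_TONNE

n, p = nutrients_removed(500)   # hypothetical 500-tonne harvest
print(f"A 500 t harvest removes about {n:.0f} kg N and {p:.0f} kg P")
```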
Alternatively, plants may be introduced deliberately to 'mop up' excess nutrients. Although water hyacinth can be used in water treatment, the water that results from treatment solely with floating macrophytes tends to have low dissolved oxygen. Addition of submerged macrophytes, together with floating or emergent macrophytes, usually gives better results. Submerged plants are not always as efficient as floating ones at assimilating nitrogen and phosphorus, owing to their slower growth, which results from poor light transmission through water (particularly if it is turbid) and slow rates of CO2 diffusion down through the water column. However, many submerged macrophytes have a high capacity to elevate pH and dissolved oxygen, and this improves conditions for other mechanisms of nutrient removal. At higher pH, for example, soluble phosphates can precipitate with calcium, forming insoluble calcium phosphates and so removing soluble phosphates from the water. Various species have been used in this way. One submerged macrophyte, Elodea densa, has been shown to remove nitrogen and phosphorus from nutrient-enriched water, its efficiency varying with loading rate: nitrogen removal rates reached 400 mg N m−2 per day during summer, while phosphorus removal exceeded 200 mg P m−2 per day.
In terrestrial habitats, removal of standing biomass is an important tool in nature conservation. Reduction in the nutrient status of soils is often a prerequisite for re-establishment of semi-natural vegetation, and the removal of harvested vegetation helps to reduce the levels of nutrients returned to the soil (Figure 4.8). However, if the aim is to lower the nutrient status of a nutrient-enriched soil, this can be a very long-term process (Figure 4.9).
A short reach of the River Great Ouse in Bedfordshire was found to contain the following species:
|Common name||Scientific name|
|filamentous alga||Cladophora spp.|
|fool's water-cress||Apium nodiflorum|
|yellow water-lily||Nuphar lutea|
|great water dock||Rumex hydrolapathum|
|water speedwell||Veronica anagallis-aquatica|
|sweet flag||Acorus calamus|
|water plantain||Alisma plantago-aquatica|
|lesser pond sedge||Carex acutiformis|
|reed sweet-grass||Glyceria maxima|
|yellow flag iris||Iris pseudacorus|
|greater duckweed||Lemna gibba|
|broadleaved pondweed||Potamogeton natans|
A similar length of the River Eden in Cumbria was found to have the following species:
| Common name | Scientific name |
| --- | --- |
| river moss | Fontinalis antipyretica |
| water horsetail | Equisetum fluviatile |
| white water-lily | Nymphaea alba |
| lesser spearwort | Ranunculus flammula |
| pink water-speedwell | Veronica catenata |
| bottle sedge | Carex rostrata |
| common spike-rush | Eleocharis palustris |
| broadleaved pondweed | Potamogeton natans |
Using the trophic rank scores in Table 4.1, calculate the mean trophic rank (MTR) for each stretch of river, and comment on whether the watercourse is nutrient-enriched (eutrophic). Assume all the species recorded are of similar abundance and therefore there is no need to weight scores according to relative abundance, as you would do in a real situation.
The River Great Ouse has an MTR of 3, suggesting it is enriched with nutrients and therefore eutrophic, but it is on the mildest edge of this category so the eutrophication is not severe.
The River Eden has an MTR of 6, indicating that its plant community is composed of species that are moderately sensitive to enrichment, so it can be assumed that this stretch has not undergone substantial eutrophication. The most sensitive species are absent, suggesting that the waters may naturally carry a moderate concentration of nutrients or that some very mild enrichment has occurred.
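For readers who want to check such results themselves, a minimal sketch of the unweighted MTR calculation is given below. The trophic rank scores come from Table 4.1, which is not reproduced here, so the numbers used are illustrative placeholders chosen only so that the mean comes out near the value quoted above.

```python
# Minimal sketch of an unweighted mean trophic rank (MTR) calculation.
# A real survey would weight each score by the relative abundance of the species;
# here all species are assumed equally abundant, as in the exercise above.

def mean_trophic_rank(scores):
    """Return the unweighted mean of a list of per-species trophic rank scores."""
    return sum(scores) / len(scores)

# Placeholder scores for the 12 species recorded on the River Great Ouse
# (NOT the actual Table 4.1 values).
great_ouse_scores = [1, 2, 3, 2, 4, 3, 3, 4, 3, 4, 2, 5]
print(mean_trophic_rank(great_ouse_scores))  # 3.0, i.e. nutrient-enriched
```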
List the advantages of preventing eutrophication at source, compared with treating its effects.
Prevention of eutrophication at source compared with treating its effects (or reversing the process) has the following advantages.
Technical feasibility. In some situations prevention at source may be simply engineered by diverting a polluted watercourse away from the sensitive ecosystem, while removal of nutrients from a system by techniques such as mud-pumping is more of a technical challenge.
Cost. Nutrient stripping at source using a precipitant is relatively cheap and simple to implement. Biomass stripping of affected water is labour-intensive and therefore expensive.
Habitat availability. Buffer strips and wetlands can provide a stable wildlife habitat whilst performing a nutrient trapping role on throughflow water. Habitats, whether aquatic or terrestrial, can be compromised in terms of their wildlife value, due to the degree of disturbance involved in biomass stripping.
Products. Constructed wetlands may be managed to provide economic products such as fuel, compost or thatching material more easily than trying to use the biomass stripped from a less managed system.
Eutrophication is a process in which an ecosystem accumulates mineral nutrients. It can occur naturally, but is usually associated with human activity that releases nutrients into the environment.
Anthropogenic eutrophication has caused a widespread loss of biodiversity in many systems. Recent attempts to reverse the process are proving difficult and expensive.
Symptoms of eutrophication are most readily seen in aquatic systems, where the additional nutrients lead to the explosive growth of algal or bacterial populations. The large biomass produced excludes light from the water and can result in the deoxygenation of the water, killing fish and other animals.
In terrestrial systems, additional nutrients boost the productivity of competitive plant species. These then exclude less competitive species by shading them, leading to a decrease in species richness. The humped-back curve describes the relationship between biomass and species richness.
Estuaries are particularly prone to eutrophication, and like other aquatic environments can suffer from algal blooms that eliminate other species. Loss of key species, such as seagrass, results in an entire habitat type and all its dependent species disappearing.
The main agents of eutrophication are compounds containing the elements phosphorus and nitrogen. It is these elements that, under natural conditions, usually limit the primary production in ecosystems. Increasing their supply therefore increases productivity.
Sources of anthropogenic phosphorus entering the environment include sewage discharges, intensive livestock farms and the spreading of artificial fertilizers and animal manures onto agricultural land. The majority of phosphorus comes from point sources.
Sources of anthropogenic nitrogen entering the environment include gaseous emissions from car exhausts and power stations and artificial fertilizers applied to agricultural land. The majority of nitrogen comes from diffuse sources.
Recent European legislation has tried to limit further eutrophication of the environment by measures such as the stripping of phosphorus from waste water and the control of nitrogen fertilizer applications in sensitive zones.
Living organisms can be used as monitors of the trophic status of ecosystems.
Removal of nutrients from an ecosystem in order to reverse the effects of eutrophication is a difficult and expensive undertaking.
Course image: Nic Redhead on Flickr, made available under Creative Commons Attribution-NonCommercial-ShareAlike 2.0 Licence.
The content acknowledged below is Proprietary (see terms and conditions). This content is made available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 Licence
Grateful acknowledgement is made to the following sources for permission to reproduce material in this unit:
Figures 1.1, 1.2, 1.3, 1.10, 2.4, 2.8, 2.10, 2.11, 2.17, 3.2, 3.3, 4.2, 4.3 & 4.7 Courtesy of the Environment Agency;
Figures 1.4, 1.8a, b, 2.3, 2.12, 2.14, 2.15, 2.16, 3.1, 4.8 Copyright © Mike Dodd/The Open University;
Figure 1.7 Andy Harmer/Science Photo Library;
Figure 1.8c Courtesy of Dr John Madsen/Minnesota State University;
Figure 1.12 Copyright Dr Julian Thompson, Wetland Research Unit, Department of Geography, University College, London;
Figure 2.1 Copyright © Sukurta Viliaus & Co.;
Figures 2.2, 2.5a & 3.4 Copyright © Owen Mountford/Centre for Ecology and Hydrology;
Figure 2.5b Copyright © Peter Gathercole/Oxford Scientific Films;
Figure 2.6 Copyright © Alistair MacEwen/Oxford Scientific Films;
Figure 2.7 Copyright © BBC Worldwide;
Figure 2.18 With permission from Professor Chris Kjeldsen, Biology Department, Sonoma State University;
Figure 2.24 Graham Day;
Figure 4.4 Copyright © Peter Leeds-Harrison/Cranfield University;
Figure 4.5 Copyright © ARM Limited. Reproduced with permission of ARM Limited, Rydal House, Colton Road, Rugeley, Staffordshire, WS15 3HF, UK, armreedbeds.co.uk, designers of reed beds and wetlands for wastewater treatment;
Figure 4.6 Pisces Conservation Limited.
During the 19th century, the Dutch possessions and hegemony were expanded, reaching their greatest territorial extent in the early 20th century. The Dutch East Indies was one of the most valuable colonies under European rule, and contributed to Dutch global prominence in the spice and cash crop trade of the 19th to early 20th century.
The colonial social order was based on rigid racial and social structures, with a Dutch elite living separate from but linked to their native subjects.
The term Indonesia came into use for the geographical location after 1880. In the early 20th century, local intellectuals began developing the concept of Indonesia as a nation state, and set the stage for an independence movement.
Scholars writing in English use the terms Indië, the Dutch East Indies, the Netherlands Indies, and colonial Indonesia interchangeably.
Expansion of the Dutch East Indies in the Indonesian Archipelago
Centuries before Europeans arrived, the Indonesian archipelago supported various states, including commercially oriented coastal trading states and inland agrarian states (the most important of which included Srivijaya).
The islands were known to the Europeans and were sporadically visited by expeditions such as that of Marco Polo in 1292 and his fellow Italian Odoric of Pordenone in 1321. The first Europeans to establish themselves in Indonesia were the Portuguese, in 1512. Following disruption of Dutch access to spices, the first Dutch expedition set sail for the East Indies in 1595 to access spices directly from Asia. When it made a 400% profit on its return, other Dutch expeditions soon followed. Recognising the potential of the East Indies trade, the Dutch government amalgamated the competing companies into the United East India Company (Vereenigde Oost-Indische Compagnie, or VOC).
The VOC was granted a charter to wage war, build fortresses, and make treaties across Asia. A capital was established in Batavia (now Jakarta), which became the center of the VOC's Asian trading network.
To their original monopolies on nutmeg and other spices, the company and later colonial administrations added non-indigenous cash crops, and safeguarded their commercial interests by taking over surrounding territory.
Smuggling, the ongoing expense of war, corruption, and mismanagement led to bankruptcy by the end of the 18th century. The company was formally dissolved in 1800 and its colonial possessions in the Indonesian archipelago (including much of Java, parts of Sumatra, much of Maluku, and the hinterlands of ports such as Makasar and Kupang) were nationalized under the Dutch Republic as the Dutch East Indies.
From the arrival of the first Dutch ships in the late 16th century, to the declaration of independence in 1945, Dutch control over the Indonesian archipelago was always tenuous.
Although Java was dominated by the Dutch, many areas remained independent throughout much of this time, including Aceh.
There were numerous wars and disturbances across the archipelago as various indigenous groups resisted efforts to establish a Dutch hegemony, which weakened Dutch control and tied up its military forces.
Piracy remained a problem until the mid-19th century.
Finally in the early 20th century, imperial dominance was extended across what was to become the territory of modern-day Indonesia.
Since the establishment of the VOC in the 17th century, the expansion of Dutch territory had been a business matter. Graaf van den Bosch's governor-generalship (1830–1835) confirmed profitability as the foundation of official policy, restricting its attention to Java, Sumatra and Bangka.
However, from about 1840, Dutch national expansionism saw them wage a series of wars to enlarge and consolidate their possessions in the outer islands.
Motivations included: the protection of areas already held; the intervention of Dutch officials ambitious for glory or promotion; and the establishment of Dutch claims throughout the archipelago to prevent intervention from other Western powers during the European push for colonial possessions.
As exploitation of Indonesian resources expanded off Java, most of the outer islands came under direct Dutch government control or influence.
The Dutch subjugated the Minangkabau of Sumatra in the Padri War, and the Java War (1825–30) ended significant Javanese resistance. The Banjarmasin War (1859–1863) in southeast Kalimantan resulted in the defeat of the Sultan. After failed expeditions to conquer Bali in 1846, an 1849 intervention brought northern Bali under Dutch control. The most prolonged military expedition was the Aceh War, in which a Dutch invasion in 1873 was met with indigenous guerrilla resistance and ended with an Acehnese surrender in 1912. Disturbances continued to break out on both Java and Sumatra during the remainder of the 19th century. However, the island of Lombok came under Dutch control, and resistance in northern Sumatra was quashed in 1895.
Towards the end of the 19th century, the balance of military power shifted towards the industrialising Dutch and against pre-industrial independent indigenous Indonesian polities as the technology gap widened. Military leaders and Dutch politicians believed they had a moral duty to free the native Indonesian peoples from indigenous rulers who were considered oppressive, backward, or disrespectful of international law.
Although Indonesian rebellions broke out, direct colonial rule was extended throughout the rest of the archipelago from 1901 to 1910 and control taken from the remaining independent local rulers. Further territory was occupied in 1905–06, and the island of Bali was subjugated with military conquests in 1906, as were the remaining independent kingdoms in Maluku, Sumatra, Kalimantan, and Nusa Tenggara.
Other rulers, including the Sultans of Tidore in Maluku, Pontianak (Kalimantan), and Palembang in Sumatra, requested Dutch protection from independent neighbours, thereby avoiding Dutch military conquest, and were able to negotiate better conditions under colonial rule.
The Bird's Head Peninsula (Western New Guinea) was brought under Dutch administration in 1920. This final territorial range would form the territory of the Republic of Indonesia.
World War II and independence
The Netherlands capitulated to Germany on 14 May 1940, and the royal family fled into exile in Britain. Germany and Japan were Axis allies. On 27 September 1940, Germany, Italy and Japan signed a treaty outlining "spheres of influence"; the Dutch East Indies fell into Japan's sphere.
The Netherlands, Britain and the United States tried to defend the colony from the Japanese forces as they moved south in late 1941 in search of Dutch oil. On 10 January 1942, during the Dutch East Indies Campaign, Japanese forces invaded the Dutch East Indies as part of the Pacific War. The rubber plantations and oil fields of the Dutch East Indies were considered crucial for the Japanese war effort. Allied forces were quickly overwhelmed by the Japanese, and on 8 March 1942 the Royal Netherlands East Indies Army (KNIL) surrendered in Java.
Fuelled by Japanese 'Light of Asia' propaganda and the Indonesian National Awakening, a vast majority of the indigenous Dutch East Indies population at first welcomed the Japanese as liberators from the colonial Dutch empire, but this sentiment quickly changed as the occupation turned out to be far more oppressive and ruinous than the Dutch colonial government. The Japanese occupation during World War II brought about the fall of the colonial state in Indonesia, as the Japanese removed as much of the Dutch government structure as they could, replacing it with their own regime.
Although the top positions were held by the Japanese, the internment of all Dutch citizens meant that Indonesians filled many leadership and administrative positions. In contrast to Dutch repression of Indonesian nationalism, the Japanese allowed indigenous leaders to forge links amongst the masses, and they trained and armed the younger generations.
According to a UN report, four million people died in Indonesia as a result of the Japanese occupation.
Following the Japanese surrender in August 1945, nationalist leaders Sukarno and Mohammad Hatta declared Indonesian independence. A four-and-a-half-year struggle followed as the Dutch tried to re-establish their colony; although Dutch forces re-occupied most of Indonesia's territory, a guerrilla struggle ensued, and the majority of Indonesians, and ultimately international opinion, favoured Indonesian independence. The Netherlands committed war crimes: summary and arbitrary killings of Indonesian villagers and farmers, torture of Indonesian prisoners and execution of prisoners. Ad van Liempt documented the mass murder of 364 Indonesians by Dutch soldiers in the village of Galoeng Galoeng, and Alfred Edelstein and Karin van Coevorden later documented the execution of hundreds of men in the village of Rawagede.
In December 1949, the Netherlands formally recognized Indonesian sovereignty, with the exception of Netherlands New Guinea (Western New Guinea). Sukarno's government campaigned for Indonesian control of the territory, and with pressure from the United States, the Netherlands agreed to the New York Agreement, which ceded the territory to Indonesian administration in May 1963.
The Governor-General's palace in Batavia (1880-1900)
Since the VOC era, the highest Dutch authority in the colony resided with the 'Office of the Governor-General'. During the Dutch East Indies era the Governor-General functioned as chief executive president of the colonial government and served as commander-in-chief of the colonial (KNIL) army. Until 1903 all government officials and organisations were formal agents of the Governor-General and were entirely dependent on the central administration of the 'Office of the Governor-General' for their budgets. Until 1815 the Governor-General had the absolute right to ban, censor or restrict any publication in the colony. The so-called exorbitant powers of the Governor-General allowed him to exile anyone regarded as subversive and dangerous to peace and order, without involving any court of law.
Until 1848 the Governor-General was directly appointed by the Dutch monarch, and in later years via the Crown on the advice of the Dutch metropolitan cabinet. During two periods (1815–1835 and 1854–1925) the Governor-General ruled jointly with an advisory board called the Raad van Indie (Indies Council). Colonial policy and strategy were the responsibility of the Ministry of Colonies based in The Hague. From 1815 to 1848 the Ministry was under the direct authority of the Dutch King. In the 20th century the colony gradually developed as a state distinct from the Dutch metropole, with the treasury separated in 1903, public loans contracted by the colony from 1913, and quasi-diplomatic ties established with Arabia to manage the Hajj pilgrimage from the Dutch East Indies. In 1922 the colony came onto an equal footing with the Netherlands in the Dutch constitution, while remaining under the Ministry of Colonies.
House of the Resident (colonial administrator) in Surabaya
The Governor-General led a hierarchy of Dutch officials; the Residents, the Assistant Residents, and District Officers called Controllers. Traditional rulers who survived displacement by the Dutch conquests were installed as regents and indigenous aristocracy became an indigenous civil service. While they lost real control, their wealth and splendour under the Dutch grew.
This indirect rule did not disturb the peasantry and was cost-effective for the Dutch; in 1900, only 250 European and 1,500 indigenous civil servants, and 16,000 Dutch officers and men and 26,000 hired native troops, were required to rule 35 million colonial subjects.
From 1910, the Dutch created the most centralised state power in Southeast Asia. Politically, the highly centralised power structure established by the Dutch administration, including the exorbitant powers of exile and censorship, was carried over into the new Indonesian republic.
A People's Council called the Volksraad for the Dutch East Indies commenced in 1918. The Volksraad was limited to an advisory role and only a small portion of the indigenous population were able to vote for its members. The Council comprised 30 indigenous members, 25 European and 5 from Chinese and other populations, and was reconstituted every four years. In 1925 the Volksraad was made a semilegislative body; although decisions were still made by the Dutch government, the governor-general was expected to consult the Volksraad on major issues. The Volksraad was dissolved in 1942 during the Japanese occupation.
The Supreme Court Building, Batavia
The Dutch government adapted the Dutch codes of law in its colony. The highest court of law, the Supreme Court in Batavia, dealt with appeals and monitored judges and courts throughout the colony. Six Councils of Justice (Raad van Justitie) dealt mostly with crime committed by people in the European legal class and only indirectly with the indigenous population. The Land Councils (Landraden) dealt with civil matters and less serious offences, such as estate matters, divorces and matrimonial disputes. The indigenous population was subject to their respective adat law and to indigenous regents and district courts, unless cases were escalated before Dutch judges.
Following Indonesian independence, the Dutch legal system was adopted and gradually a national legal system based on Indonesian precepts of law and justice was established.
By 1920 the Dutch had established 350 prisons throughout the colony. The Meester Cornelis prison in Batavia incarcerated the most unruly inmates. In Sawah Loento prison on Sumatra, prisoners had to perform manual labour in the coal mines. Separate prisons were built for juveniles (West Java) and for women. In the female Boeloe prison in Semarang, inmates had the opportunity to learn a profession during their detention, such as sewing, weaving and making batik. This training was held in high esteem and helped re-socialise women once they were outside the correctional facility.
In response to the communist uprising of 1926, the prison camp Boven-Digoel was established in New Guinea. As of 1927, political prisoners, including indigenous Indonesians espousing Indonesian independence, were 'exiled' to the outer islands.
The Dutch East Indies was divided into three Gouvernementen (Groot Oost, Borneo and Sumatra) and three provincies in Java. Provincies and Gouvernementen were both divided into Residencies, but while the Residencies under the provincies were divided again into regentschappen, Residencies under Gouvernementen were first divided into Afdeelingen before being subdivided into regentschappen.
The KNIL was not part of the Royal Netherlands Army, but a separate military arm commanded by the Governor-General and funded by the colonial budget. The KNIL was not allowed to recruit Dutch conscripts and had the nature of a 'Foreign Legion', recruiting not only Dutch volunteers but many other European nationalities (especially German, Belgian and Swiss mercenaries). While most officers were Europeans, the majority of soldiers were indigenous Indonesians, the largest contingent of which were Javanese.
Dutch policy before the 1870s was to take full charge of strategic points and work out treaties with the local leaders elsewhere so they would remain in control and co-operate. The policy failed in Aceh, in northern Sumatra, where the sultan tolerated pirates who raided commerce in the Strait of Malacca. Britain was a protector of Aceh and granted the Dutch request to conduct their anti-piracy campaign. The campaign quickly drove out the sultan, but across Aceh numerous local Muslim leaders mobilised and fought the Dutch in four decades of very expensive guerrilla war, with high levels of atrocities on both sides.
Colonial military authorities tried to forestall a war against the population by means of a ‘strategy of awe’. When a guerrilla war did take place the Dutch used either a slow, violent occupation or a campaign of destruction.
Decorated indigenous KNIL soldiers, 1927
By 1900 the archipelago was considered "pacified" and the KNIL was mainly involved with military police tasks. The nature of the KNIL changed in 1917 when the colonial government introduced obligatory military service for all male conscripts in the European legal class, and in 1922 a supplemental legal enactment introduced the creation of a 'Home Guard' for European conscripts older than 32. Petitions by Indonesian nationalists to establish military service for indigenous people were rejected. In July 1941 the Volksraad passed a law creating a native militia of 18,000 by a majority of 43 to 4, with only the moderate Great Indonesia Party objecting. After the declaration of war with Japan, over 100,000 natives volunteered.
The KNIL hastily and inadequately attempted to transform them into a modern military force able to protect the Dutch East Indies from Imperial Japanese invasion. On the eve of the Japanese invasion in December 1941, Dutch regular troops in the East Indies comprised about 1,000 officers and 34,000 men, of whom 28,000 were indigenous. During the Dutch East Indies campaign of 1941–42 the KNIL and the Allied forces were quickly defeated. All European soldiers, which in practice included all able-bodied Indo-European males, were interned by the Japanese as POWs; 25% of the POWs did not survive their internment.
Following World War II, a reconstituted KNIL joined with Dutch Army troops to re-establish colonial "law and order". Despite two successful military campaigns in 1947 and 1948, Dutch efforts to re-establish their colony failed and the Netherlands recognised Indonesian sovereignty in December 1949. The KNIL was disbanded by 26 July 1950, with its indigenous personnel being given the option of demobilising or joining the Indonesian military. At the time of disbandment the KNIL numbered 65,000, of whom 26,000 were incorporated into the new Indonesian Army. The remainder were either demobilised or transferred to the Netherlands Army.
Key officers in the Indonesian National Armed Forces who were former KNIL soldiers include Suharto, second president of Indonesia; A.H. Nasution, commander of the Siliwangi Division and Chief of Staff of the Indonesian army; and A.E. Kawilarang, founder of the elite special forces Kopassus.
In 1898, the population of Java numbered 28 million with another 7 million on Indonesia's outer islands.
The first half of the 20th century saw large-scale immigration of Dutch and other Europeans to the colony, where they worked in either the government or private sectors. By 1930, there were more than 240,000 people with European legal status in the colony, making up less than 0.5% of the total population. Almost 75% of these Europeans were in fact native Eurasians known as Indo-Europeans.
1930 census of the Dutch East Indies
The Dutch colonialists formed a privileged upper social class of soldiers, administrators, managers, teachers and pioneers. They lived together with the "natives", but at the top of a rigid social and racial caste system. The Dutch East Indies had two legal classes of citizens: European and indigenous. A third class, Foreign Easterners, was added in 1920.
In 1901 the Dutch adopted what they called the Ethical Policy, under which the colonial government had a duty to further the welfare of the Indonesian people in health and education. Other new measures under the policy included irrigation programs, transmigration, communications, flood mitigation, industrialisation, and protection of native industry. Industrialisation did not significantly affect the majority of Indonesians, and Indonesia remained an agricultural colony; by 1930, there were 17 cities with populations over 50,000 and their combined populations numbered 1.87 million of the colony's 60 million.
Students of the School Tot Opleiding Van Indische Artsen (STOVIA) aka Sekolah Doctor Jawa
The Dutch school system was extended to Indonesians, with the most prestigious schools admitting Dutch children and those of the Indonesian upper class. A second tier of schooling was based on ethnicity, with separate schools for Indonesians, Arabs, and Chinese being taught in Dutch and with a Dutch curriculum. Ordinary Indonesians were educated in Malay in the Roman alphabet, with "link" schools preparing bright Indonesian students for entry into the Dutch-language schools.
Vocational schools and programs were set up by the Indies government to train indigenous Indonesians for specific roles in the colonial economy. Chinese and Arabs, officially termed "foreign orientals", could not enrol in either the vocational schools or primary schools.
Graduates of Dutch schools opened their own schools modelled on the Dutch school system, as did Christian missionaries, Theosophical Societies, and Indonesian cultural associations. This proliferation of schools was further boosted by new Muslim schools in the Western mould that also offered secular subjects.
According to the 1930 census, 6% of Indonesians were literate; however, this figure recognised only graduates from Western schools and those who could read and write in a language in the Roman alphabet. It did not include graduates of non-Western schools, those who could read but not write Arabic, Malay or Dutch, or those who could write in non-Roman scripts such as Batak, Chinese, or Arabic.
Dutch, Eurasian and Javanese professors of law at the opening of the Rechts Hogeschool in 1924
Some higher education institutions were also established. In 1898 the Dutch East Indies government established a school to train medical doctors, named the School tot Opleiding van Inlandsche Artsen (STOVIA). Many STOVIA graduates later played important roles in Indonesia's national movement toward independence, as well as in developing medical education in Indonesia; one example is Dr Wahidin Soedirohoesodo, who established the Budi Utomo political society. De Technische Hoogeschool te Bandung was established in 1920 by the Dutch colonial administration to meet the colony's need for technical expertise. One Technische Hoogeschool graduate was Sukarno, who would later lead the Indonesian National Revolution. In 1924, the colonial government again decided to open a new tertiary-level educational facility, the Rechts Hogeschool (RHS), to train civilian officers and servants. In 1927, STOVIA's status was changed to that of a full tertiary-level institution and its name was changed to Geneeskundige Hogeschool (GHS). The GHS occupied the same main building and used the same teaching hospital as the current Faculty of Medicine of the University of Indonesia. The old links between the Netherlands and Indonesia are still clearly visible in such technological areas as irrigation design; to this day, the ideas of Dutch colonial irrigation engineers continue to exert a strong influence over Indonesian design practices. Moreover, the two highest internationally ranking universities of Indonesia, the University of Indonesia (est. 1898) and the Bandung Institute of Technology (est. 1920), were both founded during the colonial era.
Education reforms, and modest political reform, resulted in a small elite of highly educated indigenous Indonesians, who promoted the idea of an independent and unified "Indonesia" that would bring together the disparate indigenous groups of the Dutch East Indies. In a period termed the Indonesian National Revival, the first half of the 20th century saw the nationalist movement develop strongly, but also face Dutch oppression.
The economic history of the colony was closely related to the economic health of the mother country.
Despite increasing returns from the Dutch system of land tax, Dutch finances had been severely affected by the cost of the Java War and the Padri War, and the Dutch loss of Belgium in 1830 brought the Netherlands to the brink of bankruptcy. In 1830, a new Governor-General, Johannes van den Bosch, was appointed to make the Indies pay their way through Dutch exploitation of their resources. With the Dutch achieving political domination throughout Java for the first time in 1830, it was possible to introduce an agricultural policy of government-controlled forced cultivation. Termed cultuurstelsel (cultivation system) in Dutch and tanam paksa (forced planting) in Indonesian, the policy required farmers to deliver, as a form of tax, fixed amounts of specified crops, such as sugar or coffee.
Much of Java became a Dutch plantation, and revenue rose continually through the 19th century; it was reinvested in the Netherlands to save it from bankruptcy. Between 1830 and 1870, 840 million guilders (about 8 billion euros in 2018 terms) were taken from the East Indies, on average making up a third of the annual Dutch government budget.
The Cultivation System, however, brought much economic hardship to Javanese peasants, who suffered famine and epidemics in the 1840s.
Critical public opinion in the Netherlands led to much of the Cultivation System's excesses being eliminated under the agrarian reforms of the "Liberal Period". Dutch private capital flowed in after 1850, especially into tin mining and plantation estate agriculture. The Billiton Company's tin mines off the eastern Sumatra coast were financed by a syndicate of Dutch entrepreneurs, including the younger brother of King William III. Mining began in 1860. In 1863 Jacob Nienhuys obtained a concession from the Sultanate of Deli for a large tobacco estate (the Deli Company).
From 1870, the Indies were opened up to private enterprise and Dutch businessmen set up large, profitable plantations. Sugar production doubled between 1870 and 1885; new crops such as tea and cinchona flourished, and rubber was introduced, leading to dramatic increases in Dutch profits. Changes were not limited to Java or to agriculture; oil from Sumatra and Kalimantan became a valuable resource for industrialising Europe. Dutch commercial interests expanded off Java to the outer islands, with increasingly more territory coming under direct Dutch control or dominance in the latter half of the 19th century.
However, the resulting scarcity of land for rice production, combined with dramatically increasing populations, especially in Java, led to further hardships.
The colonial exploitation of Indonesia's wealth contributed to the industrialisation of the Netherlands, while simultaneously laying the foundation for the industrialisation of Indonesia. The Dutch introduced coffee, tea, cacao, tobacco and rubber and large expanses of Java became plantations cultivated by Javanese peasants, collected by Chinese intermediaries, and sold on overseas markets by European merchants.
In the late 19th century economic growth was based on heavy world demand for tea, coffee, and cinchona. The government invested heavily in a railroad network (240 km or 150 mi long in 1873, 1,900 km or 1,200 mi in 1900), as well as telegraph lines, and entrepreneurs opened banks, shops and newspapers. The Dutch East Indies produced most of the world's supply of quinine and pepper, over a third of its rubber, a quarter of its coconut products, and a fifth of its tea, sugar, coffee, and oil. The profit from the Dutch East Indies made the Netherlands one of the world's most significant colonial powers.
The Koninklijke Paketvaart-Maatschappij shipping line supported the unification of the colonial economy and brought inter-island shipping through Batavia, rather than through Singapore, thus focusing more economic activity on Java.
Workers pose at the site of a railway tunnel under construction in the mountains, 1910
The worldwide recession of the late 1880s and early 1890s saw the commodity prices on which the colony depended collapse. Journalists and civil servants observed that the majority of the Indies population were no better off than under the previous regulated Cultivation System economy, and tens of thousands starved.
Commodity prices recovered from the recession, leading to increased investment in the colony. The sugar, tin, copra and coffee trade on which the colony had been built thrived, and rubber, tobacco, tea and oil also became principal exports. Political reform increased the autonomy of the local colonial administration, moving away from central control from the Netherlands, whilst power was also devolved from the central Batavia government to more localised governing units.
The world economy recovered in the late 1890s and prosperity returned. Foreign investment, especially by the British, was encouraged. By 1900, foreign-held assets in the Netherlands Indies totalled about 750 million guilders ($300 million), mostly in Java.
After 1900 upgrading the infrastructure of ports and roads was a high priority for the Dutch, with the goal of modernising the economy, facilitating commerce, and speeding up military movements. By 1950 Dutch engineers had built and upgraded a road network with 12,000 km of asphalted surface, 41,000 km of metalled road area and 16,000 km of gravel surfaces.
In addition the Dutch built 7,500 kilometres (4,700 mi) of railways, bridges, irrigation systems covering 1.4 million hectares (5,400 sq mi) of rice fields, several harbours, and 140 public drinking water systems. Wim Ravesteijn has said that, "With these public works, Dutch engineers constructed the material base of the colonial and postcolonial Indonesian state."
Perhimpunan Pelajar-Pelajar Indonesia (Indonesian Students Union) delegates at the Youth Pledge, an important event at which Indonesian was chosen as the national language, 1928
Across the archipelago, hundreds of native languages are used, and Malay or Portuguese Creole, the existing languages of trade, were adopted. Prior to 1870, when Dutch colonial influence was largely restricted to Java, Malay was used in government schools and training programs, so that graduates could communicate with groups from other regions who immigrated to Java. The colonial government sought to standardise Malay based on the version from Riau and Malacca, and dictionaries were commissioned for governmental communication and for schools for indigenous peoples. In the early 20th century, Indonesia's independence leaders adopted a form of Malay from Riau and called it Indonesian. In the latter half of the 19th century, the rest of the archipelago, in which hundreds of language groups were used, was brought under Dutch control. In extending the native education program to these areas, the government stipulated this "standard Malay" as the language of the colony.
Dutch was not made the official language of the colony and was not widely used by the indigenous Indonesian population.
The majority of legally acknowledged Dutchmen were bilingual Indo-Eurasians.
Dutch was used by only a limited educated elite, and in 1942, around two percent of the total population in the Dutch East Indies spoke Dutch, including over 1 million indigenous Indonesians.
A number of Dutch loan words are used in present-day Indonesian, particularly technical terms (see List of Dutch loan words in Indonesian). These words generally had no alternative in Malay and were adopted into the Indonesian vocabulary, giving a linguistic insight into which concepts are part of the Dutch colonial heritage. Hendrik Maier of the University of California says that about a fifth of the contemporary Indonesian language can be traced to Dutch.
Most Dutch literature was written by Dutch and Indo-European authors. However, in the first half of the 20th century under the Ethical Policy, indigenous Indonesian authors and intellectuals came to the Netherlands to study and work. They wrote Dutch-language literary works and published literature in literary reviews such as Het Getij, De Gemeenschap, Links Richten, and Forum. By exploring new literary themes and focusing on indigenous protagonists, they drew attention to indigenous culture and the indigenous plight. Examples include the Javanese prince and poet Noto Soeroto, a writer and journalist, and the Dutch-language writings of Soewarsih Djojopoespito, Chairil Anwar, Sutan Sjahrir, and Sukarno.
Much of the postcolonial discourse in Dutch Indies literature has been written by Indo-European authors, led by the "avant garde visionary" Tjalie Robinson, who is the best-read Dutch author in contemporary Indonesia, and by second-generation Indo-European immigrants such as Marion Bloem.
The natural beauty of the East Indies inspired the work of artists and painters, who mostly captured romantic scenes of the colonial Indies. The term Mooi Indie (Dutch for "Beautiful Indies") was originally coined as the title of 11 reproductions of Du Chattel's watercolour paintings depicting scenes of the East Indies, published in Amsterdam in 1930. The term became famous in 1939 after S. Sudjojono used it to mock painters who merely depicted all the pretty things about the Indies. Mooi Indie later came to identify the genre of painting of the colonial East Indies that took romantic depictions of the Indies as its main themes: mostly natural scenes of mountains, volcanoes, rice paddies, river valleys and villages, with scenes of native servants, nobles, and sometimes bare-chested native women. Some of the notable Mooi Indie painters are European artists F.J. du Chattel, Manus Bauer, Nieuwkamp, Isaac Israel, PAJ Moojen, Carel Dake and Romualdo Locatelli; East Indies-born Dutch painters Henry van Velthuijzen, Charles Sayers, Ernest Dezentje, Leonard Eland and Jan Frank; native painters Raden Saleh, Mas Pirngadi, Abdullah Surisubroto, Wakidi, Basuki Abdullah, Mas Soeryo Soebanto and Henk Ngantunk; and Chinese painters Lee Man Fong, Oei Tiang Oen and Siauw Tik Kwie. These painters usually exhibited their works in art galleries such as the Bataviasche Kunstkringgebouw, Theosofie Vereeniging, Kunstzaal Kolff & Co and Hotel Des Indes.
Theatre and film
A cinema in Batu
A total of 112 fictional films are known to have been produced in the Dutch East Indies between 1926 and the colony's dissolution in 1949. The earliest motion pictures, imported from abroad, were shown in late 1900, and by the early 1920s imported serials and fictional films were being shown, often with localised names. Dutch companies were also producing documentary films about the Indies to be shown in the Netherlands. The first locally produced film, Loetoeng Kasaroeng, was directed by L. Heuveldorp and released on 31 December 1926. Between 1926 and 1933 numerous other local productions were released. During the mid-1930s, production dropped as a result of the Great Depression. The rate of production declined again after the Japanese occupation beginning in early 1942, which closed all but one film studio. The majority of films produced during the occupation were Japanese propaganda shorts. Following the Proclamation of Indonesian Independence in 1945 and during the ensuing revolution, several films were made, by both pro-Dutch and pro-Indonesian backers.
Theatre plays by playwrights such as Victor Ido (1869–1948) were performed at the Schouwburg Weltevreden, now known as Gedung Kesenian Jakarta. A less elite form of theatre, popular with both European and indigenous people, was the travelling Indo theatre show known as Komedie Stamboel, made popular by Auguste Mahieu (1865–1903).
The rich nature and culture of the Dutch East Indies attracted European intellectuals, scientists and researchers; notable scientists who conducted much of their important research in the archipelago include Teijsmann. Many important art, culture and science institutions were established in the Dutch East Indies. For example, the Bataviaasch Genootschap van Kunsten en Wetenschappen (Royal Batavian Society of Arts and Sciences), the predecessor of the National Museum of Indonesia, was established in 1778 with the aim of promoting research and publishing findings in the arts and sciences, especially history. The Bogor Botanical Gardens, with the Herbarium Bogoriense and Museum Zoologicum Bogoriense, was a major centre for botanical research, established in 1817 with the aim of studying the flora and fauna of the archipelago.
With growing interest in scientific research, the government of the Dutch East Indies established the Natuurwetenschappelijke Raad voor Nederlandsch-Indië (Scientific Council of the Dutch East Indies) in 1928. It operated as the country's main research organisation until the outbreak of World War II in the Asia Pacific in 1942. In 1948 the institute was renamed the Organisatie voor Natuurwetenschappelijk Onderzoek (Organisation for Scientific Research). This organisation was the predecessor of the current Indonesian Institute of Sciences.
Dutch family enjoying a large Rijsttafel
Dutch colonial families, through their domestic servants and cooks, were exposed to Indonesian cuisine, and as a result they developed a taste for native tropical spices and dishes. A notable Dutch East Indies colonial dish is rijsttafel, the rice table, which consists of 7 to 40 popular dishes from across the colony. More an extravagant banquet than a dish, the rice table was introduced by the Dutch colonials not only so they could enjoy a wide array of dishes at a single sitting but also to impress visitors with the exotic abundance of their colony.
Through colonialism the Dutch introduced European dishes such as bread and barbecued steak. As a producer of cash crops, coffee and tea were also popular in the colonial East Indies. Bread, butter, sandwiches filled with ham, cheese or fruit jam, poffertjes and Dutch cheeses were commonly consumed by the colonial Dutch and Indos during the colonial era. Some of the native upper-class ningrat (nobles) and a few educated natives were exposed to European cuisine, which was held in high esteem as the cuisine of the upper-class elite of Dutch East Indies society. This led to the adoption and fusion of European cuisine into Indonesian cuisine. Some dishes created during the colonial era are Dutch influenced: they include selat solo (Solo salad), bistik jawa (Javanese beef steak), semur (from the Dutch smoor), sayur kacang merah (brenebon) and sop buntut. Cakes and cookies can also trace their origins to Dutch influences, such as kue bolu (tart), pandan cake, lapis legit (spekkoek), spiku (lapis Surabaya), klappertaart (coconut tart), and kaasstengels (cheese cookies). Kue cubit, commonly found in front of schools and marketplaces, are believed to be derived from poffertjes.
The 16th- and 17th-century arrival of European powers in Indonesia introduced masonry construction to Indonesia, where previously timber and its by-products had been almost exclusively used. In the 17th and 18th centuries, Batavia was a fortified brick and masonry city. For almost two centuries, the colonialists did little to adapt their European architectural habits to the tropical climate. They built row houses which were poorly ventilated and had small windows, which was thought of as protection against tropical diseases carried in tropical air. Years later the Dutch learnt to adapt their architectural styles to local building features (long eaves, verandahs, large windows and ventilation openings), and the 18th-century Dutch Indies country houses were among the first colonial buildings to incorporate Indonesian architectural elements and adapt to the climate, in a form known as the Indies Style.
From the end of the 19th century, significant improvements to technology, communications and transportation brought new wealth to Java. Modernistic buildings, including train stations, business hotels, factories and office blocks, hospitals and education institutions, were influenced by international styles. The early 20th-century trend was for modernist influences, such as Art Deco, to be expressed in essentially European buildings with Indonesian trim. Practical responses to the environment carried over from the earlier Indies Style included overhanging eaves, larger windows and ventilation in the walls, which gave birth to the New Indies Style.
The largest stock of colonial-era buildings is in the large cities of Java, such as Bandung, Jakarta, and Surabaya. Notable architects and planners include Albert Aalbers, Thomas Karsten, Henri Maclaine Pont, J. Gerber and C.P.W. Schoemaker.
In the first three decades of the 20th century, the Department of Public Works funded major public buildings and introduced a town planning program under which the main towns and cities in Java and Sumatra were rebuilt and extended.
A lack of development during the Great Depression, the turmoil of the Second World War and Indonesia's independence struggle of the 1940s, and economic stagnation during the politically turbulent 1950s and 1960s meant that much colonial architecture has been preserved through to recent decades. Colonial homes were almost always the preserve of the wealthy Dutch, Indonesian and Chinese elites; however, the styles were often rich and creative combinations of two cultures, so much so that the homes remain sought after into the 21st century. Native architecture was arguably more influenced by the new European ideas than colonial architecture was influenced by Indonesian styles, and these Western elements continue to be a dominant influence on Indonesia's built environment today.
Javanese nobles adopted and mixed some aspects of European fashion, such as this couple in 1890.
Within the colony of the Dutch East Indies, fashion played an important role in defining one's status and social class. The European colonials wore European fashion straight out of the Netherlands, or even Paris, while the natives wore their traditional clothing, which was distinct in every region. As the years progressed and Dutch influence became stronger, many natives began mixing European styles into their traditional dress. High-ranking natives within the colony, as well as nobility, would wear European-style suits with their batik sarongs for special occasions and even for everyday use. More and more native Indonesians began to dress in a more European manner, which carried with it the idea that those who wore European clothing were more progressive and open towards European society and the etiquette that came with it. European influence thus gained ever greater precedence among native Indonesians. This probably stems from the fact that many natives were treated better if they wore European clothing: their European counterparts acknowledged them, and that in turn was most likely a catalyst for the adoption of Western clothing into traditional Indonesian dress.
Dutch colonial couple in the early 20th century wearing native batik
Fashion influence between colonials and natives was a reciprocal phenomenon. Just as the Europeans influenced the natives, the natives too influenced the European colonials. For example, thick European fabrics were considered too hot to wear in a tropical climate. Thus, light clothing of thin kebaya fabric and the comfortable, easy-to-wear batik sarong were considered quite suitable for everyday wear in the hot and humid climate of the East Indies.
Later in the history of the Dutch East Indies, as a new wave of Europeans was brought into the colony, many adopted Indonesian styles; some even went so far as to wear the traditional Javanese kebaya at home. Batik was also a big influence on the Dutch: the technique was so fascinating to them that they took it to their colonies in Africa, where it was adopted with African patterns.
For the most part, Europeans in the Dutch East Indies stuck to traditional European styles of dressing. Fashion trends from Paris were still highly regarded and considered the epitome of style. Women wore dresses and skirts, and men wore pants and shirts.
Colonial heritage in the Netherlands
Dutch imperial imagery representing the Dutch East Indies (1916). The text reads "Our most precious jewel."
Universities such as the Royal Leiden University, founded in the 16th century, have developed into leading knowledge centres for Southeast Asian and Indonesian studies. Leiden University has produced academics such as the colonial adviser Christiaan Snouck Hurgronje, who specialised in native oriental (Indonesian) affairs, and it still has academics who specialise in Indonesian languages and cultures. Leiden University, and in particular KITLV, are educational and scientific institutions that to this day share both an intellectual and historical interest in Indonesian studies. Other scientific institutions in the Netherlands include the Amsterdam Tropenmuseum, an anthropological museum with massive collections of Indonesian art, culture, ethnography and anthropology.
Dutch newsreel dated 1927 showing a Dutch East Indian fair in the Netherlands, featuring Indo and Indigenous people from the Dutch East Indies performing traditional dance and music in traditional attire
Many surviving colonial families and their descendants who moved back to the Netherlands after independence tended to look back on the colonial era with a sense of the power and prestige they had enjoyed in the colony, reflected in such items as the 1970s book Tempo Doeloe (Old Times) by author Rob Nieuwenhuys, and other books and materials that became quite common in the 1970s and 1980s. Moreover, since the 18th century Dutch literature has had a large number of established authors, such as Louis Couperus, the writer of "The Hidden Force", taking the colonial era as an important source of inspiration. In fact, one of the great masterpieces of Dutch literature is the book "Max Havelaar", written by Multatuli.
The majority of Dutchmen who repatriated to the Netherlands during and after the Indonesian revolution are Indo (Eurasian), native to the islands of the Dutch East Indies. This relatively large Eurasian population had developed over a period of 400 years and was classified by colonial law as belonging to the European legal community. They are referred to as Indo (short for Indo-European). Of the 296,200 so-called Dutch 'repatriants', only 92,200 were expatriate Dutchmen born in the Netherlands. Including their second-generation descendants, they are currently the largest foreign-born group in the Netherlands. In 2008, the Dutch Census Bureau for Statistics (CBS) registered 387,000 first- and second-generation Indos living in the Netherlands.
Although considered fully assimilated into Dutch society, as the main ethnic minority in the Netherlands these 'repatriants' have played a pivotal role in introducing elements of Indonesian culture into Dutch mainstream culture. Practically every town in the Netherlands has a 'toko' (Dutch Indonesian shop) or Indonesian restaurant, and many 'Pasar Malam' (night market in Malay/Indonesian) fairs are organised throughout the year.
- ^ Dick, Howard W. (2002). Surabaya City Of Work: A Socioeconomic History, 1900–2000 (Ohio RIS Southeast Asia Series): Howard Dick: 9780896802216: Amazon.com: Books. ISBN 978-0896802216.
- ^ "Page:The New International Encyclopædia 1st ed. v. 18.djvu/816 - Wikisource, the free online library". en.wikisource.org. Archived from the original on 24 December 2018. Retrieved 27 December 2018.
- ^ "Archived copy". Archived from the original on 27 May 2015. Retrieved 27 May 2015.
- ^ Hart, Jonathan (26 February 2008). Empires and Colonies. ISBN 9780745626130. Archived from the original on 18 March 2015. Retrieved 19 July 2015.
- ^ Booth, Anne, et al. Indonesian Economic History in the Dutch Colonial Era (1990), Ch 8
- ^ R.B. Cribb and A. Kahin, p. 118
- ^ Robert Elson, The idea of Indonesia: A history (2008) pp 1-12
- ^ Ricklefs, M C (1991). A History of Modern Indonesian since c.1300 (Second ed.). Houndmills, Baingstoke, Hampshire and London: The Macmillan Press Limited. pp. 271, 297. ISBN 0-333-57690-X.
- ^ Dagh-register gehouden int Casteel Batavia vant passerende daer ter plaetse als over geheel Nederlandts-India anno 1624–1629 [The official register at Castle Batavia, of the census of the Dutch East Indies]. VOC. 1624.
- ^ Gouda, Frances. Dutch Culture Overseas: Colonial Practice in the Netherlands Indies, 1900-1942 (1996) online Archived 9 November 2017 at the Wayback Machine
- ^ Taylor (2003)
- ^ a b c Ricklefs (1991), p. 27
- ^ a b Vickers (2005), p. 10
- ^ Ricklefs (1991), p. 110; Vickers (2005), p. 10
- ^ a b c d e f g h i j k *Witton, Patrick (2003). Indonesia. Melbourne: Lonely Planet. pp. 23–25. ISBN 1-74059-154-2.
- ^ Luc Nagtegaal, Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java, 1680–1743 (1996)
- ^ Schwarz, A. (1994). A Nation in Waiting: Indonesia in the 1990s. Westview Press. pp. 3–4. ISBN 1-86373-635-2.
- ^ Kumar, Ann (1997). Java. Hong Kong: Periplus Editions. p. 44. ISBN 962-593-244-5.
- ^ Ricklefs (1991), pp. 111–114
- ^ a b c Ricklefs (1991), p. 131
- ^ Vickers (2005), p. 10; Ricklefs (1991), p. 131
- ^ Ricklefs (1991), p. 142
- ^ a b c d e f Friend (2003), p. 21
- ^ Ricklefs (1991), pp. 138-139
- ^ Vickers (2005), p. 13
- ^ a b c Vickers (2005), p. 14
- ^ a b c Reid (1974), p. 1.
- ^ Jack Ford, "The Forlorn Ally—The Netherlands East Indies in 1942," War & Society (1993) 11#1 pp: 105-127.
- ^ Herman Theodore Bussemaker, "Paradise in Peril: The Netherlands, Great Britain and the Defence of the Netherlands East Indies, 1940–41," Journal of Southeast Asian Studies (2000) 31#1 pp: 115-136.
- ^ Morison (1948), p. 191
- ^ Ricklefs (1991), p. 195
- ^ L., Klemen, 1999–2000, The Netherlands East Indies 1941–42, "Forgotten Campaign: The Dutch East Indies Campaign 1941–1942 Archived 26 July 2011 at the Wayback Machine".
- ^ Shigeru Satō: War, nationalism, and peasants: Java under the Japanese occupation, 1942–1945 (1997), p. 43
- ^ Encyclopædia Britannica Online (2007). "Indonesia :: Japanese occupation". Archived from the original on 9 February 2007. Retrieved 21 January 2007. Though initially welcomed as liberators, the Japanese gradually established themselves as harsh overlords. Their policies fluctuated according to the exigencies of the war, but in general their primary object was to make the Indies serve Japanese war needs.
- ^ Gert Oostindie and Bert Paasman (1998). "Dutch Attitudes towards Colonial Empires, Indigenous Cultures, and Slaves" (PDF). Eighteenth-Century Studies. 31 (3): 349–355. doi:10.1353/ecs.1998.0021. hdl:20.500.11755/c467167b-2084-413c-a3c7-f390f9b3a092. S2CID 161921454.; Ricklefs, M.C. (1993). History of Modern Indonesia Since c.1300, second edition. London: MacMillan. ISBN 0-333-57689-6.
- ^ Vickers (2005), page 85
- ^ Ricklefs (1991), p. 199
- ^ Cited in: Dower, John W. War Without Mercy: Race and Power in the Pacific War (1986; Pantheon; ISBN 0-394-75172-8)
- ^ "How the Netherlands Hid Its War Crimes for Decades". 31 August 2020.
- ^ Ricklefs, M C (1991). A History of Modern Indonesia since c.1300 (Second ed.). Houndmills, Basingstoke, Hampshire and London: The Macmillan Press Limited. pp. 271, 297. ISBN 0-333-57690-X.
- ^ R.B. Cribb and A. Kahin, p. 108
- ^ R.B. Cribb and A. Kahin, p. 140
- ^ R.B. Cribb and A. Kahin, pp. 87, 295
- ^ Vickers (2005), p. 15
- ^ Cribb, R.B., Kahin, pp. 140 & 405
- ^ Harry J. Benda, S.L. van der Wal, "De Volksraad en de staatkundige ontwikkeling van Nederlandsch-Indië: The Peoples Council and the political development of the Netherlands-Indies." (With an introduction and survey of the documents in English). (Publisher: J.B. Wolters, Leiden, 1965.)
- ^ Note: The European legal class was not solely based on race restrictions and included Dutch people, other Europeans, but also native Indo-Europeans, Indo-Chinese and indigenous people.
- ^ a b "Virtueel Indi". Archived from the original on 31 March 2012. Retrieved 25 August 2011.
- ^ Note: Adat law communities were formally established throughout the archipelago e.g. Minangkabau. See: Cribb, R.B., Kahin, p. 140
- ^ http://alterisk.ru/lj/IndonesiaLegalOverview.pdf [permanent dead link]
- ^ Note: The female 'Boeloe' prison in Semarang, which housed both European and indigenous women, had separate sleeping rooms with cots and mosquito nets for elite indigenous women and women in the European legal class. Sleeping on the floor like the female peasantry was considered an intolerable aggravation of the legal sanction. See: Baudet, H., Brugmans I.J. Balans van beleid. Terugblik op de laatste halve eeuw van Nederlands-Indië. (Publisher: Van Gorcum, Assen, 1984)
- ^ Baudet, H., Brugmans I.J. Balans van beleid. Terugblik op de laatste halve eeuw van Nederlands-Indië. (Publisher: Van Gorcum, Assen, 1984) P.76, 121, 130
- ^ "Archived copy". Archived from the original on 19 January 2012. Retrieved 19 January 2012., sourced from Cribb, R. B (2010), Digital atlas of indonesian history, Nias, ISBN 978-87-91114-66-3 from the earlier volume Cribb, R. B; Nordic Institute of Asian Studies (2000), Historical atlas of Indonesia, Curzon ; Singapore : New Asian Library, ISBN 978-0-7007-0985-4
- ^ Blakely, Allison (2001). Blacks in the Dutch World: The Evolution of Racial Imagery in a Modern Society. Indiana University Press. p. 15 ISBN 0-253-31191-8
- ^ Cribb, R.B. (2004) ‘Historical dictionary of Indonesia.’ Scarecrow Press, Lanham, USA.ISBN 0 8108 4935 6, p. 221 [https://web.archive.org/web/20160423234130/https://books.google.com/books?id=SawyrExg75cC&dq=number+of+javanese+in+KNIL&source=gbs_navlinks_s Archived 23 April 2016 at the Wayback Machine]; [Note: The KNIL statistics of 1939 show at least 13,500 Javanese and Sundanese under arms compared to 4,000 Ambonese soldiers]. Source: Netherlands Ministry of Defense Archived 1 October 2011 at the Wayback Machine.
- ^ Nicholas Tarling, ed. (1992). The Cambridge History of Southeast Asia: Volume 2, the Nineteenth and Twentieth Centuries. Cambridge U.P. p. 104. ISBN 9780521355063. Archived from the original on 23 April 2016. Retrieved 19 July 2015.
- ^ Groen, Petra (2012). "Colonial warfare and military ethics in the Netherlands East Indies, 1816–1941". Journal of Genocide Research. 14 (3): 277–296. doi:10.1080/14623528.2012.719365. S2CID 145012445.
- ^ Willems, Wim ‘Sporen van een Indisch verleden (1600-1942).’ (COMT, Leiden, 1994). Chapter I, P.32-33 ISBN 90-71042-44-8
- ^ Willems, Wim ‘Sporen van een Indisch verleden (1600-1942).’ (COMT, Leiden, 1994). Chapter I, P.32-36 ISBN 90-71042-44-8
- ^ John Sydenham Furnivall, Colonial Policy and Practice: A Comparative Study of Burma and Netherlands India (Cambridge: Cambridge University Press, 1948), 236.
- ^ Klemen, L (1999–2000). "Dutch East Indies 1941-1942". Dutch East Indies Campaign website. Archived from the original on 26 July 2011.
- ^ "Last Post – the End of Empire in the Far East", John Keay ISBN 0-7195-5589-2
- ^ "plechtigheden in Djakarta bij de opheffing van het KNIL Polygoon 1950 3 min. 20;embed=1 Video footage showing the official ceremony disbanding the KNIL". Archived from the original on 27 September 2011. Retrieved 21 April 2019.
- ^ John Keegan, page 314 "World Armies", ISBN 0-333-17236-1
- ^ Furnivall, J.S. (1967) . Netherlands India: a Study of Plural Economy. Cambridge: Cambridge University Press. pp. 9. ISBN 0-521-54262-6. Cited in Vicker, Adrian (2005). A History of Modern Indonesia. Cambridge University Press. pp. 9. ISBN 0-521-54262-6.
- ^ Beck, Sanderson, (2008) South Asia, 1800-1950 - World Peace Communications ISBN 0-9792532-3-3, ISBN 978-0-9792532-3-2 - By 1930 more European women had arrived in the colony, and they made up 113,000 out of the 240,000 Europeans.
- ^ Van Nimwegen, Nico De demografische geschiedenis van Indische Nederlanders, Report no.64 (Publisher: NIDI, The Hague, 2002) P.36 ISBN 9789070990923
- ^ Van Nimwegen, Nico (2002). "64" (PDF). De demografische geschiedenis van Indische Nederlanders [The demography of the Dutch in the East Indies]. The Hague: NIDI. p. 35. ISBN 9789070990923. Archived from the original (PDF) on 23 July 2011.
- ^ Vickers (2005), p. 9
- ^ Reid (1974), p. 170, 171
- ^ Cornelis, Willem, Jan (2008). De Privaatrechterlijke Toestand: Der Vreemde Oosterlingen Op Java En Madoera ( The private law situation: Java and Madoera) (PDF). Bibiliobazaar. ISBN 978-0-559-23498-9. Archived (PDF) from the original on 24 July 2011. Retrieved 17 March 2011.
- ^ Cribb, Robert, 'Development policy in the early 20th century [Indonesia]' in Jan-Paul Dirkse, Frans Hüsken and Mario Rutten, eds, Development and social welfare: Indonesia’s experiences under the New Order (1993) Archived 23 June 2018 at the Wayback Machine
- ^ a b c Taylor (2003), p. 286
- ^ Taylor (2003), p. 287
- ^ a b "TU Delft Colonial influence remains strong in Indonesia".
- ^ Note: In 2010, according to University Ranking by Academic Performance (URAP), Universitas Indonesia was the best university in Indonesia.
- ^ "URAP - University Ranking by Academic Performance". Archived from the original on 6 October 2014. Retrieved 18 April 2012.
- ^ Dick, et al. (2002)
- ^ Ricklefs (1991), p 119
- ^ a b Taylor (2003), p. 240
- ^ "Waarde van de gulden / euro". www.iisg.nl.
- ^ "Promotie Huib Ekkelenkamp op 9 april 2019 TU Delft". KIVI.
- ^ "Indonesia's Infrastructure Problems: A Legacy From Dutch Colonialism". The Jakarta Globe. Archived from the original on 24 November 2012.
- ^ Dick, et al. (2002), p. 95
- ^ Vickers (2005), p. 20
- ^ Vickers (2005), p. 16
- ^ Vickers (2005), p. 18
- ^ Dick, et al. (2002), p. 97
- ^ ten Horn-van Nispen, Marie-Louise; Ravesteijn, Wim (2009). "The road to an empire: Organisation and technology of road construction in the Dutch East Indies, 1800–1940". Journal of Transport History. 10 (1): 40–57. doi:10.7227/TJTH.30.1.5. S2CID 110005354.
- ^ Ravesteijn, Wim (2007). "Between Globalization and Localization: The Case of Dutch Civil Engineering in Indonesia, 1800–1950". Comparative Technology Transfer and Society. 5 (1): 32–64 [quote p. 32]. doi:10.1353/ctt.2007.0017.
- ^ Taylor (2003), p. 288
- ^ Sneddon, James (2003). The Indonesian language: its history and role in modern society. (UNSW Press, Sydney, 2003) pp. 87–89 Archived 23 April 2016 at the Wayback Machine
- ^ Taylor (2003), p. 289
- ^ Groeneboer, Kees. Weg tot het Westen (Road to the West); Corn, Charles (1999) [First published 1998]. The Scents of Eden: A History of the Spice Trade. Kodansha America. p. 203. ISBN 1-56836-249-8. The Portuguese language rolled more easily off Malay tongues than did Dutch or English. Ironically, if any European tongue was the language of merchant intercourse, even in Batavia, it was Portuguese.
- ^ Meijer, Hans (2004) In Indie geworteld. Publisher: Bert bakker. ISBN 90-351-2617-3. P.33, 35, 36, 76, 77, 371, 389
- ^ Groeneboer, K (1993) Weg tot het westen. Het Nederlands voor Indie 1600–1950. Publisher: KITLEV, Leiden.
- ^ Maier, H.M.J. (8 February 2005). "A Hidden Language – Dutch in Indonesia". Institute of European Studies, University of California. Archived from the original on 19 January 2012. Retrieved 16 August 2010.
- ^ Nieuwenhuys (1999) pp. 126, 191, 225.
- ^ Note: In December 1958, the American Time magazine praised the translation of Maria Dermoût's The Ten Thousand Things and named it one of the best books of the year, alongside several other iconic literary works of 1958: 'Breakfast at Tiffany's' by Truman Capote, 'Doctor Zhivago' by Pasternak and 'Lolita' by Nabokov. See: Official Maria Dermout Website. Archived 2 April 2012 at the Wayback Machine
- ^ 'International Conference on Colonial and Post-Colonial Connections in Dutch Literature.' University of California, Berkeley, Website, 2011. Archived 13 November 2011 at the Wayback Machine Retrieved: 24 September 2011
- ^ Nieuwenhuys, Rob. ‘Oost-Indische spiegel. Wat Nederlandse schrijvers en dichters over Indonesië hebben geschreven vanaf de eerste jaren der Compagnie tot op heden.’, (Publisher: Querido, Amsterdam, 1978) p.555 Archived 28 June 2012 at the Wayback Machine
- ^ "Error". Archived from the original on 1 November 2011. Retrieved 27 September 2011.
- ^ Biran 2009, pp. 367–370.
- ^ Heider (1991), p. 15
- ^ Heider (1991), p. 14
- ^ "NATUURWETENSCHAPPELIJKE raad voor Nederlandsch-Indie te Batavia". opac.perpusnas.go.id. Retrieved 12 March 2020.
- ^ "Selamat Ulang Tahun, LIPI!". lipi.go.id (in Indonesian). Retrieved 12 March 2020.
- ^ Geotravel Research Center. "The rise and fall of Indonesia's rice table". Archived from the original on 7 October 2011. Retrieved 23 September 2011.
- ^ a b Karin Engelbrecht. "Dutch Food Influences - History of Dutch Food - Culinary Influences on the Dutch Kitchen". About. Archived from the original on 5 October 2011. Retrieved 22 September 2011.
- ^ Schoppert (1997), pp. 38–39
- ^ a b Dawson, B., Gillow, J., The Traditional Architecture of Indonesia, p. 8, 1994 Thames and Hudson Ltd, London, ISBN 0-500-34132-X
- ^ W. Wangsadinata and T.K. Djajasudarma (1995). "Architectural Design Consideration for Modern Buildings in Indonesia" (PDF). INDOBEX Conf. on Building Construction Technology for the Future: Construction Technology for Highrises & Intelligence Buildings. Jakarta. Archived from the original (PDF) on 14 June 2007. Retrieved 18 January 2007.
- ^ a b Schoppert (1997), pp. 72–77
- ^ Schoppert (1997), pp. 104–105
- ^ Schoppert (1997), pp. 102–105
- ^ Vickers (2005), p. 24
- ^ Schoppert (1997), p. 105
- ^ Pentasari, R (2007). Chic in kebaya: catatan inspiratif untuk tampil anggun berkebaya. Jakarta: Esensi.
- ^ Legêne, S., & Dijk, J. V (2011). The Netherlands East Indies at the Tropenmuseum: a colonial history. Amsterdam: KIT. p. 146.
- ^ "Indonesian Batik - intangible heritage - Culture Sector - UNESCO". www.unesco.org. Archived from the original on 3 May 2017. Retrieved 19 April 2017.
- ^ To this day the Dutch Royal family is in fact the wealthiest family of the Netherlands; one of the foundations of its wealth was the colonial trade. "In Pictures: The World's Richest Royals". Forbes.com. 30 August 2007. Archived from the original on 6 October 2008. Retrieved 5 March 2010.
- ^ Some of the university faculties still include: Indonesian Languages and Cultures; Southeast Asia and Oceania Languages and Cultures; Cultural Anthropology
- ^ Note: 1927 garden party, at the country estate Arendsdorp on the Wassenaarse weg near The Hague, for the benefit of the victims of the storm disaster of 2 June 1927 in the Netherlands. The market is opened by the minister of Colonies dr. J.C. Koningsberger.
- ^ Nieuwenhuys, Robert, (1973) Tempo doeloe : fotografische documenten uit het oude Indie, 1870–1914 [door] E. Breton de Nijs (pseud. of Robert Nieuwenhuys) Amsterdam : Querido, ISBN 90-214-1103-2 – noting that the era wasn't fixed by any dates – noting the use of Tio, Tek Hong,(2006) Keadaan Jakarta tempo doeloe : sebuah kenangan 1882–1959 Depok : Masup Jakarta ISBN 979-25-7291-0
- ^ Nieuwenhuys (1999)
- ^ Etty, Elsbeth literary editor for the NRC handelsblad "Novels: Coming to terms with Calvinism, colonies and the war." (NRC Handelsblad. July 1998) Archived 20 July 2011 at the Wayback Machine
- ^ Bosma U., Raben R. Being "Dutch" in the Indies: a history of creolisation and empire, 1500–1920 (University of Michigan, NUS Press, 2008), ISBN 9971-69-373-9 Archived 23 April 2016 at the Wayback Machine
- ^ Willems, Wim, ’De uittocht uit Indie 1945–1995’ (Publisher: Bert Bakker, Amsterdam, 2001) pp.12–13 ISBN 90-351-2361-1
- ^ "Official CBS website containing all Dutch demographic statistics". Archived from the original on 11 June 2010. Retrieved 1 June 2010.
- ^ De Vries, Marlene. Indisch is een gevoel, de tweede en derde generatie Indische Nederlanders. (Amsterdam University Press, 2009) ISBN 978-90-8964-125-0 "Archived copy". Archived from the original on 17 August 2009. Retrieved 4 February 2016. Archived 7 January 2016 at the Wayback Machine P.369
- ^ Startpagina B.V. "Indisch-eten Startpagina, verzameling van interessante links". Archived from the original on 30 March 2010. Retrieved 14 October 2010.
- Biran, Misbach Yusa (2009). Sejarah Film 1900–1950: Bikin Film di Jawa [History of Film 1900–1950: Making Films in Java] (in Indonesian). Jakarta: Komunitas Bamboo working with the Jakarta Art Council. ISBN 978-979-3731-58-2.
- Cribb, R.B., Kahin, A. Historical dictionary of Indonesia (Scarecrow Press, 2004)
- Dick, Howard, et al. The Emergence of a National Economy: An Economic History of Indonesia, 1800-2000 (U. of Hawaii Press, 2002) online edition
- Friend, T. (2003). Indonesian Destinies. Harvard University Press. ISBN 0-674-01137-6.
- Heider, Karl G (1991). Indonesian Cinema: National Culture on Screen. Honolulu: University of Hawaii Press. ISBN 978-0-8248-1367-3.
- Reid, Anthony (1974). The Indonesian National Revolution 1945–1950. Melbourne: Longman Pty Ltd. ISBN 0-582-71046-4.
- Nieuwenhuys, Rob Mirror of the Indies: A History of Dutch Colonial Literature - translated from Dutch by E. M. Beekman (Publisher: Periplus, 1999) Google Books
- Prayogo, Wisnu Agung (2009). "Sekilas Perkembangan Perfilman di Indonesia" [An Overview of the Development of Film in Indonesia]. Kebijakan Pemerintahan Orde Baru Terhadap Perfilman Indonesia Tahun 1966–1980 [New Order Policy Towards Indonesian Films (1966–1980)] (Bachelor's of History Thesis) (in Indonesian). University of Indonesia.
- Ricklefs, M.C. (1991). A Modern History of Indonesia, 2nd edition. MacMillan. chapters 10–15. ISBN 0-333-57690-X.
- Taylor, Jean Gelman (2003). Indonesia: Peoples and Histories. New Haven and London: Yale University Press. ISBN 0-300-10518-5.
- Vickers, Adrian (2005). A History of Modern Indonesia. Cambridge University Press. ISBN 0-521-54262-6.
- Booth, Anne, et al. Indonesian Economic History in the Dutch Colonial Era (1990)
- Borschberg, Peter, The Dutch East Indies (2016), doi:10.1002/9781118455074.wbeoe276
- Bosma U., Raben R. Being "Dutch" in the Indies: a history of creolisation and empire, 1500–1920 (University of Michigan, NUS Press, 2008), ISBN 9971-69-373-9
- Bosma, Ulbe. Emigration: Colonial circuits between Europe and Asia in the 19th and early 20th century, European History Online, Mainz: Institute of European History, 2011, retrieved: 23 May 2011.
- Colombijn, Freek, and Thomas Lindblad, eds. Roots of violence in Indonesia: Contemporary violence in historical perspective (Leiden: KITLV Press, 2002)
- Elson, Robert. The idea of Indonesia: A history (Cambridge University Press, 2008)
- Braudel, Fernand, The perspective of the World, vol III in Civilization and Capitalism, 1984
- Furnivall, J. S. (1944). Netherlands India: A Study of Plural Economy. Cambridge U.P. p. viii. ISBN 9781108011273., comprehensive coverage
- Gouda, Frances. Dutch Culture Overseas: Colonial Practice in the Netherlands Indies, 1900-1942 (1996) online
- Nagtegaal, Luc. Riding the Dutch Tiger: The Dutch East Indies Company and the Northeast Coast of Java, 1680–1743 (1996) 250pp
- Robins, Nick. The Corporation that Changed the World: How the East India Company Shaped the Modern Multinational (2006) excerpt and text search
- Taylor, Jean Gelman. The Social World of Batavia: Europeans and Eurasians in Colonial Indonesia (1983)
- Lindblad, J. Thomas (1989). "The Petroleum Industry in Indonesia before the Second World War". Bulletin of Indonesian Economic Studies. 25 (2): 53–77. doi:10.1080/00074918812331335569.
- Panikkar, K. M. (1953). Asia and Western dominance, 1498–1945, by K.M. Panikkar. London: G. Allen and Unwin.
- 11 Dutch Indies objects in 'The European Library Harvest'
- Cribb, Robert, Digital Atlas of Indonesian History, Chapter 4: The Netherlands Indies, 1800–1942
- Historical Documents of the Dutch Parliament 1814–1995 Archived 4 April 2012 at the Wayback Machine
- Parallel and Divergent Aspects of British Rule in the Raj, French Rule in Indochina, Dutch Rule in the Netherlands East Indies (Indonesia), and American Rule in the Philippines
Yasuo Uemura, "The Sugar Estates in Besuki and the Depression" Hiroshima Interdisciplinary Studies in the Humanities, Vol.4 page.30-78
Yasuo Uemura, "The Depression and the Sugar Industry in Surabaya" Hiroshima Interdisciplinary Studies in the Humanities, Vol.3 page.1-54
- "Surabaya" . Collier's New Encyclopedia. 1921.
- "Surabaya or Soerabaya. The largest city in Java" . New International Encyclopedia. 1905.
An X-ray, or, much less commonly, X-radiation, is a penetrating form of high-energy electromagnetic radiation. Most X-rays have a wavelength ranging from 10 picometers to 10 nanometers, corresponding to frequencies in the range 30 petahertz to 30 exahertz (30×10¹⁵ Hz to 30×10¹⁸ Hz) and energies in the range 124 eV to 124 keV. X-ray wavelengths are shorter than those of UV rays and typically longer than those of gamma rays. In many languages, X-radiation is referred to as Röntgen radiation, after the German scientist Wilhelm Conrad Röntgen, who discovered it on November 8, 1895. He named it X-radiation to signify an unknown type of radiation. Spellings of X-ray(s) in English include the variants x-ray(s), xray(s), and X ray(s).
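The quoted limits follow from the photon-energy relation E = hc/λ. As a rough check (a minimal sketch, not from the source; the constants are standard physical constants), the conversion can be done in Python:

```python
# Convert an X-ray wavelength to photon energy via E = h*c / wavelength (in eV).
PLANCK_EV_S = 4.135667696e-15      # Planck constant, eV*s
SPEED_OF_LIGHT_M_S = 2.99792458e8  # speed of light, m/s

def photon_energy_ev(wavelength_m: float) -> float:
    """Return the photon energy in electronvolts for a wavelength in metres."""
    return PLANCK_EV_S * SPEED_OF_LIGHT_M_S / wavelength_m

print(photon_energy_ev(10e-9))   # 10 nm -> about 124 eV (soft end of the X-ray band)
print(photon_energy_ev(10e-12))  # 10 pm -> about 124,000 eV, i.e. 124 keV (hard end)
```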
Pre-Röntgen observations and research
Before their discovery in 1895, X-rays were just a type of unidentified radiation emanating from experimental discharge tubes. They were noticed by scientists investigating cathode rays produced by such tubes, which are energetic electron beams that were first observed in 1869. Many of the early Crookes tubes (invented around 1875) undoubtedly radiated X-rays, because early researchers noticed effects that were attributable to them, as detailed below. Crookes tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high enough velocity that they created X-rays when they struck the anode or the glass wall of the tube.
The earliest experimenter thought to have (unknowingly) produced X-rays was actuary William Morgan. In 1785 he presented a paper to the Royal Society of London describing the effects of passing electrical currents through a partially evacuated glass tube, producing a glow created by X-rays. This work was further explored by Humphry Davy and his assistant Michael Faraday.
When Stanford University physics professor Fernando Sanford created his "electric photography" he also unknowingly generated and detected X-rays. From 1886 to 1888 he had studied in the Hermann Helmholtz laboratory in Berlin, where he became familiar with the cathode rays generated in vacuum tubes when a voltage was applied across separate electrodes, as previously studied by Heinrich Hertz and Philipp Lenard. His letter of January 6, 1893 (describing his discovery as "electric photography") to The Physical Review was duly published and an article entitled Without Lens or Light, Photographs Taken With Plate and Object in Darkness appeared in the San Francisco Examiner.
Starting in 1888, Philipp Lenard conducted experiments to see whether cathode rays could pass out of the Crookes tube into the air. He built a Crookes tube with a "window" at the end made of thin aluminum, facing the cathode so the cathode rays would strike it (later called a "Lenard tube"). He found that something came through, that would expose photographic plates and cause fluorescence. He measured the penetrating power of these rays through various materials. It has been suggested that at least some of these "Lenard rays" were actually X-rays.
In 1889 Ukrainian-born Ivan Puluj, a lecturer in experimental physics at the Prague Polytechnic who since 1877 had been constructing various designs of gas-filled tubes to investigate their properties, published a paper on how sealed photographic plates became dark when exposed to the emanations from the tubes.
Hermann von Helmholtz formulated mathematical equations for X-rays. He postulated a dispersion theory before Röntgen made his discovery and announcement. It was formed on the basis of the electromagnetic theory of light. However, he did not work with actual X-rays.
In 1894 Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments and began investigating this radiant energy of "invisible" kinds. After Röntgen identified the X-ray, Tesla began making X-ray images of his own using high voltages and tubes of his own design, as well as Crookes tubes.
Discovery by Röntgen
On November 8, 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard tubes and Crookes tubes and began studying them. He wrote an initial report "On a new kind of ray: A preliminary communication" and on December 28, 1895 submitted it to Würzburg's Physical-Medical Society journal. This was the first paper written on X-rays. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. The name stuck, although (over Röntgen's great objections) many of his colleagues suggested calling them Röntgen rays. They are still referred to as such in many languages, including German, Hungarian, Ukrainian, Danish, Polish, Bulgarian, Swedish, Finnish, Estonian, Turkish, Russian, Latvian, Lithuanian, Japanese, Dutch, Georgian, Hebrew and Norwegian. Röntgen received the first Nobel Prize in Physics for his discovery.
There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays from a Crookes tube which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen, about 1 meter away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow. He found they could also pass through books and papers on his desk. Röntgen threw himself into investigating these unknown rays systematically. Two months after his initial discovery, he published his paper.
Röntgen discovered the medical use of X-rays when he made an image of his wife's hand on a photographic plate exposed to the rays. The photograph of his wife's hand was the first photograph of a human body part using X-rays. When she saw the picture, she said "I have seen my death."
The discovery of X-rays stimulated a veritable sensation. Röntgen's biographer Otto Glasser estimated that, in 1896 alone, as many as 49 essays and 1044 articles about the new rays were published. This was probably a conservative estimate, if one considers that nearly every paper around the world extensively reported about the new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone. Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and paranormal theories, such as telepathy.
Advances in radiology
Röntgen immediately noticed X-rays could have medical applications. Along with his 28 December Physical-Medical Society submission he sent a letter to physicians he knew around Europe (January 1, 1896). News (and the creation of "shadowgrams") spread rapidly with Scottish electrical engineer Alan Archibald Campbell-Swinton being the first after Röntgen to create an X-ray (of a hand). Through February there were 46 experimenters taking up the technique in North America alone.
The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On February 14, 1896, Hall-Edwards was also the first to use X-rays in a surgical operation. In early 1896, several weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays, concluding that the rays "not only photograph, but also affect the living function".
The first medical X-ray made in the United States was obtained using a discharge tube of Pului's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pului tube produced X-rays. This was a result of Pului's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work.
Many experimenters, including Röntgen himself in his original experiments, came up with methods to view X-ray images "live" using some form of luminescent screen. Röntgen used a screen coated with barium platinocyanide. On February 5, 1896, live imaging devices were developed by both Italian scientist Enrico Salvioni (his "cryptoscope") and Professor McGie of Princeton University (his "Skiascope"), both using barium platinocyanide. American inventor Thomas Edison started research soon after Röntgen's discovery and investigated materials' ability to fluoresce when exposed to X-rays, finding that calcium tungstate was the most effective substance. In May 1896 he developed the first mass-produced live imaging device, his "Vitascope", later called the fluoroscope, which became the standard for medical X-ray examinations. Edison dropped X-ray research around 1903, before the death of Clarence Madison Dally, one of his glassblowers. Dally had a habit of testing X-ray tubes on his own hands, developing a cancer in them so tenacious that both arms were amputated in a futile attempt to save his life; in 1904, he became the first known death attributed to X-ray exposure. During the time the fluoroscope was being developed, Serbian American physicist Mihajlo Pupin, using a calcium tungstate screen developed by Edison, found that using a fluorescent screen decreased the exposure time it took to create an X-ray for medical imaging from an hour to a few minutes.
In 1901, U.S. President William McKinley was shot twice in an assassination attempt. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later.
With the widespread experimentation with X-rays by scientists, physicians, and inventors after their discovery in 1895 came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and Dr. William Lofland Dudley of Vanderbilt University reported hair loss after Dr. Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet an experiment was attempted, for which Dudley "with his characteristic devotion to science" volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot 2 inches (5.1 cm) in diameter on the part of his head nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head. The tube was fastened at the other side at a distance of one-half inch from the hair."
In August 1896 Dr. H.D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an x-ray demonstration. It was reported in Electrical Review and led to many other reports of problems associated with x-rays being sent in to the publication. Many experimenters, including Elihu Thomson at Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an x-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects were sometimes blamed for the damage including ultraviolet rays and (according to Tesla) ozone. Many physicians claimed there were no effects from X-ray exposure at all. On August 3, 1905, in San Francisco, California, Elizabeth Fleischman, an American X-ray pioneer, died from complications as a result of her work with X-rays.
20th century and beyond
The many applications of X-rays immediately generated enormous interest. Workshops began making specialized versions of Crookes tubes for generating X-rays and these first-generation cold cathode or Crookes X-ray tubes were used until about 1920.
A typical early 20th century medical x-ray system consisted of a Ruhmkorff coil connected to a cold cathode Crookes X-ray tube. A spark gap was typically connected to the high voltage side in parallel to the tube and used for diagnostic purposes. The spark gap allowed detecting the polarity of the sparks, measuring voltage by the length of the sparks thus determining the "hardness" of the vacuum of the tube, and it provided a load in the event the X-ray tube was disconnected. To detect the hardness of the tube, the spark gap was initially opened to the widest setting. While the coil was operating, the operator reduced the gap until sparks began to appear. A tube in which the spark gap began to spark at around 2 1/2 inches was considered soft (low vacuum) and suitable for thin body parts such as hands and arms. A 5-inch spark indicated the tube was suitable for shoulders and knees. A 7–9 inch spark would indicate a higher vacuum suitable for imaging the abdomen of larger individuals. Since the spark gap was connected in parallel to the tube, the spark gap had to be opened until the sparking ceased in order to operate the tube for imaging. Exposure time for photographic plates was around half a minute for a hand to a couple of minutes for a thorax. The plates may have a small addition of fluorescent salt to reduce exposure times.
Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air), as a current will not flow in such a tube if it is fully evacuated. However, as time passed, the X-rays caused the glass to absorb the gas, causing the tube to generate "harder" X-rays until it soon stopped operating. Larger and more frequently used tubes were provided with devices for restoring the air, known as "softeners". These often took the form of a small side tube that contained a small piece of mica, a mineral that traps relatively large quantities of air within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult to control.
In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube. This used a hot cathode that caused an electric current to flow in a vacuum. This idea was quickly applied to X-ray tubes, and hence heated-cathode X-ray tubes, called "Coolidge tubes", completely replaced the troublesome cold cathode tubes by about 1920.
In about 1906, the physicist Charles Barkla discovered that X-rays could be scattered by gases, and that each element had a characteristic X-ray spectrum. He won the 1917 Nobel Prize in Physics for this discovery.
In 1912, Max von Laue, Paul Knipping, and Walter Friedrich first observed the diffraction of X-rays by crystals. This discovery, along with the early work of Paul Peter Ewald, William Henry Bragg, and William Lawrence Bragg, gave birth to the field of X-ray crystallography.
In 1913, Henry Moseley performed crystallography experiments with X-rays emanating from various metals and formulated Moseley's law which relates the frequency of the X-rays to the atomic number of the metal.
The Coolidge X-ray tube was invented the same year by William D. Coolidge. It made possible the continuous emissions of X-rays. Modern X-ray tubes are based on this design, often employing the use of rotating targets which allow for significantly higher heat dissipation than static targets, further allowing higher quantity X-ray output for use in high powered applications such as rotational CT scanners.
The use of X-rays for medical purposes (which developed into the field of radiation therapy) was pioneered by Major John Hall-Edwards in Birmingham, England. In 1908, he had to have his left arm amputated because of the spread of X-ray dermatitis on the arm.
Medical science also used the motion picture to study human physiology. In 1913, a motion picture was made in Detroit showing a hard-boiled egg inside a human stomach. This early x-ray movie was recorded at a rate of one still image every four seconds. Dr Lewis Gregory Cole of New York was a pioneer of the technique, which he called "serial radiography". In 1918, x-rays were used in association with motion picture cameras to capture the human skeleton in motion. In 1920, it was used to record the movements of tongue and teeth in the study of languages by the Institute of Phonetics in England.
In 1914 Marie Curie developed radiological cars to support soldiers injured in World War I. The cars would allow for rapid X-ray imaging of wounded soldiers so battlefield surgeons could quickly and more accurately operate.
From the early 1920s through to the 1950s, X-ray machines were developed to assist in the fitting of shoes and were sold to commercial shoe stores. Concerns regarding the impact of frequent or poorly controlled use were expressed in the 1950s, leading to the practice's eventual end that decade.
The X-ray microscope was developed during the 1950s.
The Chandra X-ray Observatory, launched on July 23, 1999, has been allowing the exploration of the very violent processes in the universe which produce X-rays. Unlike visible light, which gives a relatively stable view of the universe, the X-ray universe is unstable. It features stars being torn apart by black holes, galactic collisions, and novae, and neutron stars that build up layers of plasma that then explode into space.
An X-ray laser device was proposed as part of the Reagan Administration's Strategic Defense Initiative in the 1980s, but the only test of the device (a sort of laser "blaster" or death ray, powered by a thermonuclear explosion) gave inconclusive results. For technical and political reasons, the overall project (including the X-ray laser) was de-funded (though was later revived by the second Bush Administration as National Missile Defense using different technologies).
Phase-contrast X-ray imaging refers to a variety of techniques that use phase information of a coherent X-ray beam to image soft tissues. It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for X-ray phase-contrast imaging, all utilizing different principles to convert phase variations in the X-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, refraction-enhanced imaging, and X-ray interferometry. These methods provide higher contrast compared to normal absorption-contrast X-ray imaging, making it possible to see smaller details. A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high resolution X-ray detectors.
Soft and hard X-rays
X-rays with high photon energies above 5–10 keV (below 0.2–0.1 nm wavelength) are called hard X-rays, while those with lower energy (and longer wavelength) are called soft X-rays. The intermediate range with photon energies of several keV is often referred to as tender X-rays. Due to their penetrating ability, hard X-rays are widely used to image the inside of objects, e.g., in medical radiography and airport security. The term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer.
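To illustrate how strongly soft X-rays are absorbed, a minimal Beer–Lambert sketch can be used. The ~1 μm attenuation length for 600 eV X-rays in water is the figure quoted above; the function name and sample thicknesses are illustrative:

```python
import math

def transmitted_fraction(thickness_m: float, attenuation_length_m: float) -> float:
    """Beer–Lambert attenuation: I/I0 = exp(-thickness / attenuation_length)."""
    return math.exp(-thickness_m / attenuation_length_m)

ATTENUATION_LENGTH_WATER_600EV = 1e-6  # ~1 micrometre, as quoted in the text

print(transmitted_fraction(1e-6, ATTENUATION_LENGTH_WATER_600EV))   # ~0.37 after 1 um of water
print(transmitted_fraction(10e-6, ATTENUATION_LENGTH_WATER_600EV))  # ~5e-5 after 10 um: soft X-rays barely penetrate
```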
There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus. This definition has several problems: other processes also can generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m (0.1 Å), defined as gamma radiation. This criterion assigns a photon to an unambiguous category, but is only possible if wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei. Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on their intended use rather than their wavelength or source. Thus, gamma-rays generated for medical and industrial uses, for example radiotherapy, in the ranges of 6–20 MeV, can in this context also be referred to as X-rays.
X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes X-rays a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time causes radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging, this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be utilized in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy.
Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g. small animal CT). The penetration depth varies with several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time provide good contrast in the image.
X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using a normal microscope. This property is used in X-ray microscopy to acquire high-resolution images, and also in X-ray crystallography to determine the positions of atoms in crystals.
Interaction with matter
X-rays interact with matter in three main ways, through photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates.
The probability of a photoelectric absorption per unit mass is approximately proportional to Z³/E³, where Z is the atomic number and E is the energy of the incident photon. This rule is not valid close to inner shell electron binding energies where there are abrupt changes in interaction probability, so-called absorption edges. However, the general trend of high absorption coefficients and thus short penetration depths for low photon energies and high atomic numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy where Compton scattering takes over. For higher atomic number substances this limit is higher. The high amount of calcium (Z = 20) in bones, together with their high density, is what makes them show up so clearly on medical radiographs.
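The Z³/E³ scaling is what produces the bone-versus-tissue contrast. The sketch below applies it with assumed effective atomic numbers for bone and soft tissue (roughly 13.8 and 7.4, illustrative values not taken from this text):

```python
def relative_photoabsorption(z: float, energy_kev: float) -> float:
    """Approximate per-mass photoabsorption, proportional to Z^3 / E^3 (away from absorption edges)."""
    return z ** 3 / energy_kev ** 3

Z_BONE_EFFECTIVE = 13.8   # assumed effective atomic number for bone
Z_TISSUE_EFFECTIVE = 7.4  # assumed effective atomic number for soft tissue

for energy_kev in (20.0, 60.0, 100.0):
    ratio = (relative_photoabsorption(Z_BONE_EFFECTIVE, energy_kev)
             / relative_photoabsorption(Z_TISSUE_EFFECTIVE, energy_kev))
    print(f"{energy_kev:.0f} keV: bone absorbs ~{ratio:.1f}x more than soft tissue per unit mass")
# The energy dependence cancels in the ratio; the ~6.5x factor comes entirely from Z^3.
```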
A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An outer electron will fill the vacant electron position and produce either a characteristic X-ray or an Auger electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron spectroscopy.
Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging. Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is more likely, especially for high-energy X-rays. The probability for different scattering angles is described by the Klein–Nishina formula. The transferred energy can be directly obtained from the scattering angle from the conservation of energy and momentum.
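The energy balance can be made concrete with the Compton relation E′ = E / (1 + (E/mₑc²)(1 − cos θ)), which follows from conservation of energy and momentum. A minimal sketch (the incident energy is an illustrative value) computes the scattered photon energy and the energy handed to the electron:

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0  # m_e * c^2

def compton_scattered_energy_kev(incident_kev: float, angle_deg: float) -> float:
    """Scattered photon energy E' = E / (1 + (E / m_e c^2) * (1 - cos(theta)))."""
    theta = math.radians(angle_deg)
    return incident_kev / (1.0 + (incident_kev / ELECTRON_REST_ENERGY_KEV) * (1.0 - math.cos(theta)))

incident = 100.0  # keV, an illustrative hard X-ray photon
for angle in (0, 45, 90, 180):
    scattered = compton_scattered_energy_kev(incident, angle)
    print(f"theta = {angle:3d} deg: photon keeps {scattered:5.1f} keV, electron gains {incident - scattered:4.1f} keV")
```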
Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime. Inelastic forward scattering gives rise to the refractive index, which for X-rays is only slightly below 1.
Whenever charged particles (electrons or ions) of sufficient energy hit a material, X-rays are produced.
Production by electrons
X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate the electrons released by a hot cathode to a high velocity. The high velocity electrons collide with a metal target, the anode, creating the X-rays. In medical X-ray tubes the target is usually tungsten or a more crack-resistant alloy of rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer X-rays are needed as in mammography. In crystallography, a copper target is most common, with cobalt often being used when fluorescence from iron content in the sample might otherwise present a problem.
The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes:
- Characteristic X-ray emission (X-ray electroluminescence): If the electron has enough energy, it can knock an orbital electron out of the inner electron shell of the target atom. After that, electrons from higher energy levels fill the vacancies, and X-ray photons are emitted. This process produces an emission spectrum of X-rays at a few discrete frequencies, sometimes referred to as spectral lines. Usually, these are transitions from the upper shells to the K shell (called K lines), to the L shell (called L lines) and so on. If the transition is from 2p to 1s, it is called Kα, while if it is from 3p to 1s it is Kβ. The frequencies of these lines depend on the material of the target and are therefore called characteristic lines. The Kα line usually has greater intensity than the Kβ one and is more desirable in diffraction experiments. Thus the Kβ line is usually filtered out with a filter made of a metal having one proton less than the anode material (e.g., Ni filter for Cu anode or Nb filter for Mo anode).
- Bremsstrahlung: This is radiation given off by the electrons as they are scattered by the strong electric field near the high-Z (proton number) nuclei. These X-rays have a continuous spectrum. The frequency of bremsstrahlung is limited by the energy of incident electrons.
So, the resulting output of a tube consists of a continuous bremsstrahlung spectrum falling off to zero at the tube voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from roughly 20 kV to 150 kV and thus the highest energies of the X-ray photons range from roughly 20 keV to 150 keV.
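The statement that an 80 kV tube cannot emit photons above 80 keV corresponds to the Duane–Hunt limit, E_max = eV. A small sketch (tube voltages chosen to match the diagnostic range quoted above; constants are standard) converts this into the shortest emitted wavelength:

```python
PLANCK_EV_S = 4.135667696e-15      # Planck constant, eV*s
SPEED_OF_LIGHT_M_S = 2.99792458e8  # speed of light, m/s

def max_photon_energy_kev(tube_voltage_kv: float) -> float:
    """Duane-Hunt limit: the bremsstrahlung spectrum ends at E_max = e * V (numerically, keV per kV)."""
    return tube_voltage_kv

def min_wavelength_nm(tube_voltage_kv: float) -> float:
    """Shortest emitted wavelength: lambda_min = h * c / (e * V)."""
    return PLANCK_EV_S * SPEED_OF_LIGHT_M_S / (tube_voltage_kv * 1e3) * 1e9

for kv in (20, 80, 150):
    print(f"{kv:3d} kV tube: E_max = {max_photon_energy_kev(kv):.0f} keV, lambda_min = {min_wavelength_nm(kv):.4f} nm")
```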
Both of these X-ray production processes are inefficient, with only about one percent of the electrical energy used by the tube converted into X-rays, and thus most of the electric power consumed by the tube is released as waste heat. When producing a usable flux of X-rays, the X-ray tube must be designed to dissipate the excess heat.
A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization.
Short nanosecond bursts of X-rays peaking at 15 keV in energy may be reliably produced by peeling pressure-sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient for it to be used as a source for X-ray imaging.
Production by fast positive ions
X-rays can also be produced by fast protons or other positive ions. The proton-induced X-ray emission or particle-induced X-ray emission is widely used as an analytical procedure. For high energies, the production cross section is proportional to Z₁²Z₂⁻⁴, where Z₁ refers to the atomic number of the ion and Z₂ to that of the target atom. An overview of these cross sections is given in the same reference.
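As a numerical illustration of the Z₁²Z₂⁻⁴ scaling (the ion species and targets below are arbitrary examples, not taken from the source):

```python
def relative_pixe_cross_section(z_ion: int, z_target: int) -> float:
    """High-energy scaling of the ionisation cross section: proportional to Z1^2 / Z2^4."""
    return (z_ion ** 2) / (z_target ** 4)

# Protons (Z1 = 1) versus alpha particles (Z1 = 2) on a copper target (Z2 = 29):
protons = relative_pixe_cross_section(1, 29)
alphas = relative_pixe_cross_section(2, 29)
print(f"alpha / proton cross-section ratio on Cu: {alphas / protons:.0f}")  # 4, from the Z1^2 factor

# Lighter targets are ionised more readily: copper (Z=29) versus silver (Z=47) under proton bombardment.
print(f"Cu / Ag cross-section ratio: {relative_pixe_cross_section(1, 29) / relative_pixe_cross_section(1, 47):.1f}")  # ~6.9
```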
Production in lightning and laboratory discharges
X-rays are also produced in lightning accompanying terrestrial gamma-ray flashes. The underlying mechanism is the acceleration of electrons in lightning-related electric fields and the subsequent production of photons through Bremsstrahlung. This produces photons with energies of a few keV and of several tens of MeV. In laboratory discharges with a gap size of approximately 1 meter and a peak voltage of 1 MV, X-rays with a characteristic energy of 160 keV are observed. A possible explanation is the encounter of two streamers and the production of high-energy run-away electrons; however, microscopic simulations have shown that the duration of electric field enhancement between two streamers is too short to produce a significant number of run-away electrons. Recently, it has been proposed that air perturbations in the vicinity of streamers can facilitate the production of run-away electrons and hence of X-rays from discharges.
X-ray detectors vary in shape and function depending on their purpose. Imaging detectors such as those used for radiography were originally based on photographic plates and later photographic film, but are now mostly replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the radiation dose a person has been exposed to. X-ray spectra can be measured either by energy dispersive or wavelength dispersive spectrometers. For x-ray diffraction applications, such as x-ray crystallography, hybrid photon counting detectors are widely used.
Since Röntgen's discovery that X-rays can identify bone structures, X-rays have been used for medical imaging. The first medical use was less than a month after his paper on the subject. Up to 2010, five billion medical imaging examinations had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States.
Projectional radiography is the practice of producing two-dimensional images using x-ray radiation. Bones contain a high concentration of calcium, which, due to its relatively high atomic number, absorbs x-rays efficiently. This reduces the amount of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences between tissue types are harder to see.
Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal x-ray, which can detect bowel (or intestinal) obstruction, free air (from visceral perforations) and free fluid (in ascites). X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones which are often (but not always) visible. Traditional plain X-rays are less useful in the imaging of soft tissues such as the brain or muscle. One area where projectional radiographs are used extensively is in evaluating how an orthopedic implant, such as a knee, hip or shoulder replacement, is situated in the body with respect to the surrounding bone. This can be assessed in two dimensions from plain radiographs, or it can be assessed in three dimensions if a technique called '2D to 3D registration' is used. This technique purportedly negates projection errors associated with evaluating implant position from plain radiographs.
In medical diagnostic applications, the low energy (soft) X-rays are unwanted, since they are totally absorbed by the body, increasing the radiation dose without contributing to the image. Hence, a thin metal sheet, often of aluminium, called an X-ray filter, is usually placed over the window of the X-ray tube, absorbing the low energy part in the spectrum. This is called hardening the beam since it shifts the center of the spectrum towards higher energy (or harder) x-rays.
To generate an image of the cardiovascular system, including the arteries and veins (angiography) an initial image is taken of the anatomical region of interest. A second image is then taken of the same region after an iodinated contrast agent has been injected into the blood vessels within this area. These two images are then digitally subtracted, leaving an image of only the iodinated contrast outlining the blood vessels. The radiologist or surgeon then compares the image obtained to normal anatomical images to determine whether there is any damage or blockage of the vessel.
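The subtraction step itself is simple image arithmetic. The following NumPy sketch (array sizes and intensity values are purely illustrative) shows how static anatomy cancels and only the contrast-filled vessel remains:

```python
import numpy as np

def digital_subtraction(mask_image: np.ndarray, contrast_image: np.ndarray) -> np.ndarray:
    """Subtract the pre-contrast 'mask' from the post-contrast image and rescale for display,
    so that only structures that changed (the iodinated vessels) remain visible."""
    diff = contrast_image.astype(np.float64) - mask_image.astype(np.float64)
    diff -= diff.min()
    if diff.max() > 0:
        diff *= 255.0 / diff.max()
    return diff.astype(np.uint8)

# Toy 4x4 example: uniform anatomy, one pixel darkened by the contrast agent.
mask = np.full((4, 4), 200, dtype=np.uint8)
post = mask.copy()
post[2, 1] = 120  # the contrast-filled vessel attenuates the beam here
print(digital_subtraction(mask, post))  # the vessel pixel stands out; everything else is uniform
```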
Computed tomography (CT scanning) is a medical imaging modality where tomographic images or slices of specific areas of the body are obtained from a large series of two-dimensional X-ray images taken in different directions. These cross-sectional images can be combined into a three-dimensional image of the inside of the body and used for diagnostic and therapeutic purposes in various medical disciplines.
Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However, modern fluoroscopes couple the screen to an X-ray image intensifier and CCD video camera allowing the images to be recorded and played on a monitor. This method may use a contrast material. Examples include cardiac catheterization (to examine for coronary artery blockages) and barium swallow (to examine for esophageal disorders and swallowing disorders).
The use of X-rays as a treatment is known as radiation therapy and is largely used for the management (including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower-energy X-ray beams are used for treating skin cancers, while higher-energy beams are used for treating cancers within the body such as those of the brain, lung, prostate, and breast.
Diagnostic X-rays (primarily from CT scans due to the large dose used) increase the risk of developmental problems and cancer in those exposed. X-rays are classified as a carcinogen by both the World Health Organization's International Agency for Research on Cancer and the U.S. government. It is estimated that 0.4% of current cancers in the United States are due to computed tomography (CT scans) performed in the past and that this may increase to as high as 1.5–2% with 2007 rates of CT usage.
Experimental and epidemiological data currently do not support the proposition that there is a threshold dose of radiation below which there is no increased risk of cancer. However, this is under increasing doubt. It is estimated that the additional radiation from diagnostic X-rays will increase the average person's cumulative risk of getting cancer by age 75 by 0.6–3.0%. The amount of absorbed radiation depends upon the type of X-ray test and the body part involved. CT and fluoroscopy entail higher doses of radiation than do plain X-rays.
To place the increased risk in perspective, a plain chest X-ray exposes a person to roughly the same amount of radiation as about 10 days of natural background exposure (which varies with location), while a dental X-ray is approximately equivalent to 1 day of environmental background radiation. Each such X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be equivalent to 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest, increasing the lifetime cancer risk by between 1 per 1,000 and 1 per 10,000. This is compared to the roughly 40% chance of a US citizen developing cancer during their lifetime. For instance, the effective dose to the torso from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy. A head CT scan (1.5 mSv, 64 mGy) that is performed once with and once without contrast agent would be equivalent to 40 years of background radiation to the head. Accurate estimation of effective doses due to CT is difficult, with an estimation uncertainty range of about ±19% to ±32% for adult head scans depending upon the method used.
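These "days of background" comparisons amount to dividing an examination dose by the average daily background dose. A minimal sketch, assuming an average natural background of about 3 mSv per year and illustrative per-examination doses (neither figure is taken from this text):

```python
BACKGROUND_MSV_PER_YEAR = 3.0  # assumed average natural background; varies strongly with location

def background_equivalent_days(dose_msv: float) -> float:
    """Express an examination dose as the number of days of natural background radiation."""
    return dose_msv / (BACKGROUND_MSV_PER_YEAR / 365.0)

illustrative_doses_msv = {"dental X-ray": 0.005, "chest X-ray": 0.1, "chest CT": 5.0}
for exam, dose in illustrative_doses_msv.items():
    print(f"{exam}: ~{background_equivalent_days(dose):.0f} days of background radiation")
```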
The risk of radiation is greater to a fetus, so in pregnant patients, the benefits of the investigation (X-ray) should be balanced with the potential hazards to the fetus. In the US, there are an estimated 62 million CT scans performed annually, including more than 4 million on children. Avoiding unnecessary X-rays (especially CT scans) reduces radiation dose and any associated cancer risk.
Medical X-rays are a significant source of human-made radiation exposure. In 1987, they accounted for 58% of exposure from human-made sources in the United States. Since human-made sources accounted for only 18% of the total radiation exposure, most of which came from natural sources (82%), medical X-rays only accounted for 10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine) accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and to the growth in the use of nuclear medicine.
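The roughly 10% figure for 1987 follows directly from the two shares quoted above (medical X-rays were 58% of human-made exposure, and human-made sources were 18% of total exposure). A quick check of that arithmetic, with the percentages taken from the text:

```python
# Check: share of total 1987 US radiation exposure attributable to medical X-rays.
human_made_share_of_total = 0.18   # human-made sources as a fraction of all exposure
xray_share_of_human_made = 0.58    # medical X-rays as a fraction of human-made exposure

xray_share_of_total = human_made_share_of_total * xray_share_of_human_made
print(f"Medical X-rays were about {xray_share_of_total:.1%} of total exposure")  # ~10.4%
```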
Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or digital). A single dental X-ray of a human results in an exposure of 0.5 to 4 mrem. A full mouth series of X-rays may result in an exposure of up to 6 (digital) to 18 (film) mrem, for a yearly average of up to 40 mrem.
Financial incentives have been shown to have a significant impact on X-ray use with doctors who are paid a separate fee for each X-ray providing more X-rays.
Other notable uses of X-rays include:
- X-ray crystallography in which the pattern produced by the diffraction of X-rays through the closely spaced lattice of atoms in a crystal is recorded and then analysed to reveal the nature of that lattice. A related technique, fiber diffraction, was used by Rosalind Franklin to discover the double helical structure of DNA.
- X-ray astronomy, which is an observational branch of astronomy, which deals with the study of X-ray emission from celestial objects.
- X-ray microscopic analysis, which uses electromagnetic radiation in the soft X-ray band to produce images of very small objects.
- X-ray fluorescence, a technique in which X-rays are generated within a specimen and detected. The outgoing energy of the X-ray can be used to identify the composition of the sample.
- Industrial radiography uses X-rays for inspection of industrial parts, particularly welds.
- Radiography of cultural objects, most often x-rays of paintings to reveal underdrawing, pentimenti alterations in the course of painting or by later restorers, and sometimes previous paintings on the support. Many pigments such as lead white show well in radiographs.
- X-ray spectromicroscopy has been used to analyse the reactions of pigments in paintings. For example, in analysing colour degradation in the paintings of van Gogh.
- Authentication and quality control of packaged items.
- Industrial CT (computed tomography), a process that uses X-ray equipment to produce three-dimensional representations of components both externally and internally. This is accomplished through computer processing of projection images of the scanned object in many directions.
- Airport security luggage scanners use X-rays for inspecting the interior of luggage for security threats before loading on aircraft.
- Border control truck scanners and domestic police departments use X-rays for inspecting the interior of trucks.
- X-ray art and fine art photography, artistic use of X-rays, for example the works by Stane Jagodič
- X-ray hair removal, a method popular in the 1920s but now banned by the FDA.
- Shoe-fitting fluoroscopes were popularized in the 1920s, banned in the US in the 1960s, in the UK in the 1970s, and later in continental Europe.
- Roentgen stereophotogrammetry is used to track movement of bones based on the implantation of markers
- X-ray photoelectron spectroscopy is a chemical analysis technique relying on the photoelectric effect, usually employed in surface science.
- Radiation implosion is the use of high energy X-rays generated from a fission explosion (an A-bomb) to compress nuclear fuel to the point of fusion ignition (an H-bomb).
While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in an experiment a short time after Röntgen's landmark 1895 paper, reported after dark adaptation and placing his eye close to an X-ray tube, seeing a faint "blue-gray" glow which seemed to originate within the eye itself. Upon hearing this, Röntgen reviewed his record books and found he too had seen the effect. When placing an X-ray tube on the opposite side of a wooden door Röntgen had noted the same blue glow, seeming to emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he used one type of tube. Later he realized that the tube which had created the effect was the only one powerful enough to make the glow plainly visible and the experiment was thereafter readily repeatable. The knowledge that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in the eyeball with conventional retinal detection of the secondarily produced visible light.
Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of the X-ray beam is high enough. The beamline from the wiggler at the ID11 at the European Synchrotron Radiation Facility is one example of such high intensity.
Units of measure and exposure
The measure of X-rays' ionizing ability is called the exposure:
- The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure, and it is the amount of radiation required to create one coulomb of charge of each polarity in one kilogram of matter.
- The roentgen (R) is an obsolete traditional unit of exposure, which represented the amount of radiation required to create one electrostatic unit of charge of each polarity in one cubic centimeter of dry air. 1 roentgen = 2.58 × 10⁻⁴ C/kg.
However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the amount of energy deposited into it than to the charge generated. This measure of energy absorbed is called the absorbed dose:
- The gray (Gy), which has units of (joules/kilogram), is the SI unit of absorbed dose, and it is the amount of radiation required to deposit one joule of energy in one kilogram of any kind of matter.
- The rad is the (obsolete) corresponding traditional unit, equal to 10 millijoules of energy deposited per kilogram. 100 rad = 1 gray.
- The Roentgen equivalent man (rem) is the traditional unit of equivalent dose. For X-rays it is equal to the rad, or, in other words, 10 millijoules of energy deposited per kilogram. 100 rem = 1 Sv.
- The sievert (Sv) is the SI unit of equivalent dose, and also of effective dose. For X-rays the equivalent dose is numerically equal to the absorbed dose in gray (1 Sv = 1 Gy), but the effective dose is usually not, because it also accounts for which tissues were irradiated and their relative sensitivity.
| Quantity | Unit | Symbol | Definition | Year | SI equivalence |
| --- | --- | --- | --- | --- | --- |
| Activity (A) | becquerel | Bq | s⁻¹ | 1974 | SI unit |
| Activity (A) | curie | Ci | 3.7 × 10¹⁰ s⁻¹ | 1953 | 3.7 × 10¹⁰ Bq |
| Activity (A) | rutherford | Rd | 10⁶ s⁻¹ | 1946 | 1,000,000 Bq |
| Exposure (X) | coulomb per kilogram | C/kg | C⋅kg⁻¹ of air | 1974 | SI unit |
| Exposure (X) | röntgen | R | esu / 0.001293 g of air | 1928 | 2.58 × 10⁻⁴ C/kg |
| Absorbed dose (D) | gray | Gy | J⋅kg⁻¹ | 1974 | SI unit |
| Absorbed dose (D) | erg per gram | erg/g | erg⋅g⁻¹ | 1950 | 1.0 × 10⁻⁴ Gy |
| Absorbed dose (D) | rad | rad | 100 erg⋅g⁻¹ | 1953 | 0.010 Gy |
| Equivalent dose (H) | sievert | Sv | J⋅kg⁻¹ × WR | 1977 | SI unit |
| Equivalent dose (H) | röntgen equivalent man | rem | 100 erg⋅g⁻¹ × WR | 1971 | 0.010 Sv |
| Effective dose (E) | sievert | Sv | J⋅kg⁻¹ × WR × WT | 1977 | SI unit |
| Effective dose (E) | röntgen equivalent man | rem | 100 erg⋅g⁻¹ × WR × WT | 1971 | 0.010 Sv |
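The legacy-to-SI conversions in the table are simple multiplicative factors, so they are easy to encode. A minimal sketch using the factors from the table; the function names are illustrative, not from any particular library:

```python
# Conversions between legacy and SI units for ionizing radiation,
# using the conversion factors listed in the table above.

def roentgen_to_coulomb_per_kg(roentgen: float) -> float:
    """Exposure: 1 R = 2.58e-4 C/kg."""
    return roentgen * 2.58e-4

def rad_to_gray(rad: float) -> float:
    """Absorbed dose: 100 rad = 1 Gy."""
    return rad * 0.01

def rem_to_sievert(rem: float) -> float:
    """Equivalent dose: 100 rem = 1 Sv."""
    return rem * 0.01

# Example: the 18 mrem full-mouth film dental series mentioned earlier
print(rem_to_sievert(0.018) * 1000, "mSv")  # 0.18 mSv
```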
- Backscatter X-ray
- Detective quantum efficiency
- High-energy X-rays
- Macintyre's X-Ray Film – 1896 documentary radiography film
- N ray
- Neutron radiation
- Reflection (physics)
- Resonant inelastic X-ray scattering (RIXS)
- Small-angle X-ray scattering (SAXS)
- The X-Rays – 1897 British short silent comedy film
- X-ray absorption spectroscopy
- X-ray marker
- X-ray nanoprobe
- X-ray reflectivity
- X-ray vision
- X-ray welding
- "X-Rays". Science Mission Directorate. NASA.
- Novelline, Robert (1997). Squire's Fundamentals of Radiology. Harvard University Press. 5th edition. ISBN 0-674-83339-2.
- "X-ray". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
- Filler, Aaron (2009). "The History, Development and Impact of Computed Imaging in Neurological Diagnosis and Neurosurgery: CT, MRI, and DTI". Nature Precedings. doi:10.1038/npre.2009.3267.5..
- Morgan, William (1785-02-24). "Electrical Experiments Made in Order to Ascertain the Non-Conducting Power of a Perfect Vacuum, &c". Philosophical Transactions of the Royal Society. Royal Society of London. 75: 272–278. doi:10.1098/rstl.1785.0014.
- Anderson, J.G. (January 1945), "William Morgan and X-rays", Transactions of the Faculty of Actuaries, 17: 219–221, doi:10.1017/s0071368600003001
- Wyman, Thomas (Spring 2005). "Fernando Sanford and the Discovery of X-rays". "Imprint", from the Associates of the Stanford University Libraries: 5–15.
- Thomson, Joseph J. (1903). The Discharge of Electricity through Gasses. USA: Charles Scribner's Sons. pp. 182–186.
- Gaida, Roman; et al. (1997). "Ukrainian Physicist Contributes to the Discovery of X-Rays". Mayo Clinic Proceedings. Mayo Foundation for Medical Education and Research. 72 (7): 658. doi:10.1016/s0025-6196(11)63573-8. PMID 9212769. Archived from the original on 2008-05-28. Retrieved 2008-04-06.
- Wiedmann's Annalen, Vol. XLVIII
- Hrabak, M.; Padovan, R. S.; Kralik, M; Ozretic, D; Potocki, K (2008). "Scenes from the past: Nikola Tesla and the discovery of X-rays". RadioGraphics. 28 (4): 1189–92. doi:10.1148/rg.284075206. PMID 18635636.
- Chadda, P. K. (2009). Hydroenergy and Its Energy Potential. Pinnacle Technology. pp. 88–. ISBN 978-1-61820-149-2.
- From his technical publications, it is indicated that he invented and developed a special single-electrode X-ray tube: Morton, William James and Hammer, Edwin W. (1896) American Technical Book Co., p. 68., U.S. Patent 514,170, "Incandescent Electric Light", and U.S. Patent 454,622 "System of Electric Lighting". These differed from other X-ray tubes in having no target electrode and worked with the output of a Tesla Coil.
- Stanton, Arthur (1896-01-23). "Wilhelm Conrad Röntgen On a New Kind of Rays: translation of a paper read before the Würzburg Physical and Medical Society, 1895". Nature. 53 (1369): 274–6. Bibcode:1896Natur..53R.274.. doi:10.1038/053274b0. see also pp. 268 and 276 of the same issue.
- Karlsson, Erik B. (9 February 2000). "The Nobel Prizes in Physics 1901–2000". Stockholm: The Nobel Foundation. Retrieved 24 November 2011.
- Peters, Peter (1995). "W. C. Roentgen and the discovery of x-rays". Textbook of Radiology. Medcyclopedia.com, GE Healthcare. Archived from the original on 11 May 2008. Retrieved 5 May 2008.
- Glasser, Otto (1993). Wilhelm Conrad Röntgen and the early history of the roentgen rays. Norman Publishing. pp. 10–15. ISBN 978-0930405229.
- Arthur, Charles (2010-11-08). "Google doodle celebrates 115 years of X-rays". The Guardian. Guardian US. Retrieved 5 February 2019.
- Kevles, Bettyann Holtzmann (1996). Naked to the Bone Medical Imaging in the Twentieth Century. Camden, NJ: Rutgers University Press. pp. 19–22. ISBN 978-0-8135-2358-3.
- Sample, Sharro (2007-03-27). "X-Rays". The Electromagnetic Spectrum. NASA. Retrieved 2007-12-03.
- Markel, Howard (20 December 2012). "'I Have Seen My Death': How the World Discovered the X-Ray". PBS NewsHour. PBS. Retrieved 23 March 2019.
- Glasser, Otto (1958). Dr. W. C. Ro ̈ntgen. Springfield: Thomas.
- Natale, Simone (2011-11-01). "The Invisible Made Visible". Media History. 17 (4): 345–358. doi:10.1080/13688804.2011.602856. hdl:2134/19408. S2CID 142518799.
- Natale, Simone (2011-08-04). "A Cosmology of Invisible Fluids: Wireless, X-Rays, and Psychical Research Around 1900". Canadian Journal of Communication. 36 (2). doi:10.22230/cjc.2011v36n2a2368.
- Grove, Allen W. (1997-01-01). "Rontgen's Ghosts: Photography, X-Rays, and the Victorian Imagination". Literature and Medicine. 16 (2): 141–173. doi:10.1353/lm.1997.0016. PMID 9368224. S2CID 35604474.
- Feldman, A (1989). "A sketch of the technical history of radiology from 1896 to 1920". Radiographics. 9 (6): 1113–1128. doi:10.1148/radiographics.9.6.2685937. PMID 2685937.
- "Major John Hall-Edwards". Birmingham City Council. Archived from the original on September 28, 2012. Retrieved 2012-05-17.
- Kudriashov, Y. B. (2008). Radiation Biophysics. Nova Publishers. p. xxi. ISBN 9781600212802.
- Spiegel, P. K (1995). "The first clinical X-ray made in America—100 years". American Journal of Roentgenology. 164 (1): 241–243. doi:10.2214/ajr.164.1.7998549. PMID 7998549.
- Nicolaas A. Rupke, Eminent Lives in Twentieth-Century Science and Religion, page 300, Peter Lang, 2009 ISBN 3631581203
- National Library of Medicine. "Could X-rays Have Saved President William McKinley?" Visible Proofs: Forensic Views of the Body.
- Daniel, J. (April 10, 1896). "The X-Rays". Science. 3 (67): 562–563. Bibcode:1896Sci.....3..562D. doi:10.1126/science.3.67.562. PMID 17779817.
- Fleming, Walter Lynwood (1909). The South in the Building of the Nation: Biography A-J. Pelican Publishing. p. 300. ISBN 978-1589809468.
- Ce4Rt (Mar 2014). Understanding Ionizing Radiation and Protection. p. 174.
- Glasser, Otto (1934). Wilhelm Conrad Röntgen and the Early History of the Roentgen Rays. Norman Publishing. p. 294. ISBN 978-0930405229.
- Sansare K, Khanna V, Karjodkar F (2011). "Early victims of X-rays: A tribute and current perception". Dentomaxillofacial Radiology. 40 (2): 123–125. doi:10.1259/dmfr/73488299. PMC 3520298. PMID 21239576.
- Kathern, Ronald L. and Ziemer, Paul L. The First Fifty Years of Radiation Protection, physics.isu.edu
- Hrabak M, Padovan RS, Kralik M, Ozretic D, Potocki K (July 2008). "Nikola Tesla and the Discovery of X-rays". RadioGraphics. 28 (4): 1189–92. doi:10.1148/rg.284075206. PMID 18635636.
- California, San Francisco Area Funeral Home Records, 1835–1979. Database with images. FamilySearch. Jacob Fleischman in the entry for Elizabeth Aschheim. 03 Aug 1905. Citing funeral home J.S. Godeau, San Francisco, San Francisco, California. Record book Vol. 06, p. 1–400, 1904–1906. San Francisco Public Library. San Francisco History and Archive Center.
- Editor. (August 5, 1905). Aschheim. Obituaries. San Francisco Examiner. San Francisco, California.
- Editor. (August 5, 1905). Obituary Notice. Elizabeth Fleischmann. San Francisco Chronicle. Page 10.
- Schall, K. (1905). Electro-medical Instruments and their Management. Bemrose & Sons Ltd. Printers. pp. 96, 107.
- Birmingham City Council: Major John Hall-Edwards Archived September 28, 2012, at the Wayback Machine
- "X-ray movies show hard boiled egg fighting digestive organs (1913)". The News-Palladium. 1913-04-04. p. 2. Retrieved 2020-11-26.
- "X-ray moving pictures latest (1913)". Chicago Tribune. 1913-06-22. p. 32. Retrieved 2020-11-26.
- "Homeopaths to show movies of body's organs at work (1915)". The Central New Jersey Home News. 1915-05-10. p. 6. Retrieved 2020-11-26.
- "How X-Ray Movies Are Taken (1918)". Davis County Clipper. 1918-03-15. p. 2. Retrieved 2020-11-26.
- "X-ray movies (1919)". Tampa Bay Times. 1919-01-12. p. 16. Retrieved 2020-11-26.
- "X-ray movies perfected. Will show motions of bones and joints of human body. (1918)". The Sun. 1918-01-07. p. 7. Retrieved 2020-11-26.
- "Talk is cheap? X-ray used by Institute of Phonetics (1920)". New Castle Herald. 1920-01-02. p. 13. Retrieved 2020-11-26.
- Jorgensen, Timothy J. (10 October 2017). "Marie Curie and her X-ray vehicles' contribution to World War I battlefield medicine". The Conversation. Retrieved February 23, 2018.
- "X-Rays for Fitting Boots". Warwick Daily News (Qld.: 1919–1954). 1921-08-25. p. 4. Retrieved 2020-11-27.
- "T. C. BEIRNE'S X-RAY SHOE FITTING". Telegraph (Brisbane, Qld. : 1872–1947). 1925-07-17. p. 8. Retrieved 2017-11-05.
- "THE PEDOSCOPE". Sunday Times (Perth, WA : 1902–1954). 1928-07-15. p. 5. Retrieved 2017-11-05.
- "X-RAY SHOE FITTINGS". Biz (Fairfield, NSW : 1928–1972). 1955-07-27. p. 10. Retrieved 2017-11-05.
- "SHOE X-RAY DANGERS". Brisbane Telegraph (Qld. : 1948–1954). 1951-02-28. p. 7. Retrieved 2017-11-05.
- "X-ray shoe sets in S.A. 'controlled'". News (Adelaide, SA : 1923–1954). 1951-04-27. p. 12. Retrieved 2017-11-05.
- "Ban On Shoe X-ray Machines Resented". Canberra Times (ACT : 1926–1995). 1957-06-26. p. 4. Retrieved 2017-11-05.
- Fitzgerald, Richard (2000). "Phase-sensitive x-ray imaging". Physics Today. 53 (7): 23–26. Bibcode:2000PhT....53g..23F. doi:10.1063/1.1292471.
- David, C, Nohammer, B, Solak, H H, & Ziegler E (2002). "Differential x-ray phase contrast imaging using a shearing interferometer". Applied Physics Letters. 81 (17): 3287–3289. Bibcode:2002ApPhL..81.3287D. doi:10.1063/1.1516611.CS1 maint: multiple names: authors list (link)
- Wilkins, S W, Gureyev, T E, Gao, D, Pogany, A & Stevenson, A W (1996). "Phase-contrast imaging using polychromatic hard X-rays". Nature. 384 (6607): 335–338. Bibcode:1996Natur.384..335W. doi:10.1038/384335a0. S2CID 4273199.CS1 maint: multiple names: authors list (link)
- Davis, T J, Gao, D, Gureyev, T E, Stevenson, A W & Wilkins, S W (1995). "Phase-contrast imaging of weakly absorbing materials using hard X-rays". Nature. 373 (6515): 595–598. Bibcode:1995Natur.373..595D. doi:10.1038/373595a0. S2CID 4287341.CS1 maint: multiple names: authors list (link)
- Momose A, Takeda T, Itai Y, Hirano K (1996). "Phase-contrast X-ray computed tomography for observing biological soft tissues". Nature Medicine. 2 (4): 473–475. doi:10.1038/nm0496-473. PMID 8597962. S2CID 23523144.
- Attwood, David (1999). Soft X-rays and extreme ultraviolet radiation. Cambridge University. p. 2. ISBN 978-0-521-65214-8. Archived from the original on 2012-11-11. Retrieved 2012-11-04.
- "Physics.nist.gov". Physics.nist.gov. Retrieved 2011-11-08.
- Denny, P. P.; Heaton, B. (1999). Physics for Diagnostic Radiology. USA: CRC Press. p. 12. ISBN 978-0-7503-0591-4.
- Feynman, Richard; Leighton, Robert; Sands, Matthew (1963). The Feynman Lectures on Physics, Vol.1. USA: Addison-Wesley. pp. 2–5. ISBN 978-0-201-02116-5.
- L'Annunziata, Michael; Abrade, Mohammad (2003). Handbook of Radioactivity Analysis. Academic Press. p. 58. ISBN 978-0-12-436603-9.
- Grupen, Claus; Cowan, G.; Eidelman, S. D.; Stroh, T. (2005). Astroparticle Physics. Springer. p. 109. ISBN 978-3-540-25312-9.
- Hodgman, Charles, ed. (1961). CRC Handbook of Chemistry and Physics, 44th Ed. USA: Chemical Rubber Co. p. 2850.
- Government of Canada, Canadian Centre for Occupational Health and Safety (2019-05-09). "Radiation – Quantities and Units of Ionizing Radiation : OSH Answers". www.ccohs.ca. Retrieved 2019-05-09.
- Bushberg, Jerrold T.; Seibert, J. Anthony; Leidholdt, Edwin M.; Boone, John M. (2002). The essential physics of medical imaging. Lippincott Williams & Wilkins. p. 42. ISBN 978-0-683-30118-2.
- Bushberg, Jerrold T.; Seibert, J. Anthony; Leidholdt, Edwin M.; Boone, John M. (2002). The essential physics of medical imaging. Lippincott Williams & Wilkins. p. 38. ISBN 978-0-683-30118-2.
- Kissel, Lynn (2000-09-02). "RTAB: the Rayleigh scattering database". Radiation Physics and Chemistry. Lynn Kissel. 59 (2): 185–200. Bibcode:2000RaPC...59..185K. doi:10.1016/S0969-806X(00)00290-5. Archived from the original on 2011-12-12. Retrieved 2012-11-08.
- Attwood, David (1999). "3". Soft X-rays and extreme ultraviolet radiation. Cambridge University Press. ISBN 978-0-521-65214-8. Archived from the original on 2012-11-11. Retrieved 2012-11-04.
- "X-ray Transition Energies Database". NIST Physical Measurement Laboratory. 2011-12-09. Retrieved 2016-02-19.
- "X-Ray Data Booklet Table 1-3" (PDF). Center for X-ray Optics and Advanced Light Source, Lawrence Berkeley National Laboratory. 2009-10-01. Archived from the original (PDF) on 23 April 2009. Retrieved 2016-02-19.
- Whaites, Eric; Cawson, Roderick (2002). Essentials of Dental Radiography and Radiology. Elsevier Health Sciences. pp. 15–20. ISBN 978-0-443-07027-3.
- Bushburg, Jerrold; Seibert, Anthony; Leidholdt, Edwin; Boone, John (2002). The Essential Physics of Medical Imaging. USA: Lippincott Williams & Wilkins. p. 116. ISBN 978-0-683-30118-2.
- Emilio, Burattini; Ballerna, Antonella (1994). "Preface". Biomedical Applications of Synchrotron Radiation: Proceedings of the 128th Course at the International School of Physics -Enrico Fermi- 12–22 July 1994, Varenna, Italy. IOS Press. p. xv. ISBN 90-5199-248-3.
- Camara, C. G.; Escobar, J. V.; Hird, J. R.; Putterman, S. J. (2008). "Correlation between nanosecond X-ray flashes and stick–slip friction in peeling tape" (PDF). Nature. 455 (7216): 1089–1092. Bibcode:2008Natur.455.1089C. doi:10.1038/nature07378. S2CID 4372536. Retrieved 2 February 2013.
- Paul, Helmut; Muhr, Johannes (1986). "Review of experimental cross sections for K-shell ionization by light ions". Physics Reports. 135 (2): 47–97. Bibcode:1986PhR...135...47P. doi:10.1016/0370-1573(86)90149-3.
- Köhn, Christoph; Ebert, Ute (2014). "Angular distribution of Bremsstrahlung photons and of positrons for calculations of terrestrial gamma-ray flashes and positron beams". Atmospheric Research. 135–136: 432–465. arXiv:1202.4879. Bibcode:2014AtmRe.135..432K. doi:10.1016/j.atmosres.2013.03.012. S2CID 10679475.
- Köhn, Christoph; Ebert, Ute (2015). "Calculation of beams of positrons, neutrons, and protons associated with terrestrial gamma ray flashes". Journal of Geophysical Research: Atmospheres. 120 (4): 1620–1635. Bibcode:2015JGRD..120.1620K. doi:10.1002/2014JD022229.
- Kochkin, Pavlo; Köhn, Christoph; Ebert, Ute; Van Deursen, Lex (2016). "Analyzing x-ray emissions from meter-scale negative discharges in ambient air". Plasma Sources Science and Technology. 25 (4): 044002. Bibcode:2016PSST...25d4002K. doi:10.1088/0963-0252/25/4/044002.
- Cooray, Vernon; Arevalo, Liliana; Rahman, Mahbubur; Dwyer, Joseph; Rassoul, Hamid (2009). "On the possible origin of X-rays in long laboratory sparks". Journal of Atmospheric and Solar-Terrestrial Physics. 71 (17–18): 1890–1898. Bibcode:2009JASTP..71.1890C. doi:10.1016/j.jastp.2009.07.010.
- Köhn, C; Chanrion, O; Neubert, T (2017). "Electron acceleration during streamer collisions in air". Geophysical Research Letters. 44 (5): 2604–2613. Bibcode:2017GeoRL..44.2604K. doi:10.1002/2016GL072216. PMC 5405581. PMID 28503005.
- Köhn, C; Chanrion, O; Babich, L P; Neubert, T (2018). "Streamer properties and associated x-rays in perturbed air". Plasma Sources Science and Technology. 27 (1): 015017. Bibcode:2018PSST...27a5017K. doi:10.1088/1361-6595/aaa5d8.
- Köhn, C; Chanrion, O; Neubert, T (2018). "High-Energy Emissions Induced by Air Density Fluctuations of Discharges". Geophysical Research Letters. 45 (10): 5194–5203. Bibcode:2018GeoRL..45.5194K. doi:10.1029/2018GL077788. PMC 6049893. PMID 30034044.
- Förster, A; Brandstetter, S; Schulze-Briese, C (2019). "Transforming X-ray detection with hybrid photon counting detectors". Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 377 (2147): 20180241. Bibcode:2019RSPTA.37780241F. doi:10.1098/rsta.2018.0241. PMC 6501887. PMID 31030653.
- "Roentgen's discovery of the x-ray". www.bl.uk. Retrieved 2019-05-09.
- Medical Radiation Exposure Of The U.S. Population Greatly Increased Since The Early 1980s, Science Daily, March 5, 2009
- Accuracy of total knee implant position assessment based on postoperative X-rays, registered to pre-operative CT-based 3D models. Annemieke van Haver, Sjoerd Kolk, Sebastian de Boodt, Kars Valkering, Peter Verdonk. Orthopaedic Proceedings, Published 20 February 2017. http://bjjprocs.boneandjoint.org.uk/content/99-B/SUPP_4/80
- Accuracy assessment of 2D X-ray to 3D CT registration for measuring 3D postoperative implant position. Lara Vigneron, Hendrik Delport, Sebastian de Boodt. White paper, Published 2014. http://www.materialise.com/en/system/files/uploads/resources/X-ray.pdf
- Herman, Gabor T. (2009). Fundamentals of Computerized Tomography: Image Reconstruction from Projections (2nd ed.). Springer. ISBN 978-1-85233-617-2.
- Advances in kilovoltage x-ray beam dosimetry in Hill R, Healy B, Holloway L, Kuncic Z, Thwaites D, Baldock C (2014). "Advances in kilovoltage x-ray beam dosimetry". Phys Med Biol. 59 (6): R183–231. Bibcode:2014PMB....59R.183H. doi:10.1088/0031-9155/59/6/r183. PMID 24584183.CS1 maint: multiple names: authors list (link)
- Thwaites David I (2006). "Back to the future: the history and development of the clinical linear accelerator". Physics in Medicine and Biology. 51 (13): R343–R362. Bibcode:2006PMB....51R.343T. doi:10.1088/0031-9155/51/13/R20. PMID 16790912. S2CID 7672187.
- Hall EJ, Brenner DJ (2008). "Cancer risks from diagnostic radiology". Br J Radiol. 81 (965): 362–78. doi:10.1259/bjr/01948454. PMID 18440940.
- Brenner DJ (2010). "Should we be concerned about the rapid increase in CT usage?". Rev Environ Health. 25 (1): 63–8. doi:10.1515/REVEH.2010.25.1.63. PMID 20429161. S2CID 17264651.
- De Santis M, Cesari E, Nobili E, Straface G, Cavaliere AF, Caruso A (2007). "Radiation effects on development". Birth Defects Res. C Embryo Today. 81 (3): 177–82. doi:10.1002/bdrc.20099. PMID 17963274.
- "11th Report on Carcinogens". Ntp.niehs.nih.gov. Archived from the original on 2010-12-09. Retrieved 2010-11-08.
- Brenner DJ, Hall EJ (2007). "Computed tomography—an increasing source of radiation exposure". N. Engl. J. Med. 357 (22): 2277–84. doi:10.1056/NEJMra072149. PMID 18046031. S2CID 2760372.
- Upton AC (2003). "The state of the art in the 1990s: NCRP report No. 136 on the scientific bases for linearity in the dose-response relationship for ionizing radiation". Health Physics. 85 (1): 15–22. doi:10.1097/00004032-200307000-00005. PMID 12852466. S2CID 13301920.
- Calabrese EJ, Baldwin LA (2003). "Toxicology rethinks its central belief" (PDF). Nature. 421 (6924): 691–2. Bibcode:2003Natur.421..691C. doi:10.1038/421691a. PMID 12610596. S2CID 4419048. Archived from the original (PDF) on 2011-09-12.
- Berrington de González A, Darby S (2004). "Risk of cancer from diagnostic X-rays: estimates for the UK and 14 other countries". Lancet. 363 (9406): 345–351. doi:10.1016/S0140-6736(04)15433-0. PMID 15070562. S2CID 8516754.
- Brenner DJ, Hall EJ (2007). "Computed tomography – an increasing source of radiation exposure". New England Journal of Medicine. 357 (22): 2277–2284. doi:10.1056/NEJMra072149. PMID 18046031. S2CID 2760372.
- Radiologyinfo.org, Radiological Society of North America and American College of Radiology
- "National Cancer Institute: Surveillance Epidemiology and End Results (SEER) data". Seer.cancer.gov. 2010-06-30. Retrieved 2011-11-08.
- Caon, M., Bibbo, G. & Pattison, J. (2000). "Monte Carlo calculated effective dose to teenage girls from computed tomography examinations". Radiation Protection Dosimetry. 90 (4): 445–448. doi:10.1093/oxfordjournals.rpd.a033172.CS1 maint: multiple names: authors list (link)
- Shrimpton, P.C; Miller, H.C; Lewis, M.A; Dunn, M. Doses from Computed Tomography (CT) examinations in the UK – 2003 Review Archived September 22, 2011, at the Wayback Machine
- Gregory KJ, Bibbo G, Pattison JE (2008). "On the uncertainties in effective dose estimates of adult CT head scans". Medical Physics. 35 (8): 3501–10. Bibcode:2008MedPh..35.3501G. doi:10.1118/1.2952359. PMID 18777910.
- Giles D, Hewitt D, Stewart A, Webb J (1956). "Preliminary Communication: Malignant Disease in Childhood and Diagnostic Irradiation In-Utero". Lancet. 271 (6940): 447. doi:10.1016/S0140-6736(56)91923-7. PMID 13358242.
- "Pregnant Women and Radiation Exposure". eMedicine Live online medical consultation. Medscape. 28 December 2008. Archived from the original on January 23, 2009. Retrieved 2009-01-16.
- Donnelly LF (2005). "Reducing radiation dose associated with pediatric CT by decreasing unnecessary examinations". American Journal of Roentgenology. 184 (2): 655–7. doi:10.2214/ajr.184.2.01840655. PMID 15671393.
- US National Research Council (2006). Health Risks from Low Levels of Ionizing Radiation, BEIR 7 phase 2. National Academies Press. pp. 5, fig.PS–2. ISBN 978-0-309-09156-5., data credited to NCRP (US National Committee on Radiation Protection) 1987
- "ANS / Public Information / Resources / Radiation Dose Calculator".
- The Nuclear Energy Option, Bernard Cohen, Plenum Press 1990 Ch. 5 Archived November 20, 2013, at the Wayback Machine
- Muller, Richard. Physics for Future Presidents, Princeton University Press, 2010
- X-Rays Archived 2007-03-15 at the Wayback Machine. Doctorspiller.com (2007-05-09). Retrieved on 2011-05-05.
- X-Ray Safety Archived April 4, 2007, at the Wayback Machine. Dentalgentlecare.com (2008-02-06). Retrieved on 2011-05-05.
- "Dental X-Rays". Idaho State University. Retrieved November 7, 2012.
- D.O.E. – About Radiation Archived April 27, 2012, at the Wayback Machine
- Chalkley, M.; Listl, S. (30 December 2017). "First do no harm – The impact of financial incentives on dental X-rays". Journal of Health Economics. 58 (March 2018): 1–9. doi:10.1016/j.jhealeco.2017.12.005. PMID 29408150.
- Kasai, Nobutami; Kakudo, Masao (2005). X-ray diffraction by macromolecules. Tokyo: Kodansha. pp. 291–2. ISBN 978-3-540-25317-4.
- Monico L, Van der Snickt G, Janssens K, De Nolf W, Miliani C, Verbeeck J, Tian H, Tan H, Dik J, Radepont M, Cotte M (2011). "Degradation Process of Lead Chromate in Paintings by Vincent van Gogh Studied by Means of Synchrotron X-ray Spectromicroscopy and Related Methods. 1. Artificially Aged Model Samples". Analytical Chemistry. 83 (4): 1214–1223. doi:10.1021/ac102424h. PMID 21314201. Monico L, Van der Snickt G, Janssens K, De Nolf W, Miliani C, Dik J, Radepont M, Hendriks E, Geldof M, Cotte M (2011). "Degradation Process of Lead Chromate in Paintings by Vincent van Gogh Studied by Means of Synchrotron X-ray Spectromicroscopy and Related Methods. 2. Original Paint Layer Samples" (PDF). Analytical Chemistry. 83 (4): 1224–1231. doi:10.1021/ac1025122. PMID 21314202.
- Ahi, Kiarash (May 26, 2016). Anwar, Mehdi F; Crowe, Thomas W; Manzur, Tariq (eds.). "Advanced terahertz techniques for quality control and counterfeit detection". Proc. SPIE 9856, Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and Defense, 98560G. Terahertz Physics, Devices, and Systems X: Advanced Applications in Industry and Defense. 9856: 98560G. Bibcode:2016SPIE.9856E..0GA. doi:10.1117/12.2228684. S2CID 138587594. Retrieved May 26, 2016.
- Bickmore, Helen (2003). Milady's Hair Removal Techniques: A Comprehensive Manual. ISBN 978-1401815554.
- Frame, Paul. "Wilhelm Röntgen and the Invisible Light". Tales from the Atomic Age. Oak Ridge Associated Universities. Retrieved 2008-05-19.
- Als-Nielsen, Jens; Mcmorrow, Des (2001). Elements of Modern X-Ray Physics. John Wiley & Sons Ltd. pp. 40–41. ISBN 978-0-471-49858-2.
- Historical X-ray tubes
- Röntgen's 1895 article, on line and analyzed on BibNum [click 'à télécharger' for English analysis]
- Example Radiograph: Fractured Humerus
- A Photograph of an X-ray Machine
- X-ray Safety
- An X-ray tube demonstration (Animation)
- 1896 Article: "On a New Kind of Rays"
- "Digital X-Ray Technologies Project"
- What is Radiology? a simple tutorial
- 50,000 X-ray, MRI, and CT pictures MedPix medical image database
- Index of Early Bremsstrahlung Articles
- Extraordinary X-Rays Archived 2010-06-27 at the Wayback Machine – slideshow by Life
- X-rays and crystals
- This page is based on the Wikipedia article X-ray; it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License (CC-BY-SA). You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
In 1733, an immigrant wrote to his brother in Switzerland that he earned what he wanted.
The Indian population was threatened by the tide of newcomers who believed in liberty and possession of land.
Indian communities were integrated into the British system by the 18th century.
Indian warriors fought in the century's imperial wars.
Their cultures were very different from what they were at the time of first contact.
The Indian societies that existed for centuries had been wiped out by disease and warfare.
The Creek Confederacy, which united dozens of Indian towns in South Carolina and Georgia, was created from their remnants.
Some Indians chose to live among whites rather than in their own communities.
They had grown accustomed to using European products such as knives, hatchets, needles, kettles, and firearms.
Social chaos was created by the introduction of alcohol.
In 1753, a Cherokee told the governor of South Carolina that the clothes his people wore were made for them by others.
We use their bullets to kill deer.
In this print from the first half of the 18th century, America is depicted as a Native American.
Farmers and planters viewed Indians as little more than an obstacle to their desire for land, even though traders and British officials saw potential profits in Indian villages.
Indians were expected to give way to white settlers.
The native population of the Virginia and South Carolina frontier had already been displaced when large numbers of settlers arrived.
The peaceful Indian-white relations established by William Penn were upset by the flood of German and Scotch-Irish settlers into Pennsylvania.
At a 1721 conference, a group of colonial and Indian leaders affirmed Penn's chain of friendship.
Conflicts over land became more frequent.
The fraudulent dealing that was common in other colonies was brought to Pennsylvania by the Walking Purchase.
The Lenni Lenape Indians agreed to cede a tract of land.
The Governor hired a team of swift runners, who marked out an area far in excess of what the Indians had anticipated.
By 1760, when Pennsylvania's population had grown to 220,000, Indiancolonist relations had become poisoned by suspicion and hostility.
"old William Penn" treated them with respect and fairness.
The different regions of the British colonies had different economic and social orders by the mid-eighteenth century.
The area stretching from central Pennsylvania southward through the Shenandoah Valley of Virginia and into upland North and South Carolina was dominated by small farms that were geared to production for local consumption.
In North America, the backcountry was the fastest growing region.
The only white people in Indian country in 1730 were hunters and traders.
By the eve of the American Revolution, the backcountry contained one-quarter of Virginia's population and half of South Carolina's.
Slave-owning planters entered the area, seeking fertile soil for tobacco farming, but most were farm families raising grain and livestock.
Penn's son Thomas commissioned a painting from the artist Benjamin West in 1771.
In the 19th century, Americans were reminded that Indians had been a part of their history.
In the Middle Colonies of New York, New Jersey, and Pennsylvania, farmers were more focused on commerce than on the frontier, growing grain for their own use and for sale abroad, and employing wage laborers, tenants, and slaves.
New York's growth lagged behind that of neighboring colonies because large landlords controlled so much of the desirable land.
Colonists enjoyed a standard of living that was unimaginable in Europe.
Coffee and tea, pins, ribbons, glassware, ceramics, and clothing were some of the cheap consumer goods produced and traded by Great Britain during the 18th century.
The British empire was integrated by trade.
The American colonies shared in the era's consumer revolution as they were drawn more and more into the system of Atlantic commerce.
British goods were advertised in American newspapers in port cities and small inland towns.
British merchants gave American traders loans so they could import these products.
The mass production, advertising, and sale of consumer goods did not exist in colonial America.
Records of people's possessions at the time of death revealed the wide dispersal in American homes of English and Asian products.
Most colonists lived in a world of homespun clothing and homemade goods in the 17th century.
Modest farmers and artisans used to own books, ceramic plates, metal utensils, and items made of imported silk and cotton.
Tea, once a luxury enjoyed only by the wealthy, became a necessity of life.
Britain's mainland colonies were mostly agricultural.
Nine-tenths of the population lived in rural areas.
Boston, New York, Philadelphia, and Charleston were small compared with the great cities of Europe and Spanish America.
When the population of Mexico City was 100,000, Boston and New York had 6,000 and 4,500 residents, respectively.
Spanish America had eight cities that were larger than any in English North America.
English American cities were used to gather agricultural goods and imported items to be distributed to the countryside.
The rise of port cities was encouraged by the expansion of trade, with a growing population of colonial merchants and artisans as well as an increasing number of poor.
Philadelphia was the capital of the New World and the third-busiest port in the empire.
The financial, commercial, and cultural center of British America depended on economic integration with the rich agricultural region nearby.
Philadelphia merchants organized the collection of farm goods, supplied rural storekeepers, and extended credit to consumers.
They shipped flour, bread, and meat to the West Indies and Europe.
The city had a large population of furniture makers, jewelers, and silversmiths, as well as hundreds of lesser artisans like weavers, blacksmiths, and construction workers.
The typical artisan had his own tools and worked in a small workshop at home with help from family members and apprentices.
The artisan's skill, which set him apart from the common laborers below him in the social scale, was the key to his existence, and it gave him a far greater degree of economic freedom than those dependent on others for a livelihood.
"He that hath a trade, hath an estate," Benjamin Franklin wrote.
American craftsmen were helped by the expanding consumer market.
Journeymen had a good chance of rising to the status of master and establishing a workshop of their own.
Some achieved great success.
One of the city's most prominent artisans was Myer Myers, a Jewish silversmith of Dutch ancestry who was born in New York City in 1723.
Religious ornaments for both synagogues and Protestant churches, as well as jewelry, candlesticks, coffeepots, and other gold and silver objects, were produced by Myers.
He used some of his money to buy land in New Hampshire and Connecticut.
Myers's career reflected the opportunities colonial cities offered to skilled men of diverse ethnic and religious backgrounds.
An Atlantic World
People, ideas, and goods flowed back and forth across the Atlantic, knitting together the empire and its diverse populations and creating webs of interdependence among the European empires.
Tobacco, sugar, and other products of the Western Hemisphere were marketed as far away as eastern Europe.
The slave trade between Africa and Brazil was financed by London bankers.
Spain imported goods from other countries.
The major overseas market for British manufactured goods was the North American and West Indian colonies.
Although most colonial output was consumed at home, North Americans shipped farm products to Britain, the West Indies and other countries outside the empire.
Most of the tobacco crop was re-exported to Europe by British merchants.
Most of the bread and flour exported from the colonies was destined for the West Indies.
African slaves there grew sugar that could be distilled into rum, a product that was popular among both North American colonists and Indians, who obtained it by trading furs and deerskins.
The trade in fish and grains with southern Europe was flourishing.
One third of the British empire's trading fleet was built in New England.
There were many advantages to being a member of the empire.
Most Americans didn't complain about British regulation of their trade because it was good for the colonies and the mother country.
American shipping was protected by the Royal Navy.
English America became more and more similar to the mother country across the Atlantic during the 18th century.
Indentured servants were families or persons who received passage to the New World in exchange for a promise to work off their debt in America.
In the Walking Purchase of 1737, Pennsylvania colonists tricked the Lenni Lenape Indians out of their land.
The Lenape had agreed to give up land equivalent to the distance a man could walk in a day.
In colonial America, the backcountry was the area stretching from central Pennsylvania southward through the Shenandoah Valley of Virginia and into upland North and South Carolina.
John Durand painted a portrait of Jane Beekman, the daughter of James Beekman, one of the wealthiest colonists.
It is unusual to depict a young girl with a book, rather than emphasizing fashionable attire.
Both cultures of the Beekman family emphasized the importance of education for women.
As colonial society matured, an elite emerged that was not as powerful nor as wealthy as the aristocracy of England, but still dominated politics and society.
The gap between rich and poor probably grew more rapidly in the 18th century than any other period of American history.
Expansion of trade in New England and the Middle Colonies allowed for the emergence of a powerful upper class of merchants, often linked by family or commercial ties to great trading firms in London.
In colonial America, there were no banks.
Business talent and personal connections were more important than credit and money.
Slave plantations in the Chesapeake and the Lower South produced tobacco and rice as staple crops for the world market.
The planters had enormous wealth.
The rulers of Pennsylvania and Maryland were included in the colonial elite.
America had no aristocracy like in Britain.
It didn't have a system of established social ranks or family ties stretching back to medieval times.
The De Lanceys, Livingstons, and van Rensselaers of New York, the Penn family in Pennsylvania, and a few southern planters were the only ones whose landholdings were comparable to those of the British aristocracy.
The men of prominence were in charge of the colonial government.
The gentry controlled the vestries, or local governing bodies, of the established Anglican Church, dominated the county courts, and were prominent in Virginia's legislature.
Seven members of the same generation of the Lee family sat in the House of Burgesses in the 17th century.
An artist painted a portrait of Robert "King" Carter of Virginia.
Carter was one of the most influential men in the colonies.
Virginia in the 18th century was a far healthier environment than in the early days of settlement.
Planters could now expect to pass their wealth down to the next generation, providing estates for their sons and establishing family dynasties.
Every Virginian of note achieved prominence through family connections.
The days when self-made men could rise into the Virginia gentry were long gone.
Thomas Jefferson's grandfather was a justice of the peace, militia captain, and sheriff, and his father was a member of the House of Burgesses.
The justices of the peace were George Washington's father, grandfather, and greatgrandfather.
As western areas opened for settlement, the Virginia gentry gained possession of large tracts of land.
Grants of 20,000 to 40,000 acres were common.
By the time of his death in 1732, Robert "King" Carter had acquired 300,000 acres of land and 1,000 slaves.
There wasn't a real "American" identity before the American Revolution.
The term "Americans" was used to describe Indians rather than colonists in the 17th century.
Europeans often depicted the colonies with an image of a Native American.
European immigrants used languages other than English from their home countries.
Intermarriage with other groups was more common among Huguenots than among Jews.
The British people wanted to create an "English" identity in the New World.
The colonists, for their part, were convinced that they were just as British as people in the mother country.
In Great Britain, however, the colonists were often seen as a collection of convicts, religious dissidents, and impoverished servants.
In response, many colonists asserted their claim to Britishness all the more strongly.
American Indians and Africans were seen as unable to wield the responsibilities of liberty due to their place of birth.
They should not be involved in governance.
British identity in the colonies was defined by opposition to others, including Spanish and French Catholics.
Most Indians preferred to keep their own cultures and religions, so they were not included in a collective colonial identity.
Intermarriage and culture exchange between settlers and Indians was much more common in the Spanish and French New World empires.
In the 18th century, the American colonies had more regular trade with Britain than with one another.
A common lifestyle and sense of common interests were developed by the elite in different regions.
Wealthy Americans modeled their lives on British behavior.
They wanted to demonstrate their status and legitimacy by sending their sons to Britain for education, and building homes with fashionable furnishings modeled on London.
Large rooms for entertainment, display cases for imported luxury goods, and elaborate formal gardens were included in their residences.
George Washington had a coat of arms designed for his family in imitation of the English upper-class practice.
Desperate to follow an extravagant lifestyle, many planters fell into debt.
William Byrd III of Virginia had a debt that was almost unheard of in England or America.
Virginia's gentry was affected by the world market for tobacco.
South Carolina planters were the richest on the mainland.
South Carolina's elite traveled north to Newport, Rhode Island, for their summer vacations and spent most of their time in Charleston, the only real urban center south of Philadelphia and the richest city in British North America.
The social life here centered on theaters, literary societies, and social events.
Like their Virginia counterparts, South Carolina's grandees lived a lavish lifestyle with imported furniture, fine wines, silk clothing, and other items from England.
The house slaves were dressed in specially designed uniforms.
The per capita wealth in the Charleston District was more than four times that of tobacco areas in Virginia and eight times that of Philadelphia.
South Carolina's wealth was very concentrated.
The richest 10 percent of the colony owned half of the wealth.
England's balanced, stable social order was mimicked by elites throughout the colonies.
Liberty, in their eyes, meant the right of those with wealth and prominence to rule over others.
Some men were endowed with greater talents than others and were destined to rule the society.
They believed that the social order was held together by webs of influence.
Each place in the hierarchy had different responsibilities, and one's status was revealed in dress, manners, and the magnificence of one's home.
One colonial newspaper said that "dependence" and "superiority" were natural elements of any society.
Political power and wealth were legitimized by an image of refinement.
The elites prided themselves on making productive use of leisure.
On both sides of the Atlantic, elites considered work to be reserved for slaves and common folk.
The gentleman was free from labor.
At the other end of the social scale, poverty was a visible feature of colonial life.
Although not considered by most of their society, the growing number of slaves lived in poverty.
In Britain, between one-quarter and one-half of the people needed public assistance in the early part of the century.
In long-settled areas, access to land diminished rapidly as the colonial population expanded.
The high birthrate fueled population growth in New England.
Sons who could not inherit farms were forced to move to other colonies or try their hand at a trade in the region's towns. Tenants and wage laborers were present on farms in the Middle Colonies by mid-century.
The number of propertyless wage earners in colonial cities increased.
Half of the wealth at mid-century was concentrated in the hands of the richest 10 percent of the population.
Attitudes and policies toward poverty in colonial America mirrored British precedents.
The poor were seen as lazy, shiftless, and responsible for their own plight by the better off.
Rural communities and cities were responsible for assisting their own.
Poor people were often set to work in workhouses where they produced goods that were paid for by the authorities.
The children were sent to work as apprentices.
Most communities adopted stringent measures to "warn out" unemployed and propertyless newcomers who might become dependent on local poor relief.
The unwanted poor were either thrown out of the area or declared ineligible for assistance.
The number of poor people warned out in Essex County rose from 200 in the 1730s to 1,700 in the 1760s.
Many were members of families headed by women whose husbands had died.
The majority of Americans lived in the middle of wealth and poverty.
The wide distribution of land and the economic independence of most ordinary free families were what distinguished the mainland colonies from Europe.
The majority of the free male population were farmers.
England had no class of laborers like American slaves, but three-fifths of its people owned no property at all.
The scene was painted by Edward Hicks in the 1840s.
It depicts a prosperous farm in colonial eastern Pennsylvania that is largely self-sufficient but also produces for the market.
The farm workers are slaves.
The young Edward Hicks is shown listening to Mrs. Twining read the Bible.
Landownership was seen as a right by colonial farm families in the 18th century.
They were against efforts to limit their access to land.
Colonists in British North America shared a dislike of personal dependence and an understanding of freedom as not relying on others for a livelihood.
These beliefs accorded with the wide distribution of property, which made economic independence part of the lived experience of large numbers of white colonists.
The family was the center of economic life in America during the 18th century.
All members of the family contributed to the family's income.
The independence of the small farmer depended on the labor of women and children.
The high birthrate in part reflected the need for as many hands as possible on colonial farms.
Farmers grow food for their own consumption and acquire land to pass it on to their sons.
The consumer revolution and expanding networks of Atlantic trade drew more farmers into production for the market.
More marriages became lifetime commitments as the population grew and the death rate declined.
Free women were expected to be good wives and mothers.
Male domination took on more and more social reality.
The law mandated that estates must be passed on to the oldest son in several colonies.
The opportunities that existed for women in the early period waned as colonial society became more structured.
The courts in Connecticut were informal and disorganized in the 17th century.
It was necessary to have a lawyer in court in the 18th century.
Women were barred from practicing as attorneys.
In the 17th century, the desperate need for labor had blurred the distinction between men's and women's work.
The division of labor was solidified in the 18th century.
Women's work included cooking, cleaning, sewing, making butter, and assisting with agricultural chores.
The work of wives and daughters often spelled the difference between a family's self-sufficiency and poverty.
The popular adage that "a woman's work is never done" was true.
Even as the consumer revolution reduced the demands on many women by making available store-bought goods previously produced at home, women's work seemed to increase.
More time was spent on child care and domestic chores as a result of the lower infant mortality.
All family members had to contribute to family income because of the demand for new goods.
The work was exhausting for most women.
Mary Cooper, a Long Island woman, wrote in her diary that she was dirty and distressed.
The area that would become the United States was home to a wide variety of peoples and different kinds of social organization.
The political and economic life of every colony was dominated by elites.
Large numbers of colonists had greater opportunities for freedom, such as access to the vote, the right to worship as they pleased, and an escape from oppressive government.
The highest per capita income in the world was probably enjoyed by free colonists.
The colonies' economic growth contributed to a high birthrate.
Many others were confined to either the partial freedom of indentured servitude or the complete absence of freedom in slavery.
Both timeless longings for freedom and new and unprecedented forms of unfreedom had been essential to the North American colonies' remarkable development.
The statement should be considered with respect to men and women, whites and blacks, and rich and poor.
King Philip was the chief of the Wampanoags.
He objected to English attempts to convert Indians to Christianity and was killed in the war against the English that bears his name.
The conflict began in 1675 with an Indian uprising.
It ended with the dispossession of the region's Indians and expanded freedom for white New Englanders.
The policy of Great Britain, known as mercantilism, was to regulate the economies of its colonies to benefit the mother country.
The English Parliament passed a law in 1650 to control colonial trade and bolster the mercantile system.
The English and the Iroquois nations formed an alliance in the 1670s.
The Yamasee and Creek Indians revolted against Carolina settlers because of rising debts and slave traders' raids.
Many of the defeated Indians fled to Florida.
The Quakers, a religious group in England and America, were early proponents of the abolition of slavery and equal rights for women.
A plantation was a large agricultural enterprise that used unfree labor to produce a crop.
Berkeley had failed to protect settlers from Indian raids and did not allow them to occupy Indian lands, which led to the revolt led by Nathaniel Bacon.
The British throne was taken from James II in 1688 by a coup engineered by a small group of aristocrats.
The rights of Englishmen were inscribed into law in a series of laws enacted in 1689.
The English regulatory board was established in 1675.
The Dominion of New England consolidated the New England colonies, and later New York and New Jersey, under a single government; it reverted to individual colonial governments three years later.
All English Protestants were allowed to worship without restriction.
What Do Molecules Look Like? The Lewis Dot Structure approach provides some insight into molecular structure in terms of bonding, but what about 3D geometry? Recall that we have two types of electron pairs: bonding and lone. Valence-Shell Electron-Pair Repulsion (VSEPR): the 3D structure is determined by minimizing the repulsion of electron pairs.
Electron pairs (both bonding and lone) are distributed around a central atom such that electron-electron repulsions are minimized.
Central atoms from Periods 1 and 2 can have two, three, or four electron pairs; five or six electron pairs require a central atom from Period 3 or beyond.
H H C H H Arranging Electron Pairs • Must consider both bonding and lone pairs when minimizing repulsion. • Example: CH4 (bonding pairs only) Lewis Structure VSEPR Structure
H H N H Arranging Electron Pairs (cont.) Example: NH3 (both bonding and lone pairs). Lewis Structure VSEPR Structure Note:“electron pair geometry” vs. “molecular shape”
VSEPR Structure Guidelines • The previous examples illustrate the strategy for applying VSEPR to predict molecular structure: • Construct the Lewis Dot Structure • Arrange bonding/lone electron pairs in space such that repulsions are minimized (electron pair geometry). • Name the molecular shape from the position of the atoms. VSEPR Shorthand: 1. Refer to central atom as “A” 2. Attached atoms are referred to as “X” 3. Lone pair are referred to as “E” Examples: CH4: AX4 NH3: AX3E H2O: AX2E2 BF3: AX3
Be F F Be F F VSEPR: 2 electron pairs Experiments show that molecules with multiple bonds can also be linear. Linear (AX2): angle between bonds is 180° Example: BeF2 Multiple bonds are treated as a single effective electron group. F F Be 180° More than one central atom? Determine shape around each.
VSEPR: 3 electron pairs Trigonal Planar (AX3): angle between bonds is 120° Multiple bond is treated as a single effective electron group. Example: BF3 F F 120° B F F F B F
H H C H H VSEPR: 4 electron pairs (cont.) Tetrahedral (AX4): angle between bonds is ~109.5° Example: CH4 109.5° tetrahedral e- pair geometry AND tetrahedral molecular shape
Bonding vs. Lone pairs Bond angle in a tetrahedral arrangement of electron pairs may vary from 109.5° due to size differences between bonding and lone pair electron densities. bonding pair is constrained by two nuclear potentials; more localized in space. lone pair is constrained by only one nuclear potential; less localized (needs more room).
H H N H VSEPR: 4 electron pairs Trigonal pyramidal (AX3E): Bond angles are <109.5°, and structure is nonplanar due to repulsion of lone pair. Example: NH3 107° tetrahedral e- pair geometry; trigonal pyramidal molecular shape
VSEPR: 4 electron pairs (cont.) Classic example of tetrahedral angle shift from 109.5° is water (AX2E2): 104.5o “bent” tetrahedral e- pair geometry; bent molecular shape
VSEPR: 4 electron pairs (cont.) Comparison of CH4 (AX4), NH3 (AX3E), and H2O (AX2E2):
AX2E AX3E AX2E2 1. Refer to central atom as “A” 2. Attached atoms are referred to as “X” 3. Lone pair are referred to as “E”
H H H H Molecular vs. Electron-Pair Geometry F N O C Central Atom Compound Electron-Pair Geometry Molecular Shape Carbon, C CH4 tetrahedral tetrahedral Nitrogen, N NH3 tetrahedral trigonal pyramidal Oxygen, O H2O tetrahedral bent Fluorine, F HF tetrahedral linear
What is the electron-pair geometry and the molecular shape for HCFS? • trigonal planar, bent • trigonal planar, trigonal planar • tetrahedral, trigonal planar • tetrahedral, tetrahedral
VSEPR: Beyond the Octet Systems with expanded valence shells will have five or six electron pairs around a central atom. F Cl F Cl F Cl S P Cl F F Cl F 90° 90° F F F 90° S 120° F F F
F F F F F F F F F F F F VSEPR: 5 electron pairs • • Consider the structure of SF4 (34 e-, AX4E) • What is the optimum arrangement of electron pairs around S? ?? S S S Compare e– pair angles lone-pair / bond-pair: twoat 90o, twoat 120o threeat 90o threeat 90o, three at 120o fourat 90o, one at 120o bond-pair / bond-pair: Repulsive forces (strongest to weakest): lone-pair/lone-pair > lone-pair/bond-pair > bond-pair/bond-pair
VSEPR: 5 electron pairs The optimum structure maximizes the angular separation of the lone pairs. I3- (AX2E3):
AX4E AX3E2 AX2E3 5-electron-pair geometries our previous example
VSEPR: 6 electron pairs Which of these is the more likely structure? See-saw Square Planar
AX5E AX4E2 6-electron-pair geometries our previous example
Molecular Dipole Moments We can use VSEPR to determine the polarity of a whole molecule. • Draw Lewis structures to determine 3D arrangement of atoms. 2. If one “side” of the molecule has more EN atoms than the other, the molecule has a net dipole. Shortcut: completely symmetric molecules will not have a dipole regardless of the polarity of the bonds.
Molecular Dipoles The C=O bonds have dipoles of equal magnitude but opposite direction, so there is no net dipole moment. The O-H bonds have dipoles of equal magnitude that do not cancel each other, so water has a net dipole moment.
Molecular Dipoles (cont.) symmetric symmetric asymmetric
Molecular Dipole Example • Write the Lewis dot and VESPR structures for CF2Cl2. Does it have a dipole moment? F 32 e- Cl F Cl Tetrahedral
Advanced VSEPR Application Molecules with more than one central atom… methanol (CH3OH) H C O H H tetrahedral e- pairs tetrahedral shape tetrahedral e- pairs bent shape H
# e- pairs e- Geom. Molec. Geom. The VSEPR Table
# e- pairs e- Geom. Molec. Geom. The VSEPR Table
What is the expected shape of ICl2+? 20 e- AX2E2 A. linear C. tetrahedral D. square planar B. bent
Valence Bond Theory Basic Principle of Localized Electron Model: A covalent bond forms when the orbitals from two atoms overlap and a pair of electrons occupies the region between the two nuclei. Rule 1: Maximum overlap. The bond strength depends on the attraction of nuclei to the shared electrons, so: The greater the orbital overlap, the stronger the bond.
Valence Bond Theory Basic Principle of Localized Electron Model: A covalent bond forms when the orbitals from two atoms overlap and a pair of electrons occupies the region between the two nuclei. Rule 2: Spins pair. The two electrons in the overlap region occupy the same space and therefore must have opposite spins. There may be no more than 2 electrons in a molecular orbital.
Valence Bond Theory • Basic Principle of Localized Electron Model: • A covalent bond forms when the orbitals from two atoms overlap and a pair of electrons occupies the region between the two nuclei. • Rule 3: Hybridization.To explain experimental observations, Pauling proposed that the valence atomic orbitals in a molecule are different from those in the isolated atoms. We call this concept • Hybridization
What is hybridization? • Atoms adjust to meet the “needs” of the molecule. • In a molecule, electrons rearrange in an attempt to give each atom a noble gas configuration and to minimize electron repulsion. • Atoms in a molecule adjust their orbitals through hybridization in order for the molecule to have a structure with minimum energy. • The source of the valence electrons is not as important as where they are needed in the molecule to achieve a maximum stability.
Example: Methane • 4 equivalent C-H covalent bonds • VSEPR predicts a tetrahedral geometry
How do we explain formation of 4 equivalent C-H bonds? The Valence Orbitals of a Carbon Atom Carbon: 2s22p2
Hybridization: Mixing of Atomic Orbitals to form New Orbitals for Bonding + + – – + – + – + – + – +
Other Representations of Hybridization: y1 = 1/2[(2s) + (2px) + (2py) + (2pz)] y2 = 1/2[(2s) + (2px) - (2py) - (2pz)] y3 = 1/2[(2s) - (2px) + (2py) - (2pz)] y4 = 1/2[(2s) - (2px) - (2py) + (2pz)]
Hybridization is related to the number of valence electron pairs determined from VSEPR: Methane (CH4) VSEPR: AB4 tetrahedral sp3 hybridized 109.47 º Electron pair geometry determines hybridization, not vice versa!!
Hybridization is related to the number of valence electron pairs determined from VSEPR: Ammonia (NH3) VSEPR: AB3E tetrahedral sp3 hybridized N H H H 108.1 º
Hybridization is related to the number of valence electron pairs determined from VSEPR: Water (H2O) VSEPR: AB2E2 tetrahedral sp3 hybridized 105.6 º
sbonding and pbonding • Two modes of bonding are important for • 1st and 2nd row elements: s bonding and p bonding • These two differ in their relationship to the internuclear axis: • s bonds have electron density ALONG the axis • p bonds have electron density ABOVE AND BELOW the axis
Problem: Describe the hybridization and bonding of the carbon orbitals in ethylene (C2H4) VSEPR: AB3 trigonal planar sp2 hybridized orbitals for s bonding sp2 hybridized orbitals used for s bonding remaining p orbital used for p bonding
Problem: Describe the hybridization and bonding of the carbon orbitals in Carbon Dioxide (CO2) • VSEPR: AB2 • linear • sp hybridized orbitals for s bonding
H H C2 C1 N : N CH3 C H p p sp sp sp sp3 Atoms of the same kind can have different hybridizations Acetonitrile (important solvent and industrial chemical) Bonds s C2: AB4 C1: AB2 2s2 2px2py s p p N: ABE p sp p sp 2s2 2px2py2pz lone pair
What have we learned so far? • Molecular orbitals are combinations of atomic orbitals • Atomic orbitals are “hybridized” to satisfy bonding in molecules • Hybridizationfollows simple rules that can be deduced from the number of chemical bonds in the molecule and the VSEPR model for electron pair geometry
Hybridization • sp3 Hybridization (CH4) • This is the sum of one s and three p orbitals on the carbon atom • We use just the valence orbitals to make bonds • sp3 hybridization gives rise to the tetrahedral nature of the carbon atom
Hybridization • sp2 Hybridization (H2C=CH2) • This is the sum of one s and two p orbitals on the carbon atom • Leaves one p orbital uninvolved – this is free to form a p bond (the second bond in a double bond) | https://www.slideserve.com/lorna/what-do-molecules-look-like | 21 |
21 | Toothed whales and dolphins possess a hypertrophied auditory system that allows for the production and hearing of ultrasonic signals. Although the fossil record provides information on the evolution of the auditory structures found in extant odontocetes, it cannot provide information on the evolutionary pressures leading to the hypertrophied auditory system. Investigating the effect of hearing loss may provide evidence for the reason for the development of high-frequency hearing in echolocating animals by demonstrating how high-frequency hearing assists in the functioning echolocation system. The discrimination abilities of a false killer whale (Pseudorca crassidens) were measured prior to and after documented high-frequency hearing loss. In 1992, the subject had good hearing and could hear at frequencies up to 100 kHz. In 2008, the subject had lost hearing at frequencies above 40 kHz. First in 1992, and then again in 2008, the subject performed an identical echolocation task, discriminating between machined hollow aluminum cylinder targets of differing wall thickness. Performances were recorded for individual target differences and compared between both experimental years. Performances on individual targets dropped between 1992 and 2008, with a maximum performance reduction of 36.1%. These data indicate that, with a loss in high-frequency hearing, there was a concomitant reduction in echolocation discrimination ability, and suggest that the development of a hypertrophied auditory system capable of hearing at ultrasonic frequencies evolved in response to pressures for fine-scale echolocation discrimination.
Toothed whales and dolphins possess many adaptations that make them well suited to an aquatic lifestyle, including the use of echolocation and the development of high-frequency hearing. Echolocation, or biosonar, independently evolved aerially in bats and aquatically in cetaceans, and both groups developed high-frequency hearing (Busnel and Fish, 1980). The odontocete echolocation system is composed of two parts: the sound generation system and the auditory reception system. To generate sounds, odontocetes produce short, ultrasonic signals that are generated in the nasal complex and focused into a directional signal within the melon (Aroyan et al., 2000; Cranford, 2000). The sounds are received via an auditory system characterized by directional, high-frequency hearing (Renaud and Popper, 1975). Hearing abilities may vary considerably in frequency according to species, and some species may perceive signals of 150 kHz or higher (Kastelein et al., 2002; Nachtigall et al., 1995; Nachtigall et al., 2008). The auditory system of odontocetes is therefore hypertrophied compared with that of most mammals and is characterized by a large auditory nerve, a large volume of nerve fibers in the inner ear and high ganglion cell counts (Ketten and Warzok, 1990).
Underwater auditory structures presumably first evolved for low-frequency, non-directional hearing (Gingerich et al., 1983; Luo, 1998). Over time, fine-detailed echolocation discrimination would have required processes to emerge that allowed for the reception of increased frequency and directionality. The shift from hearing on land to hearing in water is explained by the need of the animal to sense its ambient environment, but the very high-frequency hearing ranges of extant odontocetes suggest additional evolutionary pressures beyond passive hearing (Nummela et al., 2004; Thewissen and Hussain, 1993). It is reasonable to assume that high-frequency hearing has a value in echolocation, yet it is difficult to interpret this strictly from the fossil record. It has been hypothesized that higher frequencies result in the ability to resolve finer details in echolocation targets (Au, 1993), but it is unknown whether increased echolocation ability is truly linked with high-frequency hearing capabilities.
The underwater use of echolocation has intrigued investigators for over 50 years (e.g. Norris, 1968) and, given the excellent performance of dolphins and small whales in comparison to technological sonar, there have been many attempts to mimic and model biosonar (Busnel, 1966; Nachtigall and Moore, 1988; Moore et al., 1991; Roitblat et al., 1995). Much of this effort has been based on the demonstrated ability of odontocetes to differentiate small differences between arbitrarily constructed echolocation targets, including differences in target size, shape and materials from which they are constructed (Nachtigall, 1980). The full suite of cues that odontocetes use to discriminate targets is unknown, but it is thought that they may use small differences in the complex structure of target echoes to differentiate targets (Branstetter et al., 2007; Gaunaurd et al., 1998; Muller et al., 2008). The ability of cetaceans to differentiate and recognize the acoustic characteristics of objects using echolocation has an obvious biological benefit. Echolocation appears to be predominantly a foraging tool, but may also have a role in navigation and the avoidance of hazards and predators, especially in low-light or turbid conditions (Norris, 1968; Tyack and Clark, 2000). Conducting a controlled echolocation study in the wild poses many challenges. The use of arbitrarily constructed echolocation targets in the laboratory allows the control and measurement of acoustic cues and fine measurements of discrimination thresholds.
The linkage of high-frequency hearing and echolocation discrimination performance has not previously been examined empirically. Recent evidence of age-related high-frequency hearing loss (presbycusis) in a false killer whale (Pseudorca crassidens) from previous echolocation discrimination experiments provided a model system for investigating the link between high-frequency hearing and echolocation discrimination performance. Prior audiometric studies on the animal used in these experiments indicate that the whale previously had good hearing up to 100 kHz (Thomas et al., 1990). Recent audiogram analyses show a reduction in frequency hearing capabilities of approximately 60 kHz over 15 years (Yuen et al., 2005). The echolocation discrimination abilities of this whale were previously quantified, but unpublished, 15 years ago. The comparison of echolocation performance before and after high-frequency hearing loss allows for direct testing of target-discrimination abilities and a comparison between periods of good and reduced high-frequency hearing. This provides the opportunity to investigate whether an increased ability to discriminate targets is linked with high-frequency hearing capabilities and whether a reduction in high-frequency hearing results in poorer discrimination performance.
Here, we report the wall-thickness discrimination abilities of a female false killer whale at two different time periods: in 1992, when the subject had good high-frequency hearing, and in 2008, after the subject had lost a significant portion of her high-frequency hearing. The results show that a reduction in high-frequency hearing causes a reduction in echolocation discrimination performance. We discuss the possible mechanisms for this loss in performance and provide a hypothesis for the role of high-frequency hearing in echolocation.
MATERIALS AND METHODS
Experimental subject and equipment
Two experiments were completed: one in 1992 and one in 2008. Both experiments were conducted in floating pens in Kaneohe Bay, Oahu, Hawaii, USA, with a female false killer whale [Pseudorca crassidens (Owen 1846)] named Kina. The exact age of the whale is unknown, although she was brought to Hawaii as an adult in 1987 and has been used extensively in published echolocation work (e.g. Supin et al., 2008). In 1992, the subject measured 3.6 m and weighed 389 kg. In 2008, the subject measured 3.9 m and weighed 453 kg.
The setup for both experiments was nearly identical. Although the pen configuration was moved between experiments, both were conducted within Kaneohe Bay (Fig. 1A). During each experiment, the subject remained stationary to examine echolocation targets by placing her head within a hoop that was 1 m below the water surface. An underwater camera (SCS Enterprises, Montebello, NY, USA) was used to monitor her positioning behavior within the hoop station. An acoustically opaque screen was placed in front of the subject to prevent her from echolocating prematurely on the targets.
For each target discrimination task, two sets of targets were used. Each set consisted of one standard target and six or seven comparison targets. The standard target was a hollow aluminum cylinder 12.7 cm long with an outer diameter of 37.85 mm, an inner diameter of 25.15 mm and a wall thickness of 6.35 mm. The cylinders were hollow so that they filled with water when submerged. Comparison targets had the same length and outer diameter as the standard target but differed in inner wall thickness by ±0.076, ±0.152, ±0.229, ±0.305, ±0.406 and ±0.813 mm. Thus, the experiment utilized a set of targets thicker than the standard and a set of targets thinner than the standard. In 2008, an additional target of ±0.203 mm was used, and in 1992 a comparison target of ±1.600 mm was initially used, but detailed results are not reported here owing to consistently perfect performance.
Targets were hung from a monofilament line 8 m away from the subject at a depth of 1m using a target deployer to avoid cueing effects. The target deployer was a V-shaped device that allowed the experimenter to control target deployment remotely from the experimental shack. The deployer held up to four targets at one time and allowed each target to be lowered at the same location and to the same depth in the water (Fig. 1B).
Prior to the start of a trial, the subject remained stationed on a vertically placed pad on the side of the pen near the trainer. When cued, she swam into a hoop up to her pectoral flippers to remain stationary for the trial. A target was placed into the water using the deployer, and the acoustic baffle was lowered, providing the subject with acoustic access to the target. The subject ensonified the target and was trained to provide a response to the standard target that differed from the response required for all other objects. If the target was a standard target (a ‘go’), the subject backed out of the hoop and touched a response paddle with her rostrum. If the target was a comparison target (a ‘no go’), the subject remained in the hoop until signaled out by the trainer. The subject was rewarded with fish for correct responses. Incorrect responses resulted in no fish reward. Thus, the general form of the procedure was a ‘go’/‘no-go’ response paradigm (Schusterman, 1980).
Each session consisted of 50 experimental trials. Target presentation order was determined using a modified method of constants, and each session was broken into five 10-trial blocks. During each block, the subject discriminated between the standard target and one comparison target. A modified Gellermann series (Gellermann, 1933), with no more than three types of trials in a row, was used for target presentation order. Each block contained an equal number of standard and comparison targets.
Both experiments were broken down into cycles. Each cycle consisted of multiple blocks of trials for each comparison target within one target set (thicker only or thinner only). Initially, each cycle was composed of 10–15 blocks (100–150 trials) but, as the experiment progressed and the animal gained more experience with the task, the cycles were reduced to 5–10 blocks (50–100 trials) for each comparison target. After a cycle was completed, the target sets were switched so that the subject alternated between cycles of thinner targets only or thicker targets only.
In 1992, a total of 3200 experimental trials was conducted: four cycles with the thicker target set and three cycles with the thinner target set. The first 10-trial block of each session always consisted of the standard plus the comparison target of the largest wall-thickness difference (±1.600 mm) as a warm-up for the subject prior to testing. To control for possible cueing, 20 sessions were run with a segment of blind controls, 10 sessions were run with a different experimenter and two sessions were run with modified protocols.
Blind controls consisted of a target with dimensions identical to the standard target being substituted for a comparison target. The substitution was unknown to the experimenter, who rewarded the subject as if the blind control were a comparison target. If the subject perceived the blind control to be the same as the standard, she responded as a ‘go’ and received no reward. Thus, perfect performance on blind control trials resulted from the subject choosing to ‘go’ on all trials, which would appear to the experimenter as 50% performance. Blind controls were conducted to ensure the subject used wall thickness to conduct discrimination and not some other characteristic specific to the standard target.
Modified protocols consisted of different placement of the stationing pad and response paddle. Conducting sessions with modified protocols and conducting sessions with a different experimenter ensured that the subject was not cueing off anything other than the target itself when performing the discrimination task.
In 2008, a total of 3640 experimental trials were conducted: four cycles with the thicker target set and three cycles with the thinner target set. The first 10-trial block of each session was a warm-up for the subject, consisting of the standard plus the largest comparison target (±0.813 mm). The comparison targets used in the remaining blocks were randomized, and a two-trial cool-down of the standard plus the largest comparison target (±0.813 mm) was included. To test for possible cueing, four sessions were run with a segment of blind controls, different experimenters were used and 20 sessions were run with modified protocols.
Performance on each target was recorded as the percentage of correct responses that the subject achieved in each block of trials. Performances were calculated from binomial data (correct or incorrect) and averaged for all 10 trials in one block. Targets were normalized by reporting the comparison targets as the absolute difference between the comparison targets and the standard target.
Analysis of the subject's performance was conducted using generalized linear models (GLMs) (glmfit; Matlab, MathWorks, Natick, MA, USA) to model the probability of a correct response as a function of wall thickness, target set (thicker versus thinner), experimental year and cycle. The performance of the subject was calculated from binomial data (correct or incorrect) and a logit link function was used for analysis. Target set, experimental year and cycle were treated as categorical variables for analysis. Akaike's information criterion (AIC) was used to select the best-fitting model. Pairwise t-tests were conducted to test the significance between performances on individual target types.
Echolocation discrimination performance
To ensure that the subject was discriminating wall thickness, and not characteristics specific to the standard target, blind controls were conducted in which the subject discriminated between two standard targets. These trials averaged 54.0% performance, which was not statistically (t19=1.57, P=0.07, N=40) different from the expected performance for blind control trials of chance, or 50.0%.
Comparison of models using AIC showed that a model that included wall thickness, experimental year, target set and cycle was the best fit for predicting performance (Table 1). The GLM output of the best-fitting model is shown in Table 2. Both target wall thickness and experimental year were significant predictors of performance (wall thickness, P<0.001, N=606; experimental year, P<0.001, N=606) but target set and cycle did not predict performance (Table 2). For all combinations of experimental year, target set and cycle, the subject performed better as the difference between the wall thickness of the standard target and the comparison target increased. For table purposes only, performances were averaged for each cycle and wall thickness and were grouped according to target set and experimental year. Discrimination performance was initially high in the 1992 experiment, with the subject performing well above 75.0% for the majority of the cycles (Table 3). Performance was lower on targets in the 2008 data set, with performance below 75.0% on about half the comparison targets.
Although AIC predicts cycle to be an important parameter in the model, performances on individual target wall thicknesses did not significantly change across all cycles. The only significant change in performance was between the 1st and 2nd cycles for the 0.229 mm target in the thinner target set for 1992 (t18=6.90, P=0.001, N=21) and the 1st and 2nd cycles for the 0.076 mm target and the 1st and 3rd cycles for the 0.305 mm target in the thicker target set for 2008 (t10=3.16, P=0.01, N=12; t8=4.63, P=0.002, N=10). As most of the performances did not exhibit a significant change according to cycle, performances on all cycles were pooled for further analysis. A comparison of the average performance for each target wall thickness and target set shows a significant reduction in performance between the 1992 and 2008 experiments for most targets (Fig. 2). Performance decreased from 96.9 to 63.9%, with the biggest reduction in performance occurring for targets with a small difference in wall thickness between standard and comparison targets.
Loss of high-frequency hearing
Previous studies (using audiograms and masked hearing thresholds) that document hearing loss with the subject are presented in Fig. 3. Unfortunately, no corresponding audiogram is available for the subject for 1988, but masked hearing thresholds were collected and published. In 1988, the subject's hearing thresholds were measured while in the presence of 75 dB of masking noise (Thomas et al., 1990). The audiogram collected in 2004 using evoked potential methods was conducted in Kaneohe Bay, with no additional masking noise (Yuen et al., 2005). Absolute hearing thresholds for audiograms are typically conducted in the presence of little to no background noise, so it is difficult to extrapolate absolute hearing thresholds from the masked hearing data obtained in 1988. Although direct comparisons between the two studies cannot be made, information on hearing abilities can be extrapolated from the data. In 1988, the subject demonstrated a sharp rise in threshold, or a reduction in hearing sensitivity, at frequencies above 100 kHz. In 2004, this rise in threshold occurred at 34 kHz. Both studies were conducted in the presence of the ambient noise of Kaneohe Bay. At frequencies above 32 kHz, the subject had substantially lower threshold values, or better hearing sensitivities, in 1998 than in 2004. Even in the presence of added background masking noise, the subject heard better at higher frequencies in 1988 than in 2004.
The loss in wall-thickness discrimination performance between the 1992 and 2008 experiments means that the subject lost some level of fine-scale discrimination ability. Performance for all targets was worse in 2008 than in 1992, with reductions in performance of up to ∼36% (Fig. 2). However, for most of the targets, the subject demonstrated a slight improvement in performance over time, although this change was not significant owing to high variance levels (Table 3). Learning is thought to be the mechanism behind this improvement; through repeated exposure to the standard and comparison targets, the subject solidifies the internal representation of the stimulus and gains more experience with the task. The improvement is most striking for targets with intermediate wall thickness in each set; in the 2008 thinner target data set, performance increased from 73.3 to 86.0% on the 0.229 mm target and from 51.3 to 75.0% on the 0.203 mm target (Table 3). Conversely, the subject showed little improvement at targets with a wall thickness closest to that of the standard target. This lack of improvement in performance with targets below the threshold verifies that the subject was not able to identify the standard target using features other than its wall-thickness signature in the returning echo. If the subject was using a feature other than wall thickness during discrimination, we would expect an increase in performance over time for all comparison targets. That is, using a unique identifier of the standard target other than wall thickness, the subject would have correctly responded to all comparison targets by using the absence of the cue that distinguished the standard from all other targets as an alternative to wall thickness for discriminating the two classes of targets (standard and non-standard).
The subject continued to perform slightly better on targets with intermediate wall thickness, but did not significantly improve on targets with wall thicknesses similar to that of the standard target. Thus, thresholds did not significantly improve over time. This indicates that the subject would not achieve better discrimination performances with more practice and that we were testing the true discrimination abilities of the subject.
During the time period between the two experiments, the subject also demonstrated a sharp reduction in hearing frequency capabilities (Fig. 3). Hearing loss has been previously documented in marine mammals (Ridgway and Carder, 1997; Houser and Finneran, 2006) and can be the result of acoustic trauma, ototoxic drug exposure or presbycusis. Excessive noise or acoustic impulse may result in damage to tissues of the inner ear, and acoustic trauma has been suggested as a factor leading to stranding events with odontocetes (Evans and England, 2001). Ototoxic drugs such as aminoglycosidic antibiotics cause damage to the hair cells of the cochlea and can result in hearing loss (Aran et al., 1999; Finneran et al., 2005). Presbycusis is the most common cause of hearing loss in older mammals. Over time, degeneration of hair cells in the cochlea results in a gradual inability to hear at high frequencies. In odontocetes, presbycusis has been best studied in bottlenose dolphins (Tursiops truncatus). Typically, presbycusis begins to occur around age 20–30 for T. truncatus, with males experiencing presbycusis earlier than females (Houser et al., 2008). In one case, a male T. truncatus had good high-frequency hearing at age 13 but, by age 26, had lost the ability to hear at frequencies above 60 kHz (Ridgway and Carder, 1997). The hearing loss demonstrated by the subject in the current experiment is most likely the result of presbycusis. Since 1988, the subject has been under veterinary care with no ototoxic drug exposure. Additionally, the subject has not been exposed to acoustic trauma that would result in acute damage to the inner ear. Thus, presbycusis is the most logical explanation for the hearing loss. The subject is located in Kaneohe Bay, an acoustic environment that is dominated by snapping shrimp noise and is considered to be one of the world's noisiest underwater environments (Albers, 1965). Presbycusis is often accelerated with extended exposure to high levels of background noise, so the environment of the test subject may have contributed to her presbycusis.
Although the subject's documented high-frequency hearing loss occurred in concert with the reduction in discrimination abilities, is this the only possible explanation for the change in performance? The subject probably relies on multiple cues at many frequencies from the target echo to conduct discrimination. Although there may be many cues that odontocetes use for target discrimination (Branstetter et al., 2007; Gaunaurd et al., 1998; Muller et al., 2008), the use of high frequencies is very likely to be the most important, and the link between high-frequency hearing loss and a reduction in discrimination performance cannot be ignored. These data strongly suggest that the high-frequency component of echoes provides a great deal of information for fine-scale discrimination
Part of the explanation for a reduction in discrimination abilities may also be attributable to the second part of the odontocete sonar system: the click production mechanism. High-frequency echolocation clicks may provide the opportunity for better target resolution than low-frequency clicks (Au, 1993), and recent analyses indicate that the subject no longer utilizes high-frequency clicks during echolocation (Supin et al., 2008). Previous data demonstrate that, in 1992, the subject used echolocation clicks with most peak frequencies between 40 and 104 kHz (Au et al., 1995). Masked hearing thresholds indicate that the subject had relatively good hearing in this frequency range during this time (Thomas et al., 1990). Current work shows that the subject uses echolocation clicks with peak frequencies between 27 and 32 kHz (Supin et al., 2008). Because the subject currently cannot hear well at frequencies above 34 kHz, it would be disadvantageous for her to produce clicks with frequencies outside this range. However, since lower frequency clicks are presumed to result in poorer target-resolution capabilities, the shift to producing clicks in range of best hearing would naturally result in poorer echolocation abilities. The subject currently emits clicks with peak frequencies in the upper range of good hearing, which may indicate a strategy of producing clicks with the highest audible frequencies possible to maximize target-resolution capabilities.
Another explanation for the reduction in target discrimination capabilities may result from the bandwidth of the echolocation signals. During echolocation, the spectral content of the returning echoes is one of the main cues used in target discrimination (Hammer and Au, 1980). The production and perception of large bandwidth signals utilized by delphinid odontocetes may provide the opportunity for greater target information. Empirical studies with both broad-band dolphin-like clicks and narrow-band porpoise-like clicks show that broad-band signals provide more echo highlights of prey, and thus greater range resolution, than narrow-band signals (Au et al., 2007). Measurements of echolocation signals from wild P. crassidens show –3 dB bandwidths between 15 and 76 kHz, with an average bandwidth of 35 kHz (Madsen et al., 2004). These animals also produced echolocation clicks with a mean peak frequency of 40 kHz, ranging from 26 to 79 kHz. Previous echolocation studies with our experimental subject do not report bandwidth measurements (Au et al., 1995), so a direct comparison is not possible. The reduction in peak frequency, however, suggests that our subject currently produces clicks with bandwidths smaller than those reported from free-ranging P. crassidens (Madsen et al., 2004). Even if our subject still produced clicks with wide bandwidths, if she cannot hear frequencies within the signals, a large proportion of her bandwidth is nonfunctional. In essence, a reduction in high-frequency hearing can result in an auditory processing mechanism more similar to narrow-band signals than broad-band signals. Thus, a reduction in hearing can result in a reduction in discrimination and range resolution of prey.
This shift in frequency content of outgoing clicks is likely to be a constant gradual process. Over time, as an animal loses the ability to hear at certain frequencies, those frequencies are dropped out of its outgoing signal. Any frequencies produced outside the range of hearing may simply be artifacts of the click-production mechanism. Continued measurements of hearing, echolocation click parameters and discrimination abilities may show temporal trends in the linkage between hearing sensitivities, echolocation frequency content and echolocation discrimination (Ibsen et al., 2009).
The data from the present study demonstrate that high-frequency hearing is beneficial to the process of discriminating the fine details of echolocation targets. The loss of high-frequency hearing resulted in a decrement in the ability to distinguish fine-scale differences in echolocation targets. Given that both echolocating bats and odontocetes developed the ability to hear high frequencies as they evolved, it seems apparent that one of the primary reasons for the development of high-frequency hearing is for the discrimination of fine detail during echolocation.
This research project was supported by the Office of Naval Research (Grant NOO14-08-1-1160 to P.E.N.), for which the authors thank Jim Eckman and Neil Abercrombie. Work was approved under the University of Hawaii Institutional Animal Care Committee Protocol No. 93-005-15. The authors also thank all the members of the Marine Mammal Research Program group, including Alexander Supin and Whitlow Au, for their continuous assistance; and Tom Lane at MathWorks for programming and statistical assistance. This is contribution no. 1409 from the Hawaii Institute of Marine Biology. | https://journals.biologists.com/jeb/article/213/21/3717/9986/Decreased-echolocation-performance-following-high | 21 |
'India' was the largest economy in the world for most of the roughly three millennia from the 1st millennium BCE to the beginning of British rule in India.
Around 600 BCE, the Mahajanapadas minted punch-marked silver coins. The period was marked by intensive trade activity and urban development. By 300 BCE, the Maurya Empire had united most of the Indian subcontinent except Tamilakam, which was ruled by the Three Crowned Kings. Tamilakam, under the Three Crowned Kings, had been minting gold coins for several millennia by this period. The resulting political unity and military security allowed for a common economic system and enhanced trade and commerce, with increased agricultural productivity.
The Maurya Empire was followed by classical and early medieval kingdoms, including the Cholas, Pandyas, Cheras, Guptas, Western Gangas, Harsha, Palas, Rashtrakutas and Hoysalas. The Indian subcontinent had the largest economy of any region in the world for most of the interval between the 1st century and 18th century. Up until 1000 CE, its GDP per capita was not much higher than subsistence level.
India experienced per-capita GDP growth in the high medieval era after 1000 CE, during the Delhi Sultanate in the north and the Vijayanagara Empire in the south, but was not as productive as Ming China until the 16th century. By the late 17th century, most of the Indian subcontinent had been reunited under the Mughal Empire, which became the largest economy and manufacturing power in the world, producing about a quarter of global GDP, before fragmenting and being conquered over the next century. Bengal Subah, the empire's wealthiest province, which alone accounted for 40% of Dutch imports from Asia, had advanced, productive agriculture, textile manufacturing and shipbuilding in a period of proto-industrialization.
By the 18th century, the Mysoreans had embarked on an ambitious economic development program that established the Kingdom of Mysore as a major economic power. Sivramkrishna, analyzing agricultural surveys conducted in Mysore by Francis Buchanan in 1800–1801, used a "subsistence basket" measure to estimate that aggregate millet income could be almost five times subsistence level. The Maratha Empire also managed an effective administration and tax collection policy throughout the core areas under its control and extracted chauth from vassal states.
India experienced deindustrialisation and cessation of various craft industries under British rule, which along with fast economic and population growth in the Western world, resulted in India's share of the world economy declining from 24.4% in 1700 to 4.2% in 1950, and its share of global industrial output declining from 25% in 1750 to 2% in 1900. Due to its ancient history as a trading zone and later its colonial status, colonial India remained economically integrated with the world, with high levels of trade, investment and migration.
The Republic of India, founded in 1947, adopted central planning for most of its independent history, with extensive public ownership, regulation, red tape and trade barriers. After the 1991 economic crisis, the central government began a policy of economic liberalisation. While this has made India one of the world's fastest-growing large economies, it has come at the cost of deepening income inequality.
The Indus Valley Civilisation, the first known permanent and predominantly urban settlement, flourished between 3500 BCE and 1800 BCE. It featured an advanced and thriving economic system. Its citizens practised agriculture, domesticated animals, made sharp tools and weapons from copper, bronze and tin, and traded with other cities. Evidence of well-laid streets, drainage systems and water supply in the valley's major cities, Dholavira, Harappa, Lothal, Mohenjo-daro and Rakhigarhi, reveals their knowledge of urban planning.
Although ancient India had a significant urban population, much of India's population resided in villages, whose economies were largely isolated and self-sustaining. Agriculture was the predominant occupation and satisfied a village's food requirements while providing raw materials for hand-based industries such as textile, food processing and crafts. Besides farmers, people worked as barbers, carpenters, doctors (Ayurvedic practitioners), goldsmiths and weavers.
Religion played an influential role in shaping economic activities. Pilgrimage towns like Prayagraj, Benares, Nasik and Puri, mostly centred around rivers, developed into centres of trade and commerce. Religious functions, festivals and the practice of taking a pilgrimage resulted in an early version of the hospitality industry.
Economics in Jainism is influenced by Mahavira and his philosophy. He was the last of the 24 Tirthankars, who spread Jainism. Relating to economics, he emphasised the importance of the concept of 'anekanta' (non-absolutism).
In the joint family system, members of a family pooled their resources to maintain the family and invest in business ventures. The system ensured that younger members were trained and employed and that older and disabled members would be supported. It also prevented agricultural land from being split with each generation, so that yields benefited from economies of scale. Such arrangements curbed rivalry among junior members and instilled a sense of obedience.
Along with the family- and individually-owned businesses, ancient India possessed other forms of engaging in collective activity, including the gana, pani, puga, vrata, sangha, nigama and Shreni. Nigama, pani and Shreni refer most often to economic organisations of merchants, craftspeople and artisans, and perhaps even para-military entities. In particular, the Shreni shared many similarities with modern corporations, which were used in India from around the 8th century BCE until around the 10th century CE. The use of such entities in ancient India was widespread, including in virtually every kind of business, political and municipal activity.
The Shreni was a separate legal entity that had the ability to hold property separately from its owners, construct its own rules for governing the behaviour of its members and for it to contract, sue and be sued in its own name. Ancient sources such as Laws of Manu VIII and Chanakya's Arthashastra provided rules for lawsuits between two or more Shreni and some sources make reference to a government official (Bhandagarika) who worked as an arbitrator for disputes amongst Shreni from at least the 6th century BCE onwards. Between 18 and 150 Shreni at various times in ancient India covered both trading and craft activities. This level of specialisation is indicative of a developed economy in which the Shreni played a critical role. Some Shreni had over 1,000 members.
The Shreni had a considerable degree of centralised management. The headman of the Shreni represented the interests of the Shreni in the king's court and in many business matters. The headman could bind the Shreni in contracts, set work conditions, often received higher compensation and was the administrative authority. The headman was often selected via an election by the members of the Shreni, and could also be removed from power by the general assembly. The headman often ran the enterprise with two to five executive officers, also elected by the assembly.
Punch-marked silver ingots were in circulation around the 5th century BCE. They were the first metallic coins, minted around the 6th century BCE by the Mahajanapadas of the Gangetic plains, and are India's earliest traces of coinage. While India's many kingdoms and rulers issued coins, barter was still widely prevalent. Villages paid a portion of their crops as revenue while their craftsmen received a stipend out of the crops for their services. Each village was mostly self-sufficient.
During the Maurya Empire (c. 321–185 BCE), important changes and developments affected the Indian economy. It was the first time most of India was unified under one ruler. With an empire in place, trade routes became more secure. The empire spent considerable resources building and maintaining roads. The improved infrastructure, combined with increased security, greater uniformity in measurements, and increasing usage of coins as currency, enhanced trade.
Maritime trade was carried out extensively between South India and Southeast and West Asia from early times until around the fourteenth century CE. Both the Malabar and Coromandel Coasts were the sites of important trading centres from as early as the first century BCE, used for import and export as well as transit points between the Mediterranean region and southeast Asia. Over time, traders organised themselves into associations which received state patronage. Historians Tapan Raychaudhuri and Irfan Habib claim this state patronage for overseas trade came to an end by the thirteenth century CE, when it was largely taken over by the local Parsi, Jewish, Syrian Christian and Muslim communities, initially on the Malabar and subsequently on the Coromandel coast.
Other scholars suggest trading from India to West Asia and Eastern Europe was active between the 14th and 18th centuries. During this period, Indian traders settled in Surakhani, a suburb of greater Baku, Azerbaijan. These traders built a Hindu temple, which suggests commerce was active and prosperous for Indians by the 17th century.
Further north, the Saurashtra and Bengal coasts played an important role in maritime trade, and the Gangetic plains and the Indus valley housed several centres of river-borne commerce. Most overland trade was carried out via the Khyber Pass connecting the Punjab region with Afghanistan and onward to the Middle East and Central Asia. Although many kingdoms and rulers issued coins, barter was prevalent. Villages paid a portion of their agricultural produce as revenue to the rulers, while their craftsmen received a part of the crops at harvest time for their services.
Before and during the Delhi Sultanate (1206–1526 CE), Islam underlay a cosmopolitan civilization. It offered wide-ranging international networks, including social and economic networks. They spanned large parts of Afro-Eurasia, leading to escalating circulation of goods, peoples, technologies and ideas. While initially disruptive, the Delhi Sultanate was responsible for integrating the Indian subcontinent into a growing world system.
The period coincided with a greater use of mechanical technology in the Indian subcontinent. From the 13th century onwards, India began widely adopting mechanical technologies from the Islamic world, including water-raising wheels with gears and pulleys, machines with cams and cranks, papermaking technology, and the spinning wheel. The worm gear roller cotton gin was invented in the Indian subcontinent during the 13th–14th centuries, and is still used in India through to the present day. The incorporation of the crank handle in the cotton gin first appeared in the Indian subcontinent some time during the late Delhi Sultanate or the early Mughal Empire. The production of cotton, which may have largely been spun in the villages and then taken to towns in the form of yarn to be woven into cloth textiles, was advanced by the diffusion of the spinning wheel across India during the Delhi Sultanate era, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel, and the incorporation of the worm gear and crank handle into the roller cotton gin, led to greatly expanded Indian cotton textile production.
India's GDP per capita was lower than the Middle East from 1 CE (16% lower) to 1000 CE (about 40% lower), but by the late Delhi Sultanate era in 1500, India's GDP per capita approached that of the Middle East.
According to economic historian Angus Maddison in Contours of the World Economy, 1–2030 AD: Essays in Macro-Economic History, the Indian subcontinent was the world's most productive region from 1 CE to 1600.
| Year | GDP (PPP) | GDP per capita | Avg % GDP growth | % of world GDP (PPP) | Population | % of world population | Period |
|---|---|---|---|---|---|---|---|
| 1000 | 33,750,000,000 | 450 | 0.0 | 28.0 | 72,500,000 | 27.15 | Early medieval era |
| 1500 | 60,500,000,000 | 550 | 0.117 | 24.35 | 79,000,000 | 18.0 | Late medieval era |
| 1600 | 74,250,000,000 | 550 (other estimates: 782, 682, 758) | 0.205 | 22.39 | 100,000,000 | 17.98 | Early modern era |
| 1950 | 222,222,000,000 | 619 | -1.794 | 4.17 | 359,000,000 | 14.11 | Republic of India |
Stephen Broadberry, Johann Custodis and Bishnupriya Gupta, writing in 2014, offered comparative year-by-year estimates of per-capita GDP (in dollars) and population (in millions) for India and the UK, together with the India-to-UK income ratio (%).
Karl Marx, writing in 1857, suggested that the nominal (silver) per-capita income of Company India in 1854 was approximately one-twelfth that of the UK, and that the nominal per-capita tax burden was likewise about one-twelfth that of the UK, one-tenth that of France and one-fifth that of Prussia. This helps explain why the East India Company administration was perpetually running local deficits and needed to borrow money in India to fund the administration.
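Read together, the quoted ratios carry a simple arithmetic implication (a rough sketch, treating the 1:12 figures as exact and as applying to the same year): if both per-capita income and per-capita tax in Company India stood at one-twelfth of the British level, then the effective tax rate, tax as a share of income, was roughly the same in the two countries, even though the absolute revenue yielded per head was only a twelfth as large.

$$\frac{t_{\mathrm{India}}/y_{\mathrm{India}}}{t_{\mathrm{UK}}/y_{\mathrm{UK}}} = \frac{t_{\mathrm{UK}}/12}{y_{\mathrm{UK}}/12}\cdot\frac{y_{\mathrm{UK}}}{t_{\mathrm{UK}}} = 1,$$

where $t$ denotes per-capita tax and $y$ per-capita income.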
Economic historians such as Prasannan Parthasarathi have criticized these estimates, arguing that primary sources show real (grain) wages in 18th-century Bengal and Mysore were comparable to those in Britain. According to evidence cited by Immanuel Wallerstein, Irfan Habib, Percival Spear and Ashok Desai, per-capita agricultural output and standards of consumption in 17th-century Mughal India were higher than in 17th-century Europe and early 20th-century British India. Sivramkrishna, analyzing agricultural surveys conducted in Mysore by Francis Buchanan in 1800–1801, used a "subsistence basket" measure to estimate that aggregate millet income could be almost five times subsistence level, with the corresponding rice income about three times that figure, which could be comparable to the advanced parts of Europe. However, given the scarcity of data, more research is needed before drawing firm conclusions. Shireen Moosvi estimates that Mughal India had a per-capita income 1.24% higher in the late 16th century than British India had in the early 20th century, although the difference would be smaller if increasing purchasing power in terms of manufactured goods were taken into account. She also estimates that the secondary sector contributed a higher percentage to the Mughal economy (18.2%) than it did to the economy of early 20th-century British India (11.2%).
According to economic historian Paul Bairoch, India as well as China had a higher GDP (PPP) per capita than Europe in 1750. For 1750, Bairoch estimated the GNP per capita for the Western world to be $182 in 1960 US dollars ($804 in 1990 dollars) and for the non-Western world to be $188 in 1960 dollars ($830 in 1990 dollars), exceeded by both China and India. Other estimates he gives include $150–190 for England in 1700 and $160–210 for India in 1800. Bairoch estimated that it was only after 1800 that Western European per-capita income pulled ahead. Others such as Andre Gunder Frank, Robert A. Denemark, Kenneth Pomeranz and Amiya Kumar Bagchi also criticised estimates that showed low per-capita income and GDP growth rates in Asia (especially China and India) prior to the 19th century, pointing to later research that found significantly higher per-capita income and growth rates in China and India during that period.
The economy of India under the Mughal Empire, the Maratha Empire and other states remained prosperous into the early 18th century. Parthasarathi estimated that 28,000 tonnes of bullion (mainly from the New World) flowed into the Indian subcontinent between 1600 and 1800, equating to 30% of the world's production in that period.
One estimate puts the annual income of Emperor Akbar the Great's treasury in 1600 at $90 million (by comparison, the tax take of Great Britain two hundred years later, in 1800, totaled $90 million). In 1600, the South Asian economy was estimated to be the largest in the world, followed by China's.
By the late 17th century, the Mughal Empire was at its peak, having expanded to include almost 90 percent of the Indian subcontinent. It enforced a uniform customs and tax-administration system. In 1700, the exchequer of the Emperor Aurangzeb reported an annual revenue of more than £100 million, or $450 million, more than ten times that of his contemporary Louis XIV of France, while ruling a population only about seven times as large.
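On the figures quoted above, a back-of-the-envelope comparison (a sketch that treats "more than ten times" and "seven times" as exact ratios) suggests the Mughal exchequer collected roughly 40 percent more revenue per subject than the French crown:

$$\frac{R_{\mathrm{Mughal}}/P_{\mathrm{Mughal}}}{R_{\mathrm{France}}/P_{\mathrm{France}}} \gtrsim \frac{10\,R_{\mathrm{France}}}{7\,P_{\mathrm{France}}}\cdot\frac{P_{\mathrm{France}}}{R_{\mathrm{France}}} = \frac{10}{7} \approx 1.4,$$

where $R$ denotes annual revenue and $P$ population.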
By 1700, Mughal India had become the world's largest economy, ahead of Qing China and Western Europe, containing approximately 24.2% of the World's population, and producing about a quarter of world output. Mughal India produced about 25% of global industrial output into the early 18th century. India's GDP growth increased under the Mughal Empire, exceeding growth in the prior 1,500 years. The Mughals were responsible for building an extensive road system, creating a uniform currency, and the unification of the country. The Mughals adopted and standardized the rupee currency introduced by Sur Emperor Sher Shah Suri. The Mughals minted tens of millions of coins, with purity of at least 96%, without debasement until the 1720s. The empire met global demand for Indian agricultural and industrial products.
Cities and towns boomed under the Mughal Empire, which had a relatively high degree of urbanization (15% of its population lived in urban centres), more urban than Europe at the time and British India in the 19th century. Multiple cities had a population between a quarter-million and half-million people, while some including Agra (in Agra Subah) hosted up to 800,000 people and Dhaka (in Bengal Subah) with over 1 million. 64% of the workforce were in the primary sector (including agriculture), while 36% were in the secondary and tertiary sectors. The workforce had a higher percentage in non-primary sectors than Europe at the time; in 1700, 65–90% of Europe's workforce were in agriculture, and in 1750, 65–75% were in agriculture.
Further information: History of agriculture in the Indian subcontinent
Indian agricultural production increased. Food crops included wheat, rice, and barley, while non-food cash crops included cotton, indigo and opium. By the mid-17th century, Indian cultivators had begun to extensively grow two crops from the Americas, maize and tobacco. Bengali peasants learned techniques of mulberry cultivation and sericulture, establishing Bengal Subah as a major silk-producing region. Agriculture was advanced compared to Europe, exemplified by the earlier common use of the seed drill. The Mughal administration emphasized agrarian reform, which began under the non-Mughal Emperor Sher Shah Suri. Akbar adopted this and added more reforms. The Mughal government funded the building of irrigation systems, which produced much higher crop yields and harvests.
One reform introduced by Akbar was a new land revenue system called zabt. He replaced the tribute system with a monetary tax system based on a uniform currency. The revenue system was biased in favour of higher value cash crops such as cotton, indigo, sugar cane, tree-crops, and opium, providing state incentives to grow cash crops, adding to rising market demand. Under the zabt system, the Mughals conducted extensive cadastral surveying to assess the cultivated area. The Mughal state encouraged greater land cultivation by offering tax-free periods to those who brought new land under cultivation.
According to evidence cited by economic historians Immanuel Wallerstein, Irfan Habib, Percival Spear, and Ashok Desai, per-capita agricultural output and standards of consumption in 17th-century Mughal India were higher than in 17th-century Europe and in early 20th-century British India.
Until the 18th century, Mughal India was the most important manufacturing center for international trade. Key industries included textiles, shipbuilding and steel. Processed products included cotton textiles, yarns, thread, silk, jute products, metalware, and foods such as sugar, oils and butter. This growth of manufacturing has been referred to as a form of proto-industrialization, similar to 18th-century Western Europe prior to the Industrial Revolution.
Early modern Europe imported products from Mughal India, particularly cotton textiles, spices, peppers, indigo, silks and saltpeter (for use in munitions). European fashion, for example, became increasingly dependent on Indian textiles and silks. From the late 17th century to the early 18th century, Mughal India accounted for 95% of British imports from Asia, and the Bengal Subah province alone accounted for 40% of Dutch imports from Asia. In contrast, demand for European goods in Mughal India was light. Exports were limited to some woolens, ingots, glassware, mechanical clocks, weapons, particularly blades for Firangi swords, and a few luxury items. The trade imbalance caused Europeans to export large quantities of gold and silver to Mughal India to pay for South Asian imports. Indian goods, especially those from Bengal, were also exported in large quantities to other Asian markets, such as Indonesia and Japan.
The largest manufacturing industry was cotton textile manufacturing, which included the production of piece goods, calicos and muslins, available unbleached and in a variety of colours. The cotton textile industry was responsible for a large part of the empire's international trade. The most important centre of cotton production was the Bengal Subah province, particularly around Dhaka. Bengal alone accounted for more than 50% of textiles and around 80% of silks imported by the Dutch. Bengali silk and cotton textiles were exported in large quantities to Europe, Indonesia, Japan, and Africa, where they formed a significant element in the goods exchanged for slaves and treasure. In Britain, protectionist policies such as the Calico Acts (1685–1774) imposed tariffs on imported Indian textiles.
Mughal India had a large shipbuilding industry, particularly in the Bengal Subah province. Economic historian Indrajit Ray estimates shipbuilding output of Bengal during the sixteenth and seventeenth centuries at 223,250 tons annually, compared with 23,061 tons produced in nineteen colonies in North America from 1769 to 1771.
Main article: Bengal Subah
See also: Muslin trade in Bengal
Bengal Subah was the Mughal Empire's wealthiest province, generating 50% of the empire's GDP and 12% of the world's GDP. According to Ray, it was globally prominent in industries such as textile manufacturing and shipbuilding. Bengal's capital city Dhaka was the empire's financial capital, with a population exceeding one million. It was an exporter of silk and cotton textiles, steel, saltpeter, and agricultural and industrial products.
Domestically, much of India depended on Bengali products such as rice, silks and cotton textiles.
In the first half of the 18th century, the Mughal Empire fell into decline. Delhi was sacked in Nader Shah's invasion of the Mughal Empire, the treasury emptied, tens of thousands killed, and many thousands more carried off, along with their livestock, as slaves, weakening the empire and leading to the emergence of post-Mughal states. The Mughals were replaced by the Marathas as the dominant military power in much of India, while other smaller regional kingdoms that were mostly late Mughal tributaries, such as the Nawabs in the north and the Nizams in the south, declared autonomy. However, the efficient Mughal tax-administration system was left largely intact, with Tapan Raychaudhuri estimating that revenue assessment actually increased to 50 percent or more of production, in contrast to China's 5 to 6 percent, to cover the cost of the wars. For the late Mughal economy in the same period, Maddison gives the following estimate of income distribution:
|Social group|% of population|% of total income|Income in terms of per-capita mean|
|Merchants to sweepers|17|37|2.2|
Among the post-Mughal states that emerged in the 18th century, the dominant economic powers were Bengal Subah (under the Nawabs of Bengal) and the South Indian Kingdom of Mysore (under Hyder Ali and Tipu Sultan). The former was devastated by the Maratha invasions of Bengal: six invasions over a decade that are claimed to have killed hundreds of thousands, blocked trade with the Persian and Ottoman empires, and weakened the territory's economy to the point that the Nawab of Bengal agreed to a peace treaty with the Marathas. The agreement made Bengal Subah a tributary of the Marathas, committing it to pay Rs. 1.2 million in tribute annually as the chauth of Bengal and Bihar. The Nawab of Bengal also paid Rs. 3.2 million to the Marathas towards the arrears of chauth for the preceding years. The chauth was paid annually by the Nawab of Bengal up to his defeat at the Battle of Plassey by the East India Company in 1757.
Jeffrey G. Williamson argued that India went through a period of deindustrialization in the latter half of the 18th century as an indirect outcome of the collapse of the Mughal Empire, and that British rule later caused further deindustrialization. According to Williamson, the Mughal Empire's decline reduced agricultural productivity, which drove up food prices, then nominal wages, and then textile prices, causing India to lose textile market share to Britain even before the latter developed factory technology, though Indian textiles maintained a competitive advantage over British textiles until the 19th century. Prasannan Parthasarathi countered that several post-Mughal states did not decline, notably Bengal and Mysore, which remained comparable to Britain into the late 18th century.
A year after losing its trading base of Calcutta to the new Nawab of Bengal Subah, Siraj ud-Daulah, the British East India Company won a decisive victory over the Nawab and his French East India Company allies at the Battle of Plassey in 1757. The victory was secured by agreeing to install the Nawab's military commander, Mir Jafar, as a Company-friendly replacement if he turned Siraj ud-Daulah's numerically superior forces against his master's household, and by partitioning the Nawab's treasury to compensate both parties. The Company regained and fortified Calcutta, and from 1765 gained the right to collect tax revenues in Bengal Subah on the Nawab's behalf, together with the right to trade tax free, to fortify the cities and factories it established, and to raise local armies, turning the mercantile company into an effective state apparatus and, later, a proxy for the British Crown. Following the Indian Rebellion of 1857, the British Crown intervened and established a formal colonial administration in the Company-controlled territory.
Immediately after gaining the right to collect revenue on behalf of the Nawab of Bengal, the East India Company largely ceased, for more than a decade, its century-and-a-half-old practice of importing gold and silver, which it had hitherto used to pay for the goods shipped back to Britain, the American colonies and East Asia, or on to African slavers to be bartered for slaves in the Atlantic slave trade.
In addition, as under Mughal rule, land and opium revenue collected in the Bengal Presidency helped finance the Company's administration, raise sepoy armies, and fund wars in other parts of India and, later, further afield, for example the Opium Wars, with additional capital raised, typically at 10 percent, from Bania moneylenders.
In the period 1760–1800, Bengal's money supply was greatly diminished. The closing of some local mints and close supervision of the rest, the fixing of exchange rates and the standardization of coinage added to the economic downturn.
During this period, the East India Company began tax administration reforms in a fast expanding empire spread over 250 million acres (1,000,000 km2), or 35 percent of Indian domain, with regional land, opium and salt taxes set, and collected. Indirect rule was established on protectorates and buffer states.
During the period 1780–1860 India changed from an exporter of processed goods paid for in bullion to an exporter of raw materials and a buyer of manufactured goods.
The abolition of the Atlantic slave trade, from 1807, both eliminated a significant export market, and encouraged Caribbean plantations to organize the import of South Asian labor.
By 1820, India had fallen from the top rank to become the second-largest economy in the world, behind China.
British economic policies gave Britain a monopoly over India's large market and cotton resources.
In the 1750s, fine cotton and silk were exported from India to markets in Europe, the Americas, Asia, and Africa. East India Company-supplied cotton pieces comprised approximately 30%, by value, of the trade goods bartered for slaves in the Anglo-African triangular trade, and also featured in the French and Arab slave trades.
From the late 18th century, British industry began to lobby the government to reintroduce the Calico Acts and again tax Indian textile imports, while in parallel gaining access to the markets of India. Parliament partly conceded, removing the East India Company's two-hundred-year-old monopoly on most British trade with India through the Charter Act of 1813 and forcing the hitherto protected Indian market to open to British goods, which could now be sold in India without Company tariffs or duties. Starting in the early 19th century, British textiles began to appear in Indian markets, with the value of textile imports growing from £5.2 million in 1850 to £18.4 million in 1896. Raw cotton was imported without tariffs to British factories, which manufactured textiles and sold them back to India, also without tariffs.
Indian historian Rajat Kanta Ray noted that the relative decline of the Indian cotton textile industry started in the mid-1820s. The pace of its decline was slow though steady at first, but reached a crisis by 1860, when 563,000 textile workers lost their jobs. Ray estimates that the industry shrank by about 28% by 1850. However, it survived in the high-end and low-end domestic markets. Ray argued that British discriminatory policies undoubtedly depressed the industry's exports, but suggested that its decay is better explained by technological innovations in Britain, with Amiya Bagchi estimating the impact of the invention of the spinning mule on the employment of hand spinners.
Indian textiles had maintained a competitive advantage over British textiles up until the 19th century, when Britain eventually overtook India as the world's largest cotton textile manufacturer. In 1811, Bengal was still a major exporter of cotton cloth to the Americas and the Indian Ocean. However, Bengali cotton exports declined over the course of the early 19th century, as British imports to Bengal increased, from 25% in 1811 to 93% in 1840.
In the second quarter of the 19th century, raw materials, chiefly raw cotton, opium, and indigo, accounted for most of India's exports. By the end of the 1930s, Indian exports of textiles, raw cotton, jute, hemp, and silk exceeded $200 million annually.
Exploitable mineral deposits began to be identified under the East India Company, with the first coal mines opened and the Geological Survey of India established to identify and map the available resources in the territory. A modern iron and steel industry was established in India in the second half of the 19th century; by the 1940s, over 3 million tonnes of metals and 25 million tonnes of coal were being produced annually.
The East India Company's trade- and industry-enabling metalled road network was expanded from the 2,500 kilometres (1,600 mi) constructed by 1850 to 350,000 kilometres (220,000 mi) by 1943.
In 1787, a gunpowder factory was established at Ishapore; it began production in 1791 and, from 1904, became the Rifle Factory Ishapore. In 1801, the Gun & Shell Factory, Calcutta was established, with production beginning on 18 March 1802. There were eighteen ordnance factories before India became independent in 1947.
Under the East India Company, the first Indian-authored publications appeared, printed on locally produced paper from locally established paper mills, beginning with Hicky's Bengal Gazette; by the 1940s, a hundred thousand tonnes of paper were being produced annually.
Main article: British Raj
The formal dissolution of the Mughal Dynasty heralded a change in British treatment of Indian subjects. During the British Raj, massive railway projects were begun in earnest and government jobs and guaranteed pensions attracted a large number of upper caste Hindus into the civil service for the first time. British cotton exports absorbed 55 percent of the Indian market by 1875. In the 1850s the first cotton mills opened in Bombay, posing a challenge to the cottage-based home production system based on family labour.
It is believed the British took $65 trillion from India during its time as the jewel in the crown of the British Empire, twice the gross domestic product of the United States of America.
After its victory in the Franco-Prussian War (1870–71), Germany extracted a huge indemnity from France of £200,000,000, and then moved to join Britain on a gold monetary standard. France, the US, and other industrialising countries followed Germany in adopting gold in the 1870s. Countries such as Japan that did not have the necessary access to gold or those, such as India, that were subject to imperial policies remained mostly on a silver standard. Silver-based and gold-based economies then diverged dramatically. The worst affected were silver economies that traded mainly with gold economies. Silver reserves increased in size, causing gold to rise in relative value. The impact on silver-based India was profound, given that most of its trade was with Britain and other gold-based countries. As the price of silver fell, so too did the exchange value of the rupee, when measured against sterling.
The Indian economy grew at about 1% per year from 1890 to 1910, in line with, and largely dependent on, increased agricultural output through schemes such as the Punjab Canal Colonies, the Ganges Canal, and the cultivation of 4,000,000 acres of Assam jungle; the growth of land under cultivation, however, only kept pace with a population that doubled in the same period. The result was little change in real income levels. Agriculture was still dominant, with most peasants at the subsistence level.
Entrepreneur Jamsetji Tata (1839–1904) began his industrial career in 1877 with the Central India Spinning, Weaving, and Manufacturing Company in Bombay. While other Indian mills produced cheap coarse yarn (and later cloth) using local short-staple cotton and simple machinery imported from Britain, Tata did much better by importing expensive longer-stapled cotton from Egypt and buying more complex ring-spindle machinery from the United States to spin finer yarn that could compete with imports from Britain.
In the 1890s, Tata launched plans to expand into heavy industry using Indian funding. The Raj did not provide capital, but, aware of Britain's declining position against the US and Germany in the steel industry, it wanted steel mills in India, so it promised to purchase any surplus steel Tata could not otherwise sell.
By the end of the 1930s, cotton, jute, peanuts, tea, tobacco, and hides accounted for the majority of the more than $500 million in agriculture-derived annual exports.
British investors built a modern railway system in the late 19th century—it became the then fourth-largest in the world and was renowned for the quality of construction and service. The government was supportive, realising its value for military use and for economic growth. The railways at first were privately owned and operated, and run by British administrators, engineers and skilled craftsmen. At first, only the unskilled workers were Indians.
A plan for a rail system was first advanced in 1832. The first train ran from Red Hills to Chintadripet bridge in Madras, inaugurated in 1837. It was called Red Hill Railway. It was used for freight transport. A few more short lines were built in the 1830s and 1840s. They did not interconnect and were used for freight forwarding. The East India Company (and later the colonial government) encouraged new railway companies backed by private investors under a scheme that would provide land and guarantee an annual return of up to five percent during the initial years of operation. The companies were to build and operate the lines under a 99-year lease, with the government retaining the option to buy them earlier. In 1854 Governor-General Lord Dalhousie formulated a plan to construct a network of trunk lines connecting the principal regions. A series of new rail companies were established, leading to rapid expansion.
In 1853, the first passenger train service was inaugurated between Bori Bunder in Bombay and Thane, covering a distance of 34 km (21 mi). The route mileage of this network increased from 1,349 km (838 mi) in 1860 to 25,495 km (15,842 mi) in 1880 – mostly radiating inland from the port cities of Bombay, Madras and Calcutta. Most of the railway construction was done by Indian companies supervised by British engineers. The system was sturdily built. Several large princely states built their own rail systems and the network spread across India. By 1900 India had a full range of rail services with diverse ownership and management, operating on broad, metre and narrow gauge networks.
Headrick argues that both the Raj lines and the private companies hired only European supervisors, civil engineers and even operating personnel, such as locomotive engineers. The government's Stores Policy required that bids on railway contracts be submitted to the India Office in London, shutting out most Indian firms. The railway companies purchased most of their hardware and parts in Britain. Railway maintenance workshops existed in India, but were rarely allowed to manufacture or repair locomotives. TISCO first won orders for rails only in the 1920s. Christensen (1996) looked at colonial purpose, local needs, capital, service and private-versus-public interests. He concluded that making the railways dependent on the state hindered success, because railway expenses had to go through the same bureaucratic budgeting process as did all other state expenses. Railway costs could therefore not respond to needs of the railways or their passengers.
In 1951, forty-two separate railway systems, including thirty-two lines owned by the former Indian princely states, were amalgamated to form a single unit named the Indian Railways. The existing rail systems were abandoned in favor of zones in 1951 and a total of six zones came into being in 1952.
The first refineries were established to produce kerosene, petrol, paints, and other chemicals locally, with production increasing once local deposits had been identified; by the 1940s, sixty million gallons of petrochemicals were being produced annually.
Debate continues about the economic impact of British imperialism on India. The issue was first raised by Edmund Burke, who in the 1780s vehemently attacked the East India Company, claiming that Warren Hastings and other top officials had ruined the Indian economy and society; it was elaborated on in the 19th century by Romesh Chunder Dutt. Indian historian Rajat Kanta Ray (1998) continued this line of reasoning, saying that British rule in the 18th century took the form of plunder and was a catastrophe for the traditional economy. According to the economic drain theory, supported by Ray, the British depleted food and money stocks and imposed high taxes that helped cause the terrible famine of 1770, which killed a third of the people of Bengal. Ray also argued that British rule failed to offer the encouragement, technology transfers, and protectionist frameworks necessary for India to replicate Britain's own industrialisation before independence.
British historian P. J. Marshall reinterpreted the view that the prosperity of the Mughal era gave way to poverty and anarchy, arguing that the British takeover was not a sharp break with the past. British control was delegated largely through regional rulers and was sustained by a generally prosperous economy through the 18th century, except for the frequent, deadly famines. Marshall notes the British raised revenue through local tax administrators and kept the old Mughal tax rates. Instead of the Indian nationalist account of the British as alien aggressors, seizing power by brute force and impoverishing the region, Marshall presents a British nationalist interpretation in which the British were not in full control, but instead were controllers in what was primarily an Indian-run society and in which their ability to keep power depended upon cooperation with Indian elites. Marshall admitted that much of his interpretation is rejected by many historians.
Some historians point to Company rule as a major factor in both India's deindustrialization and Britain's Industrial Revolution, suggesting capital amassed from Bengal following its 1757 conquest supported investment in British industries such as textile manufacture during the Industrial Revolution as well as increasing British wealth, while contributing to deindustrialization in Bengal.
Other economic historians have blamed colonial rule for the current dismal state of India's economy, with investment in Indian industries limited since it was a colony. Under British rule, a number of India's native manufacturing industries shrank. The economic policies of the British Raj caused a severe decline in the handicrafts and handloom sectors, with reduced demand and dipping employment; the yarn output of the handloom industry, for example, declined from 419 million pounds in 1850 to 240 million pounds in 1900. During the British East India Company's rule in India, production of food crops declined, farmers suffered mass impoverishment and destitution, and numerous famines occurred. The result was a significant transfer of capital from India to England, which led to a massive drain of revenue rather than any systematic effort at modernisation of the Indian economy.
There is no doubt that our grievances against the British Empire had a sound basis. As the painstaking statistical work of the Cambridge historian Angus Maddison has shown, India's share of world income collapsed from 22.6% in 1700, almost equal to Europe's share of 23.3% at that time, to as low as 3.8% in 1952. Indeed, at the beginning of the 20th century, "the brightest jewel in the British Crown" was the poorest country in the world in terms of per capita income.
Economic historians have investigated regional differences in taxation and public-good provision across the British Raj, finding a strong positive correlation between education spending and literacy in India, with historic provincial policies still affecting comparative economic development, productivity, and employment.
Other economic historians debate the impact of Mahatma Gandhi's establishment of the Swadeshi movement and the All India Village Industries Association in the 1930s, which promoted an alternative, self-sufficient, indigenous village economy as an approach to development in place of the classical Western economic model, along with the impact of the nonviolent resistance movement, with its mass boycotting of industrial goods, tax strikes, and abolition of the salt tax, on public revenues, public programmes, growth, and industrialisation in the last quarter of the British Raj.
India served as both a significant supplier of raw goods to British manufacturers and a large captive market for British manufactured goods.
Main article: Deindustrialisation in India
India accounted for 25% of the world's industrial output in 1750, declining to 2% of the world's industrial output in 1900. Britain replaced India as the world's largest textile manufacturer in the 19th century. In terms of urbanization, Mughal India had a higher percentage of its population (15%) living in urban centers in 1600 than British India did in the 19th century.
Several economic historians claimed that in the 18th century real wages were falling in India, and were "far below European levels". This has been disputed by others, who argued that real wage decline occurred in the early 19th century, or possibly beginning in the late 18th century, largely as a result of "globalization forces".
Clingingsmith and Williamson argue that India deindustrialized in the period between 1750 and 1860, before later reindustrializing, due to two very different causes. Between 1750 and 1810, they suggest the loss of Mughal hegemony allowed new despotic rulers to revenue-farm their conquered populations, with tax and rent demands increasing to 50% of production, compared with the 5–6% extracted in China during the period, levied largely to fund regional warfare. Combined with the use of labour and livestock for martial purposes, this drove up grain and textile prices, along with nominal wages, as the populace attempted to meet the demands, reducing the competitiveness of Indian handicrafts and damaging the regional textile trade. Then from 1810 to 1860, the expansion of the British factory system drove down the relative price of textiles worldwide through productivity advances, a trend that was magnified in India as the concurrent transport revolution dramatically reduced transportation costs; in a subcontinent that had not seen metalled roads, the introduction of mechanical transport exposed once-protected markets to global competition, hitting artisanal manufacture but stabilizing the agricultural sector.
Angus Maddison states:
... This was a shattering blow to manufacturers of fine muslins, jewellery, luxury clothing and footwear, decorative swords and weapons. My own guess would be that the home market for these goods was about 5 percent of Moghul national income and the export market for textiles probably another 1.5 percent.
See also: Great Divergence
Historians have questioned why India failed to industrialise: the global cotton industry underwent a technological revolution in the 18th century, while Indian industry stagnated after adopting the flying shuttle, and industrialisation in India began only in the late 19th century. Several historians have suggested that this was because India was still a largely agricultural nation with low commodity-money wage levels, arguing that nominal wages were high in Britain, so cotton producers had an incentive to invent and purchase expensive new labour-saving technologies, whereas wage levels were low in India, so producers preferred to increase output by hiring more workers rather than investing in technology.
British colonial rule created an institutional environment that stabilized Indian society, though it stifled trade with the rest of the world. The British created a well-developed system of railways, telegraphs and a modern legal system. Extensive irrigation systems were built, providing an impetus for growing cash crops for export and raw materials for Indian industry, especially jute, cotton, sugarcane, coffee, rubber, and tea.
The Tata Iron and Steel Company (TISCO), headed by Dorabji Tata, opened its plant at Jamshedpur in Bihar (in present-day Jharkhand) in 1908. It became the leading iron and steel producer in India, with 120,000 employees in 1945. TISCO became a symbol of Indian technical skill, managerial competence, entrepreneurial flair, and high pay for industrial workers.
During the First World War, the railways were used to transport troops and grains to Bombay and Karachi en route to Britain, Mesopotamia and East Africa. With shipments of equipment and parts from Britain curtailed, maintenance became much more difficult; critical workers entered the army; workshops were converted to make artillery; locomotives and cars were shipped to the Middle East. The railways could barely keep up with the sudden increase in demand. By the end of the war, the railways had deteriorated badly. In the Second World War the railways' rolling stock was diverted to the Middle East, and the railway workshops were again converted into munitions workshops.
Non-royal private wealth was encouraged by colonial administrations during these times. The houses of Birla and Sahu Jain began to challenge the houses of Martin Burn, Bird Heilgers and Andrew Yule. About one-ninth of the national population was urban by 1925.
The 20-year economic boom cycle ended with the Great Depression of 1929, which had a direct impact on India, though relatively little impact on the modern secondary sector. The colonial administration did little to alleviate debt stress. The worst consequences involved deflation, which increased the burden of debt on villagers. Total economic output did not decline between 1929 and 1934. The worst-hit sector was jute, based in Bengal, which was an important element in overseas trade; it had prospered in the 1920s but prices dropped in the 1930s. Employment also declined, while agriculture and small-scale industry exhibited gains. The most successful new industry was sugar, which had meteoric growth in the 1930s.
The gold-silver ratio peaked at 100:1 by 1940. The Bank of England records that the Indian central bank held a positive balance of £1,160 million on 14 July 1947, and that British India maintained a trade surplus with the United Kingdom for the duration of the British Raj, for example:
|Period|Balance of trade and net invisibles|War expenditure|Other sources|Total|
|September 1939 – March 1940|65|2|13|80|
Source: Indian sterling balances, p. 2, 15 Jan. 1947, Bank of England (BoE), OV56/55.
Studies of comparative tax burdens in the British Empire, measured by the days of labour required to meet the per-capita tax bill, income tax rates, and gross colonial revenues, indicate that the tax burden in India required approximately half the number of days of labour to meet as that of the UK, and a third that of some settler colonies such as New Zealand, Australia, Canada, and Hong Kong. Some economic historians speculate that this deprived the colonial Indian administration of the revenue necessary to provide the public goods that accelerated economic development, literacy, and industrialisation elsewhere in the empire.
The newly independent but weak Union government's treasury reported annual revenue of £334 million in 1950. In contrast, Nizam Asaf Jah VII of Hyderabad State was widely reported to have a fortune of almost £668 million at the time. About one-sixth of the national population was urban by 1950. A US dollar exchanged for 4.79 rupees.
See also: Economy of India
The phrase "Hindu rate of growth" was used by some socialista to refer to the low annual growth rate of the economy of India before 1991, suggesting that the blame for low growth lies with the Hinduism. It remained around 3.5% from the 1950s to 1980s, while per capita income growth averaged 1.3% a year. During the same period, South Korea grew by 10% and Taiwan by 12%. The Indian economy of this period is characterised as Dirigism.
Before independence a large share of tax revenue was generated by the land tax. Thereafter land taxes steadily declined as a share of revenues.
The economic problems inherited at independence were exacerbated by the costs associated with the partition, which had resulted in about 2 to 4 million refugees fleeing past each other across the new borders between India and Pakistan. Refugee settlement was a considerable economic strain. Partition divided India into complementary economic zones. Under the British, jute and cotton were grown in the eastern part of Bengal (East Pakistan, after 1971, Bangladesh), but processing took place mostly in the western part of Bengal, which became the Indian state of West Bengal. As a result, after independence India had to convert land previously used for food production to cultivate cotton and jute.
Growth continued in the 1950s, but the rate of growth was lower than India's politicians had expected.
Toward the end of Nehru's term as prime minister, India experienced serious food shortages.
Beginning in 1950, India faced trade deficits that increased in the 1960s. The Government of India had a major budget deficit and therefore could not borrow money internationally or privately. As a result, the government issued bonds to the Reserve Bank of India, which increased the money supply, leading to inflation. The Indo-Pakistani War of 1965 led the US and other countries friendly towards Pakistan to withdraw foreign aid to India, which necessitated devaluation. India was told it had to liberalise trade before aid would resume. The response was the politically unpopular step of devaluation accompanied by liberalisation. Defence spending in 1965/1966 was 24.06% of expenditure, the highest in the period from 1965 to 1989. Exacerbated by the drought of 1965/1966, the devaluation was severe. GDP per capita grew 33% in the 1960s, reaching a peak growth of 142% in the 1970s, before decelerating to 41% in the 1980s and 20% in the 1990s.
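Treating the decadal figures quoted above as total growth over each ten-year span (an assumption, since the text does not specify whether they are real or nominal), the implied compound annual rates can be recovered with a one-line formula; this sketch is purely illustrative.

```python
# Convert the decadal GDP-per-capita growth figures quoted above into
# compound annual growth rates: annual = (1 + decadal)**(1/10) - 1.
decadal_growth_pct = {"1960s": 33, "1970s": 142, "1980s": 41, "1990s": 20}

for decade, pct in decadal_growth_pct.items():
    annual_pct = ((1 + pct / 100) ** 0.1 - 1) * 100
    print(f"{decade}: {pct}% over the decade ~ {annual_pct:.1f}% per year")

# 142% over the 1970s works out to roughly 9.2% per year compounded,
# while 20% over the 1990s is under 2% per year.
```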
From FY 1951 to FY 1979, the economy grew at an average rate of about 3.1 percent a year, or at an annual rate of 1.0 percent per capita. During this period, industry grew at an average rate of 4.5 percent a year, compared with 3.0 percent for agriculture.
The 20-year economic boom cycle ended in 1971 with the Nixon shock. In 1975 India's GDP (in 1990 US dollars) was $545 billion, $1,561 billion in the USSR, $1,266 billion in Japan, and $3,517 billion in the US.
Prime Minister Indira Gandhi proclaimed a national emergency and suspended the Constitution in 1975. About one-fifth of the national population was urban by 1975.
Prime Minister Nehru was a believer in socialism and decided that India needed maximum steel production. He, therefore, formed a government-owned company, Hindustan Steel Limited (HSL) and set up three steel plants in the 1950s.
Main article: Economic liberalisation in India
Economic liberalisation in India in the 1990s and first two decades of the 21st century led to large economic changes.
About one-fourth of the national population was urban by 2000.
The Indian steel industry began expanding into Europe in the 21st century. In January 2007 India's Tata bought European steel maker Corus Group for $11.3 billion. In 2006 Mittal Steel (based in London but with Indian management) acquired Arcelor for $34.3 billion to become the world's biggest steel maker, ArcelorMittal, with 10% of world output.
The GDP of India in 2007 was estimated at about 8 percent that of the US. The government started the Golden Quadrilateral road network connecting Delhi, Chennai, Mumbai and Kolkata with various Indian regions. The project, completed in January 2012, was the most ambitious infrastructure project of independent India.
The top 3% of the population still earn 50% of GDP.
The 28-year economic boom cycle ended in 2020, when the coronavirus pandemic led to a temporary recession in the Indian economy. The second and third quarters of the 2020 financial year saw GDP drops of 23.9% and 8.6% respectively.
For purchasing power parity comparisons, the US dollar is converted at 9.46 rupees. Despite continuous real GDP growth of at least 5% since 2009, the Indian economy remained mired in bureaucratic hurdles.
HOW FOSSIL FUEL WAS FORMED AND UTILIZED

Contrary to what many people believe, fossil fuels are not the remains of dead dinosaurs. In fact, most of the fossil fuels found today were formed millions of years before the first dinosaurs. Fossil fuels, however, were once alive. They were formed from prehistoric plants and animals that lived hundreds of millions of years ago.
Think about what the Earth must have looked like 300 million years or so ago. The land masses we live on today were just forming. There were swamps and bogs everywhere. The climate was warmer. Trees and plants grew everywhere. Strange looking animals walked on the land, and just as weird looking fish swam in the rivers and seas.
Tiny one-celled organisms called proto-plankton floated in the ocean. When these living things died, they decomposed and became buried under layers and layers of mud, rock, and sand. Eventually, hundreds and sometimes thousands of feet of earth covered them. In some areas, the decomposing materials were covered by seas, and then the seas dried up and receded.
During the millions of years that passed, the dead plants and animals slowly decomposed into organic materials and formed fossil fuels. Different types of fossil fuels were formed depending on what combination of animal and plant debris was present, how long the material was buried, and what conditions of temperature and pressure existed when they were decomposing.
For example, oil and natural gas were created from organisms that lived in the water and were buried under ocean or river sediments. Long after the great prehistoric seas and rivers vanished, heat, pressure, and bacteria combined to compress and “cook” the organic material under layers of silt.
In most areas, a thick liquid called oil formed first, but in deeper, hot regions underground, the cooking process continued until natural gas was formed.
Over time, some of this oil and natural gas began working its way upward through the earth’s crust until they ran into rock formations called “caprocks” that are dense enough to prevent them from seeping to the surface. It is from under these caprocks that most oil and natural gas is produced today.
The same types of forces also created coal, but there are a few differences. Coal formed from the remains of trees, ferns, and other plants that lived 300 to 400 million years ago. In some areas, such as portions of what is now the eastern United States, coal was formed from swamps covered by sea water.
The sea water contained a large amount of sulfur, and as the seas dried up, the sulfur was left behind in the coal.
Today, scientists are working on ways to take the sulfur out of coal because when coal burns, the sulfur can become an air pollutant. Some coal deposits, however, were formed from freshwater swamps which had very little sulfur in them. These coal deposits, located largely in the western part of the United States, have much less sulfur in them.
All of these fossil fuels have played important roles in providing the energy that every man, woman, and child in the United States uses. With better technology for finding and using fossil fuels, each can play an equally important role in the future.
WHAT IS COAL?
Coal looks like a shiny black rock. Coal has lots of energy in it. When it is burned, coal makes heat and light energy. The cave men used coal for heating, and later for cooking. Burning coal was easier because coal burned longer than wood and, therefore, did not have to be collected as often. People began using coal in the 1800s to heat their homes.
Trains and ships used coal for fuel. Factories used coal to make iron and steel. Today, we burn coal mainly to make electricity.
COAL IS A FOSSIL FUEL
Coal was formed millions of years ago, before the dinosaurs. Back then, much of the earth was covered by huge swamps. They were filled with giant ferns and plants. As the plants died, they sank to the bottom of the swamps. Over the years, thick layers of plants were covered by dirt and water. They were packed down by the weight.
After a long time, the heat and pressure changed the plants into coal. Coal is called a fossil fuel because it was made from plants that were once alive! Since coal comes from plants, and plants get their energy from the sun, the energy in coal also came from the sun.
The coal we use today took millions of years to form. We can’t make more in a short time. That is why coal is called nonrenewable.
COAL IS OUR MOST ABUNDANT FUEL
The United States has more coal reserves than any other country in the world. In fact, one-fourth of all the known coal in the world is in the United States. The United States has more coal that can be mined than the rest of the world has oil that can be pumped from the ground. We have enough to last more than 250 years. Currently, coal is mined in 25 of the 50 states.
Coal is used primarily in the United States to generate electricity. In fact, it is burned in power plants to produce more than half of the electricity we use. A stove uses about half a ton of coal a year. A water heater uses about two tons of coal a year. And a refrigerator, that’s another half-ton a year. Even though you may never see coal, you use several tons of it every year. Coal is not only our most abundant fossil fuel, it is also the one with perhaps the longest history.
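A quick tally of the appliance figures just given shows how fast the coal equivalents add up; the list below includes only the appliances named above and is a rough illustration, not a complete household inventory.

```python
# Tally the coal-equivalent figures given above for a single household (tons per year).
coal_equivalent_tons = {
    "stove": 0.5,
    "water heater": 2.0,
    "refrigerator": 0.5,
}

total_tons = sum(coal_equivalent_tons.values())
print(f"Just these three appliances: about {total_tons} tons of coal per year")
# Add lighting, heating and cooling, and electronics, and "several tons"
# per household per year is easy to reach.
```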
A BRIEF HISTORY OF COAL
Coal is the most plentiful fuel in the fossil family, and it has the longest and perhaps the most varied history. Coal has been used for heating since the days of the cave man. Archeologists have also found evidence that the Romans in England used it in the second and third centuries (AD 100–300).
During the 1300s in North America, the Hopi Indians used coal for cooking, for heating, and to bake the pottery they made from clay. Coal was later rediscovered in the United States by explorers in 1673. In the 1700s, the English found that coal could produce a fuel that burned cleaner and hotter than wood charcoal. The Industrial Revolution played a major role in expanding the use of coal.
A man named James Watt invented the steam engine which made it possible for machines to do work previously done by humans and animals. Mr. Watt used coal to make the steam to run his engine. During the first half of the 1800s, the Industrial Revolution spread to the United States. Steamships and steam-powered railroads were main forms of transportation, and they used coal to fuel their boilers.
In the second half of the 1800s, more uses for coal were found. During the Civil War, weapons factories were beginning to use coal. By 1875, coke (which is made from coal, and is not the same as Coca-Cola) replaced charcoal as the primary fuel for iron blast furnaces used to make steel. The burning of coal to generate electricity is a relative newcomer in the long history of this fossil fuel.
It was in the 1880s when coal was first used to generate electricity for homes and factories. By 1961, coal had become the major fuel used to generate electricity in the United States. Long after homes were being lighted by electricity produced by coal, many of them continued to have furnaces for heating and some had stoves for cooking that were fueled by coal. Today we use a lot of coal, primarily because we have a lot of it and we know where it is in the United States.
COAL MINING AND TRANSPORTATION
Most coal is buried under the ground. If coal is near the surface, miners dig it up with huge machines. First, they scrape off the dirt and rock, then dig out the coal. This is called surface mining. After the coal is mined, they put back the dirt and rock. They plant trees and grass.
The land can then be used again. This is called reclamation. If the coal is deep in the ground, tunnels called mine shafts are dug down to the coal. Machines dig the coal and carry it to the surface. Some mine shafts are 1,000 feet deep. This is called deep mining, or underground mining.
In the mine, coal is loaded in small coal cars or on conveyor belts which carry it outside the mine to where the larger chunks of coal are loaded into trucks that take it to be crushed (smaller pieces of coal are easier to transport, clean, and burn). The crushed coal can then be sent by truck, ship, railroad, or barge.
You may be surprised to know that coal can also be shipped by pipeline. Crushed coal can be mixed with oil or water (the mixture is called a slurry) and sent by pipeline to an industrial user.
CONVERTING COAL INTO ELECTRICITY
Nine out of every 10 tons of coal mined in the United States today are used to make electricity, and nearly half of the electricity used in this country is coal-generated electricity. Electricity from coal is the electric power made from the energy stored in coal. Carbon, made from ancient plant material, gives coal most of its energy. This energy is released when coal is burned.
We use coal-generated electricity for many of the things we do every day, and much more.
The process of converting coal into electricity has multiple steps and is similar to the process used to convert oil and natural gas into electricity:
1. A machine called a pulverizer grinds the coal into a fine powder.
2. The coal powder mixes with hot air, which helps the coal burn more efficiently, and the mixture moves to the furnace.
3. The burning coal heats water in a boiler, creating steam.
4. Steam from the boiler spins the blades of an engine called a turbine, transforming heat energy from burning coal into mechanical energy that spins the turbine engine.
5. The spinning turbine is used to power a generator, a machine that turns mechanical energy into electric energy. This happens when magnets inside a copper coil in the generator spin.
6. A condenser cools the steam moving through the turbine. As the steam is condensed, it turns back into water.
7. The water returns to the boiler, and the cycle begins again.
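A back-of-the-envelope energy balance helps put the steps above in perspective. The heat content (about 24 MJ per kilogram for a typical bituminous coal) and the plant efficiency (around 35%) used below are common textbook values, not figures from this document, so treat the result as a rough sketch rather than an exact number.

```python
# Back-of-the-envelope electricity yield from one metric ton of coal.
# Assumed values (typical textbook numbers, not from this document):
heat_content_mj_per_kg = 24.0   # heat released by burning 1 kg of bituminous coal
plant_efficiency = 0.35         # share of that heat converted to electricity (steps 3-5)
kg_per_metric_ton = 1000
mj_per_kwh = 3.6                # 1 kWh = 3.6 MJ

heat_mj = heat_content_mj_per_kg * kg_per_metric_ton   # heat from the furnace (step 3)
electric_mj = heat_mj * plant_efficiency               # what the generator delivers (step 5)
kwh = electric_mj / mj_per_kwh

print(f"Roughly {kwh:,.0f} kWh of electricity per ton of coal")
# About 2,300 kWh per ton; the remaining ~65% of the heat is rejected
# in the condenser (step 6).
```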
Electricity-generating plants send out electricity using a transformer, which changes the electricity from low voltage to high voltage. This is an important step, as it gives electricity the jolt it needs to travel from the power plant to its final destination. Voltages are often as high as 500,000 volts at this point.
Electricity flows along transmission lines to substation transformers. These transformers reduce the voltage for use in the local areas to be served. From the substation transformers, electricity travels along distribution lines, which can be either above or below the ground, to cities and towns.
Transformers once again reduce the voltage—this time to about 120 to 140 volts—for safe use inside homes and businesses. The delivery process is instantaneous. By the time you have flipped a switch to turn on a light, electricity has been delivered.
COAL’S ROLE IN OUR ELECTRICAL SUPPLY
Natural gas and oil are also used to make electricity. How does coal compare to these other fossil fuels? In terms of supply, coal has a clear advantage. The United States has nearly 300 billion tons of recoverable coal. That is enough to last more than 250 years if we continue to use coal at the same rate as we use it today.
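The reserve figure and the 250-year horizon quoted above together imply a particular annual consumption rate; this small check uses only the two numbers given in that paragraph.

```python
# Implied annual US coal consumption from the two figures quoted above.
recoverable_reserves_tons = 300e9   # "nearly 300 billion tons"
years_of_supply = 250               # "more than 250 years"

implied_annual_use_tons = recoverable_reserves_tons / years_of_supply
print(f"Implied consumption: about {implied_annual_use_tons / 1e9:.1f} billion tons per year")
# 300 billion tons spread over 250 years is roughly 1.2 billion tons per year.
```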
But what about costs? The mining, transportation, electricity generation, and pollution-control costs associated with using coal are increasing, but both natural gas and oil are becoming more expensive to use as well. This is, in part, because the United States must import much of its oil supply from other countries.
It has enough coal, however, to take care of its electricity needs, with enough left over to export some coal as well. The cost of using coal should continue to be even more competitive, compared with the rising cost of other fuels. In fact, generating electricity from coal is cheaper than the cost of producing electricity from natural gas.
In the United States, 23 of the 25 electric power plants with the lowest operating costs use coal. Inexpensive electricity, such as that generated by coal, means lower operating costs for businesses and for homeowners. This advantage can help increase coal’s competitiveness in the marketplace.
CLEANING UP COAL
Coal is our most abundant fossil fuel. The United States has more coal than the rest of the world has oil. There is still enough coal underground in this country to provide energy for the next 250 years or more. But coal is not a perfect fuel.
Trapped inside coal are traces of impurities like sulfur and nitrogen. When coal burns, these impurities are released into the air. While floating in the air, these substances can combine with water vapor (for example, in clouds) and form droplets that fall to earth as weak forms of sulfuric and nitric acid. Scientists call it “acid rain.”
There are also tiny specks of minerals—including common dirt—mixed in coal. These tiny particles don’t burn and make up the ash left behind in a coal combustor. Some of the tiny particles also get caught up in the swirling combustion gases and, along with water vapor, form the smoke that comes out of a coal plant’s smokestack. Some of these particles are so small that 30 of them laid side-by-side would barely equal the width of a human hair.
Also, coal, like all fossil fuels, is formed out of carbon. All living things—even people—are made up of carbon. (Remember—coal started out as living plants.) But when coal burns, its carbon combines with oxygen in the air and forms carbon dioxide. Carbon dioxide is a colorless, odorless gas, but in the atmosphere, it is one of several gases that can trap the earth's heat.
Many scientists believe this is causing the earth’s temperature to rise, and this warming could be altering the earth’s climate. Sounds like coal is a dirty fuel to burn. Many years ago, it was.
But things have changed. Especially in the last 20 years, scientists have developed ways to capture the pollutants trapped in coal before they can escape into the air.
We also have new technologies that cut back on the release of carbon dioxide by burning coal more efficiently. Many of these technologies belong to a family of energy systems called “clean coal technologies.”
HOW DO YOU MAKE COAL CLEANER?
Actually there are several ways. One way is to clean the coal before it arrives at the power plant. This is done by simply crushing the coal into small chunks and washing it. Another way is to use “scrubbers” that remove the sulfur dioxide (a pollutant) from the smoke of coal-burning power plants.
HOW DO SCRUBBERS WORK?
Most scrubbers rely on a very common substance found in nature called “limestone.” We literally have mountains of limestone throughout the United States. When crushed and processed, limestone can be made into a white powder. Limestone can be made to absorb sulfur gases under the right conditions—much like a sponge absorbs water.
In most scrubbers, limestone (or another similar material called lime) is mixed with water and sprayed into the coal combustion gases (called “flue gases”). The limestone captures the sulfur and “pulls” it out of the gases. The limestone and sulfur combine with each other to form either a wet paste (it looks like toothpaste!), or in some newer scrubbers, a dry powder.
In either case, the sulfur is trapped and prevented from escaping into the air.
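Ideal stoichiometry gives a feel for how much limestone a scrubber consumes. The simplified reaction used below (CaCO3 + SO2 producing CaSO3 and CO2) and the molar masses are standard chemistry rather than figures from this document, and real scrubbers feed limestone in excess, so treat the result as a lower bound.

```python
# Minimum limestone needed per ton of SO2 captured, assuming the simplified
# scrubber reaction CaCO3 + SO2 -> CaSO3 + CO2 (1 mole of limestone per mole of SO2).
molar_mass_caco3 = 100.09  # g/mol, calcium carbonate (limestone)
molar_mass_so2 = 64.07     # g/mol, sulfur dioxide

limestone_per_ton_so2 = molar_mass_caco3 / molar_mass_so2
print(f"At least {limestone_per_ton_so2:.2f} tons of limestone per ton of SO2 captured")
# About 1.6 tons of limestone per ton of SO2 in the ideal case; real scrubbers
# feed limestone in excess because mixing with the flue gas is imperfect.
```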
THE CLEANEST COAL TECHNOLOGY —A REAL GAS!
We can even turn coal into a gas—using lots of heat and water—in a process called gasification. When coal is turned into a gas, we can burn it and use it to spin a gas turbine to generate electricity. The exhaust gases coming out of the gas turbine are hot enough to boil water to make steam that can spin another type of turbine to generate even more electricity. But why go to all the trouble to turn the coal into gas if all you are going to do is burn it?
A big reason is that the pollutants in coal—like sulfur, nitrogen and carbon dioxide —can be almost entirely cleaned up when coal is changed into a gas. In fact, scientists have ways to remove 99.9 percent of the sulfur and small dirt particles from coal gas.
Gasifying coal is one of the best ways to clean pollutants out of coal. Another reason is that the coal gases don’t have to be burned. They can also be used as valuable chemicals. Scientists have developed ways to turn coal gases into everything from liquid fuels for cars and trucks to plastic toothbrushes!
COAL AND CLIMATE CHANGE
Carbon dioxide (CO2) is a colorless, odorless gas that is produced naturally when humans and animals breathe. The main source of man-made CO2 emissions, however, is the burning of fossil fuels (oil, natural gas and coal) for energy production.
Carbon dioxide is important for plants and animals, but if too much of it is produced, it can build up in the air and trap heat near the earth’s surface. This is called the greenhouse effect.
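To put a rough number on this, the chemistry of burning carbon (C + O2 forming CO2) fixes the mass of carbon dioxide produced per unit of carbon. The molar masses below are standard values; the 70% carbon content assumed for coal is a typical ballpark figure, not something stated in this document.

```python
# How much CO2 does burning coal produce? Carbon burns as C + O2 -> CO2,
# so each 12 g of carbon becomes 44 g of carbon dioxide.
molar_mass_c = 12.0     # g/mol
molar_mass_co2 = 44.0   # g/mol (12 + 2 * 16)

co2_per_kg_carbon = molar_mass_co2 / molar_mass_c      # ~3.67 kg CO2 per kg carbon
carbon_fraction_of_coal = 0.70                          # assumed typical value; varies by coal type
co2_per_kg_coal = co2_per_kg_carbon * carbon_fraction_of_coal

print(f"{co2_per_kg_carbon:.2f} kg CO2 per kg of carbon burned")
print(f"~{co2_per_kg_coal:.1f} kg CO2 per kg of coal (assuming {carbon_fraction_of_coal:.0%} carbon)")
```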
To clean CO2 from power plants, scientists have been studying how to capture the CO2 coming up a power plant's smokestack before it gets into the air. The CO2 can then be gathered, transported, and eventually stored deep underground or in the ocean, where it's supposed to sit for a long, long time.
Scientists are even studying ways to recycle the CO2 into new materials. The technical name for this process is carbon capture and storage, or carbon sequestration.
It is expected that coal and other fossil fuels will remain a major energy source for years to come. Many environmentalists believe that capturing and storing CO2 from power plants, combined with other efforts, could help fight climate change. Scientists continue to research and develop carbon sequestration technologies.
It is important to make sure these new processes are environmentally acceptable and safe. For example, scientists must determine that CO2 will not escape from under the ground, or contaminate drinking water supplies. Carbon capture and storage is an exciting area of research and development for today’s scientists.
WHAT IS NATURAL GAS?
Raw natural gas is a mixture of different gases. The main ingredient is methane, a natural compound that is formed whenever plant and animal matter decays. By itself, methane is odorless, colorless, and tasteless. As a safety measure, natural gas companies add a chemical odorant called mercaptan (it smells like rotten eggs) so escaping gas can be detected. Natural gas should not be confused with gasoline, which is made from petroleum.
WHERE IS NATURAL GAS FOUND?
Like petroleum, natural gas can be found throughout the world. It is estimated that there are still vast amounts of natural gas left in the ground. However, it is very difficult to estimate how much natural gas is still underground. New technologies are helping to make the process a little easier and more accurate.
Recent estimates show that most of the world’s natural gas reserves are located in the Middle East, Europe, and the former U.S.S.R., with these reserves making up nearly 75 percent of total worldwide reserves. Roughly 16 percent of the reserves are located in Africa and Asia and another 4 percent in Central and South America.
The United States makes up almost 4 percent. While the United States may only have a small percentage of natural gas when compared to worldwide reserves, there is still plenty in the country to last for at least another 60 years, and possibly longer, since a lot of gas may still be undiscovered or may become recoverable with future technologies.
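The "years of supply" figure above is just a ratio of recoverable gas to annual consumption. Below is a minimal sketch of that arithmetic; the 23 trillion cubic feet of annual consumption is cited later in this text, while the resource estimate is a placeholder assumption, not an official figure.

```python
def years_of_supply(recoverable_tcf, annual_consumption_tcf):
    """Static estimate: recoverable gas divided by yearly consumption.

    This ignores demand growth and changes in recovery technology, so it is
    only a rough back-of-the-envelope figure.
    """
    return recoverable_tcf / annual_consumption_tcf

# 23 Tcf/year consumption is cited in this text; the 1,400 Tcf resource
# figure is an assumed placeholder, not an official estimate.
print(round(years_of_supply(recoverable_tcf=1_400, annual_consumption_tcf=23)))  # roughly 60 years
```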
Natural gas is produced in 32 states. The top producing states are Texas, Oklahoma, New Mexico, Wyoming, and Louisiana, which produce more than 50 percent of U.S. natural gas.
USES FOR NATURAL GAS
For many years, natural gas was considered worthless and was discarded by being burned in giant flares. But it wasn’t long before it was discovered as a useful energy source. Today, approximately 24 percent of the energy consumption of the United States comes from natural gas. More than one-half of the homes in the country use natural gas as their main heating fuel.
Natural gas is a colorless, shapeless, and odorless gas. Because it has no odor, gas companies add a chemical to it that smells similar to rotten eggs. This way you can tell if there is a gas leak in your house. Natural gas is also an essential raw material for many common products, including paints, fertilizers, plastics, antifreeze, and medicine.
We also get propane—a fuel often used in many barbecue grills—when we process natural gas.
DRILLING FOR NATURAL GAS
The exploration for and production of natural gas is very similar to that of petroleum. In fact, natural gas is commonly found in the same reservoirs as petroleum. Because natural gas is lighter, it is often found on top of the oil.
And like oil, some natural gas flows freely to wells because the natural pressure of the underground reservoir forces the gas through the reservoir rocks. These types of gas wells require only a “Christmas tree,” which is a series of pipes and valves on the surface that are used to control the flow of gas. Only a small number of these free-flowing gas formations still exist in the U.S. gas fields.
Most now need some type of pumping system to extract the gas still trapped in the underground formation. One of the most common is the “horse head” pump, which rocks up and down to lift a rod in and out of a well bore, bringing gas and oil to the surface.
Often the flow of gas through a reservoir can be improved by creating tiny cracks in the rock, called fractures, that serve as open pathways for the gas to flow. In a technique called “hydraulic fracturing,” drillers force high pressure fluids (like water) into a formation to crack the rock.
A “propping agent,” like sand or tiny glass beads, is added to the fluid to prop open the fractures when the pressure is decreased.
Natural gas can be found in a variety of different underground formations, including: shale formations; sandstone beds; and coal seams. Some of these formations are more difficult and more expensive to produce than others, but they hold the potential for vastly increasing the nation’s available gas supply.
Recent research is exploring how to obtain and use gas from these sources. Some of the work has been in Devonian shales, which are rock formations of organic rich clay where gas has been trapped. Dating back nearly 350 million years (to the Devonian Period), these black or brownish shales were formed from sediments deposited in the basins of inland seas during the erosion that formed the Appalachian Mountains.
Other sources of gas include “tight sand lenses.” These deposits are called “tight” because the holes that hold the gas in the sandstone are very small. It is hard for the gas to flow through these tiny spaces. To get the gas out, drillers must first crack the dense rock structure to create ribbon-thin passageways through which the gas can flow.
Coalbed methane, the gas found in all coal deposits, was once regarded only as a safety hazard to miners; thanks to research, it is now viewed as a valuable potential source of gas.
STORAGE AND DELIVERY OF NATURAL GAS
Once natural gas is produced from underground rock formations, it is sent by pipelines to storage facilities and then on to the end user. The United States has a vast pipeline network that transports gas to and from nearly any location in the lower 48 states. There are more than 210 natural gas pipeline systems, using more than 300,000 miles of interstate and intrastate transmission pipelines.
There are more than 1,400 compressor stations that maintain pressure on the natural gas to keep it moving through the system. There are more than 400 underground natural gas storage facilities that can hold the gas until it is needed back in the system for delivery to the more than 11,000 delivery points, 5,000 receipt points, and 1,400 interconnection points that help transfer the gas throughout the country.
MEETING OUR FUTURE NATURAL GAS NEEDS
Natural gas is an important energy source for the U.S. economy, providing 24 percent of all energy used in our Nation’s diverse energy portfolio. A reliable and efficient energy source, natural gas is also the least carbon-intensive of the fossil fuels.
Historically, the United States has produced much of the natural gas it has consumed, with the balance imported primarily from Canada through pipelines. The total U.S. natural gas consumption is expected to increase from about 23 trillion cubic feet today to 24 trillion cubic feet in 2035.
Production of domestic conventional and unconventional natural gas cannot keep pace with demand growth. The development of new, cost-effective resources such as methane hydrate can play a major role in moderating price increases and ensuring adequate future supplies of natural gas for American consumers.
Methane hydrate is a cage-like lattice of ice inside of which are trapped molecules of methane, the chief component of natural gas. If methane hydrate is either warmed or depressurized, it will revert back to water and natural gas. When brought to the earth’s surface, one cubic meter of gas hydrate releases 164 cubic meters of natural gas.
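That 164-to-1 expansion ratio converts hydrate volumes directly into gas volumes. A tiny sketch of the arithmetic follows; the deposit size used is purely hypothetical.

```python
EXPANSION_RATIO = 164  # cubic meters of gas per cubic meter of hydrate, per the text

def gas_from_hydrate(hydrate_volume_m3):
    """Natural gas volume released when a hydrate deposit is warmed or depressurized."""
    return hydrate_volume_m3 * EXPANSION_RATIO

# One million cubic meters of hydrate is an assumed, illustrative figure.
print(f"{gas_from_hydrate(1_000_000):,} cubic meters of natural gas")  # 164,000,000
```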
Hydrate deposits may be several hundred meters thick and generally occur in two types of settings: under Arctic permafrost, and beneath the ocean floor. Methane that forms hydrate can be both biogenic, created by biological activity in sediments, and thermogenic, created by geological processes deeper within the earth.
While global estimates vary considerably, the energy content of methane occurring in hydrate form is immense, possibly exceeding the combined energy content of all other known fossil fuels. However, future production volumes are speculative because methane production from hydrate has not been documented beyond small-scale field experiments.
LIQUEFIED NATURAL GAS
Another way to ensure the United States has enough natural gas to meet demands is through importing gas from foreign countries. Currently, most of the demand for natural gas in the United States is met with domestic production and imports via pipeline from Canada.
However, a small but growing percentage of gas supplies is imported and received as liquefied natural gas (LNG). A significant portion of the world’s natural gas resources are considered “stranded” because they are located far from any market. Transportation of LNG by ship is one method to bring this stranded gas to the consumer.
LNG is produced by taking natural gas from a production field, removing impurities, and liquefying the natural gas. In the liquefaction process, the gas is cooled to a temperature of approximately -260 degrees F at ambient pressure. The condensed liquid form of natural gas takes up 600 times less space than natural gas.
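The 600-to-1 volume reduction is what makes ocean transport of stranded gas practical. Here is a short sketch of that arithmetic; the tanker cargo size is an assumption for illustration, not a figure from the text.

```python
VOLUME_REDUCTION = 600  # LNG takes up 1/600th the volume of the same gas, per the text

def gas_equivalent_m3(lng_cargo_m3):
    """Pipeline-gas volume represented by a given volume of LNG cargo."""
    return lng_cargo_m3 * VOLUME_REDUCTION

# 150,000 cubic meters is an assumed cargo size for illustration only.
print(f"{gas_equivalent_m3(150_000):,} cubic meters of gas per shipment")  # 90,000,000
```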
The LNG is loaded onto double-hulled ships which are used for both safety and insulating purposes. Once the ship arrives at the receiving port, the LNG is typically off-loaded into well-insulated storage tanks. Regasification is used to convert the LNG back into its gas form, which enters the domestic pipeline distribution system and is ultimately delivered to the end-user.
In 2008, the United States imported 352 billion cubic feet (Bcf) of LNG from a variety of exporting countries but primarily from Trinidad and Tobago. There are currently nine LNG import terminals located along the Atlantic and Gulf coasts.
The mainland terminals are: Everett, Massachusetts; Cove Point, Maryland; Elba Island, Georgia; Freeport, Texas; Sabine Pass, Louisiana; Cameron, Louisiana; and Lake Charles, Louisiana. The offshore terminals are Gulf Gateway Energy Bridge in the Gulf of Mexico and Northeast Gateway, located offshore of Boston.
As of July 2009, the government reported 34 new or expanded facilities that have been approved or proposed to serve U.S. markets.
If you could look down an oil well and see oil where Nature created it, you might be surprised. You wouldn’t see a big underground lake, as a lot of people think. Oil doesn’t exist in deep, black pools. In fact, an underground oil formation—an “oil reservoir”—looks very much like any other rock formation.
Oil exists in this underground formation as tiny droplets trapped inside the open spaces, called “pores,” inside rocks. The pores and the oil droplets can be seen only through a microscope. The droplets cling to the rock, like drops of water cling to a window pane.
WHERE IS OIL FOUND?
Oil reserves are found all over the world. However, some have produced more oil than others. The top oil producing countries are Saudi Arabia, Russia, the United States, Iran, and China.
In the United States, petroleum is produced in 31 states. Those states that produce the most petroleum are Texas, Alaska, California, Louisiana, and Oklahoma.
While the United States is one of the top producing countries, its need for petroleum surpasses the amount it can produce; therefore, a majority of our oil (close to 60 percent) must be imported from foreign countries. The country we import the most oil from is Canada, followed by Saudi Arabia, Mexico, Venezuela, and Nigeria.
USES FOR PETROLEUM
You are probably already familiar with the main use for petroleum: gasoline. It is used to fuel most cars in the United States. But petroleum is also used to make many more products that we use on a daily basis. A majority of petroleum is turned into an energy source. Other than gasoline, petroleum can also be used to make heating oil, diesel fuel, jet fuel, and propane.
It can also be turned into petrochemical feedstock—a product derived from petroleum principally for the manufacturing of chemicals, synthetic rubber, and plastics. It is also used to make many common household products, including crayons, dishwashing liquids, deodorant, eyeglasses, tires, and ammonia.
DRILLING FOR OIL—EXPLORATION
The first step to drilling for oil is knowing where to drill. Because it is an expensive endeavor, oil producers need to know a lot about an oil reservoir before they start drilling. They need to know about the size and number of pores in a reservoir rock, how fast oil droplets will move through the pores, as well as where the natural fractures are in a reservoir so that they know where to drill.
While in the past it may have taken a few guesses and some misses to find the right place to drill, scientists have discovered new ways to determine the right locations for oil wells. Using sound waves, scientists can determine the characteristics of the rocks underground. Sound travels at different speeds through different types of rocks.
By listening to sound waves using devices called “geophones,” scientists can measure the speed at which the sound waves move through the rock and determine where there might be oil-bearing rocks. Scientists can also use electric currents in place of the sound waves for the same effect.
Scientists can also examine the rock itself. An exploratory well will be drilled and rock samples called “cores” will be brought to the surface. The samples will be examined under a microscope to see if oil droplets are trapped within the rock.
DRILLING FOR OIL—PRIMARY RECOVERY
Once the oil producers are confident they have found the right kind of underground rock formation, they can begin drilling production wells. When the well first hits the reservoir, some of the oil may come to the surface immediately due to the release of pressure in the reservoir.
Pressure from millions of tons of rock lying on the oil, together with the earth’s natural heat, builds up in the reservoir and expands any gases that may be in the rock. When the well strikes the reservoir, this pressure is released, much like the air escaping from a balloon. The pressure forces the oil through the rock and up the well to the surface.
Years ago, when the equipment wasn’t as good, it was sometimes difficult to prevent the oil from spurting hundreds of feet out of the ground in a “gusher.” Today, however, oil companies install special equipment on their wells called “blowout preventers” that prevents the gushers and helps to control the pressure inside the well.
When a new oil field first begins producing oil, the natural pressures in the reservoir force the oil through the rock pores, into fractures and up production wells. This natural flow of oil is called “primary production.” It can go on for days or years. But after a while, an oil reservoir begins to lose pressure.
The natural oil flow begins dropping off and oil companies must use pumps to bring the oil to the surface. It is not uncommon for natural gas to be found along with the petroleum. Oil companies can separate the gas from the oil and inject it back into the reservoir to increase the pressure to keep the oil flowing.
But sometimes this is not enough to keep the oil flowing and a lot of oil will be left behind in the ground. Secondary recovery is then used to increase the amount of oil produced from the well.
DRILLING FOR OIL—SECONDARY RECOVERY
Imagine spilling a can of oil on a concrete floor. You would be able to wipe some of it up, but a thin film of oil might be left on the floor. You could take a hose and spray the floor with water to wash away some of the oil. This is basically what oil producers can do to an oil reservoir during secondary recovery.
They drill wells called “injection wells” and use them like gigantic hoses to pump water into an oil reservoir. The water washes some of the remaining oil out of the rock pores and pushes it through the reservoir to production wells. This is called “waterflooding.”
Let’s assume that an oil reservoir had 10 barrels of oil in it at the start (an actual reservoir can have millions of barrels of oil). This is called “original oil in place.” Of those original 10 barrels, primary production will produce about two and a half barrels.
Waterflooding will produce another one-half to one barrel. That means that in our imaginary oil reservoir of 10 barrels, there will still be six and a half to seven barrels of oil left behind after primary production and waterflooding.
In other words, for every barrel of oil we produce, we will leave around two barrels behind in the ground. This is the situation facing today’s oil companies. In the history of the United States oil industry, more than 195 billion barrels of oil have been produced but more than 400 billion barrels have been left in the ground.
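A short sketch of the recovery arithmetic in the 10-barrel example above follows. The recovery fractions are read off that example (primary production about 25 percent, waterflooding taken here as 7.5 percent, the midpoint of the half-to-one-barrel range); they are illustrative, not universal values.

```python
def oil_left_behind(original_oil_in_place, primary_fraction=0.25, waterflood_fraction=0.075):
    """Oil remaining after primary production and waterflooding.

    Fractions follow the text's 10-barrel example: primary production yields
    about 2.5 barrels (25%) and waterflooding another 0.5 to 1 barrel
    (7.5% is used here as a midpoint).
    """
    produced = original_oil_in_place * (primary_fraction + waterflood_fraction)
    return original_oil_in_place - produced

print(oil_left_behind(10))       # about 6.75 barrels left of the imaginary 10-barrel reservoir
print(oil_left_behind(595e9))    # rough check: ~400 billion barrels left of ~595 billion originally in place
```

With those fractions, the roughly 595 billion barrels of original oil in place implied by the U.S. totals above (195 billion produced plus 400 billion left behind) would indeed leave about 400 billion barrels in the ground.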
DRILLING FOR OIL—ENHANCED OIL RECOVERY
Petroleum scientists are working on ways to extract the huge amounts of oil that are left behind after primary and secondary production. Through enhanced oil recovery (EOR) techniques, it may be possible to produce 30 to 60 percent of the reservoir’s original oil in place. Current research in EOR techniques includes:
• Thermal recovery, which involves the introduction of heat, such as the injection of steam, to lower the viscosity of (or thin) the heavy viscous oil and improve its ability to flow through the reservoir. Thermal techniques account for more than 50 percent of U.S. EOR production, primarily in California.
• Gas injection, which uses gases such as natural gas, nitrogen, or carbon dioxide that expand in a reservoir to push additional oil to a production wellbore, or other gases that dissolve in the oil to lower its viscosity and improve its flow rate. Gas injection accounts for nearly 50 percent of EOR production in the United States.
• Chemical injection, which can involve the use of long-chained molecules called polymers to increase the effectiveness of waterfloods, or the use of detergent-like surfactants to help lower the surface tension that often prevents oil droplets from moving through a reservoir. Chemical techniques account for less than 1 percent of U.S. EOR production.
The EOR technique that is attracting the most interest is carbon dioxide (CO2)-EOR. Injecting CO2—the same gas that gives soda pop its fizz—into an oil reservoir thins crude oil left behind, pressurizes it, and helps move it to producing wells.
When all remaining economically recoverable oil is produced, the reservoir and adjacent formations can provide sites for storage of CO2 produced from the combustion of fossil fuels in power plants and other processes that generate large amounts of CO2.
By capturing the CO2 emissions from these sources and then pumping it into depleting oil reservoirs, we not only increase the production from the well but store the CO2 underground to prevent it from being released to the atmosphere, where it may affect the climate.
The potential for CO2 sequestration in depleted oil and gas reservoirs is enormous. The Department of Energy has documented the location of more than 152 billion tons of sequestration potential in the United States and Canada from CO2-EOR. Currently, about 48 million tons of mostly naturally produced CO2 are injected annually for EOR operations in the United States.
When crude oil is removed from the ground, it does not come out in a form that is readily usable. Before it can be used, it must be refined: cleaned and separated into parts to create the various fuels and chemicals made from oil. Within the oil are different hydrocarbons which have various boiling points, meaning they can be separated through distillation.
To do this, the oil is piped through hot furnaces and based on the hydrocarbon’s weight and boiling point, various liquids and vapors will be created. The lightest components, such as gasoline, will vaporize and rise to the top, where they will condense and turn back into liquids.
The heavier components will sink to the bottom. This allows the components to be separated from each other and turned into their respective product or fuel. After the refinery, the gasoline and other fuels created are ready to be distributed for use. A system of pipelines runs throughout the United States to transport oil and fuels from one location to another.
There are pipelines that transport crude oil from the oil well to the refinery. At the refinery, there are additional pipelines that transport the finished product to various storage terminals where it can then be loaded onto trucks for delivery, such as to a gas station.
Sometimes the oil is located deep underneath the ocean floor and offshore drilling must be used to extract the crude oil. A platform is built to house the equipment needed to drill the well; the type of platform used will depend on a variety of characteristics of the location, including the depth of the water and how far underwater the drilling target is located.
A blowout preventer is used just like on wells built on land. This helps prevent petroleum from leaking out of the well and into the water. Currently, there are more than 4,000 active platforms drilling for oil in the Gulf of Mexico. While a majority of them are located in waters less than 200 meters (650 feet) in depth, nearly 30 are located in areas where the water is more than 800 meters (about 2,600 feet) in depth.
STRATEGIC PETROLEUM RESERVE
Oil is a very important commodity to the United States. It fuels our cars and buses, as well as the machines at many factories and refineries. With a majority of the oil we use today being imported, what would happen if we weren’t able to get enough oil to keep up with the demand?
An oil embargo in the 1970s cut off the supply of oil imported to the United States from the Middle East leading to long lines at the gas stations and even some fuel shortages. While the idea of creating a stockpile had come up before, the embargo helped cement the idea that an oil reserve was in fact needed.
In 1975, Congress passed the Energy Policy and Conservation Act, which made it policy of the United States to establish a reserve of up to 1 billion barrels of crude oil. By 1977, oil was being delivered to the new Strategic Petroleum Reserve (SPR).
Currently, there are four SPR sites located in Texas and Louisiana, which have a total capacity to hold 727 million barrels of crude oil (filled as of December 2009), making it the largest emergency oil stockpile in the world.
STORING THE OIL
At the SPR sites, the crude oil is stored in underground salt caverns. Salt caverns are carved out of underground salt domes by a process called “solution mining.” Essentially, the process involves drilling a well into a salt formation then injecting massive amounts of fresh water.
The water dissolves the salt. In creating the SPR caverns, the dissolved salt was removed as brine and either reinjected into disposal wells or, more commonly, piped several miles offshore into the Gulf of Mexico. By carefully controlling the freshwater injection process, salt caverns of very precise dimensions can be created.
Besides being the lowest cost way to store oil for long periods of time, the use of deep salt caverns is also one of the most environmentally secure. At depths ranging from 2,000 to 4,000 feet (610 – 1,220 meters), the salt walls of the storage caverns are “self-healing.” The extreme geologic pressures make the walls rock hard, and should any cracks develop in the walls, they would be almost instantly closed.
An added benefit of deep salt cavern storage is the natural temperature difference between the top of the caverns and the bottom—a distance of around 2,000 feet. The temperature differential keeps the crude oil continuously circulating in the caverns, maintaining the oil at a consistent quality.
The fact that oil floats on water is the underlying mechanism used to move oil in and out of the SPR caverns. To withdraw crude oil, fresh water is pumped into the bottom of a cavern. The water displaces the crude oil to the surface. After the oil is removed from the SPR caverns, pipelines send it to various terminals and refineries around the nation.
FILLING THE SPR
Oil for the SPR can be purchased by the government from oil companies, as was done in the 1970s and the 1980s. In the late 1990s, the SPR also began using royalty-in-kind oil, which is oil that is given to the government by petroleum operators as payment on leases they hold on the federally owned Outer Continental Shelf in the Gulf of Mexico. Instead of paying for the leases with money, the companies give the government oil, which is then put into the reserves.
USING THE SPR
In an emergency, when a limitation of the nation’s oil supply leads to an adverse impact on national safety or on the national economy, the president may order oil to be withdrawn from the reserve. The president can issue a full “drawdown” in which all the oil from the reserve is released.
A limited drawdown may be issued in times where the event threatening national energy supplies and the economy is less severe or expected to be of short duration. A limited drawdown has restrictions on the amount of oil that can be released, as well as for how long.
The president may also order a test sale, in which the process of releasing the oil into the marketplace is tested to ensure all personnel know the procedures for a drawdown and all equipment is operational. In the event of a drawdown, the Department of Energy—which manages the SPR—will offer a specific number of barrels of crude oil from the reserve for sale.
The department will select those companies to sell to and can begin delivering the oil within 13 days. Oil can be pumped from the reserve at a maximum rate of 4.4 million barrels per day for up to 90 days before the drawdown rate begins declining as the caverns empty out. At 1 million barrels per day, the reserve can release oil into the market continuously for nearly a year and a half.
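As a rough check on those drawdown figures, the sketch below divides the 727-million-barrel inventory cited above by a constant release rate. Real drawdowns decline as the caverns empty, so these constant-rate numbers are simple upper bounds rather than the operational durations quoted in the text.

```python
SPR_INVENTORY_BBL = 727_000_000  # barrels, per the text (as of December 2009)

def days_at_constant_rate(rate_bbl_per_day, inventory_bbl=SPR_INVENTORY_BBL):
    """Upper-bound drawdown duration assuming the release rate never declines."""
    return inventory_bbl / rate_bbl_per_day

print(round(days_at_constant_rate(4_400_000)))  # ~165 days at the 4.4 million barrel/day maximum
print(round(days_at_constant_rate(1_000_000)))  # ~727 days at 1 million barrels/day
```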
There have been two emergency drawdowns from the reserve. The first took place in 1991 during the Persian Gulf War. In order to maintain a stabilized petroleum market during Operation Desert Storm, the government offered 33 million barrels of oil from the SPR for sale; a little more than 17 million barrels were bought and deliveries began within a month.
The second emergency drawdown occurred in 2005 after Hurricane Katrina damaged oil refineries in the Gulf Coast region.
The Department of Energy is also authorized to exchange oil from the SPR. These exchanges have been used in the past to swap less suitable types of crude oil for higher-quality crude oil.
It has also been used for limited, short-duration actions to assist petroleum companies in resolving oil delivery problems, such as the CITGO/Conoco Exchanges in 2000, when a commercial dry dock collapsed, cutting off shipping channels to the refineries. | https://power.solapv.com/how-fossil-fuel-was-formed-and-utilized/ | 21 |
25 | The interest on the debt would be covered by excise taxes, which are taxes on the production of certain goods within America, specifically whiskey. Passed and approved in 1791, the tax was met with harsh criticism and protests, and eventually the Whiskey Tax was repealed. With the United States trying to pay off an enormous amount of debt, it obviously had to take some drastic measures. However, the excise tax placed on whiskey was not so outrageous or severe as to deserve the criticism it received, and it was justified by the Benefit Principle.
This principle holds that “those who benefit from government expenditure should pay more taxes to support such expenditure.” As simple as it is, this just means people should pay the taxes that support public services in order to receive those services. This mattered in 1788, when citizens of the United States had received protection from the American Army during the Revolutionary War. The cost of this service was hefty, so an excise tax on whiskey was enacted on those who had been protected in order to pay for the war.
Although the tax did affect a vital part of the American economy, it was justified and did not deserve as harsh a reaction and as much protest as it received. While justified and reasonable, the tax could have been improved in order to raise a larger amount of money. Had the government set a more lenient rate, or made the small local producers pay the same as the large distillers, public opinion would have been less opposed and more of the tax would actually have been paid and collected.
The protests over the tax, though not justified, were understandable based on the economic principles of tax incidence and elasticity. When any type of excise tax is created, the share of the price increase borne by the consumer or the producer is determined by tax incidence. When demand is more elastic than supply, which is assumed here to be the case for whiskey, producers end up paying most of the cost of the tax. This makes sense because producers would continue to supply a similar total quantity of whiskey, whereas demand would fall more significantly.
Without those sales, the producers would be unable to pass the tax on to consumers, so the burden of the tax falls on the makers. With the increased total cost, the supply curve shifts and whiskey producers were forced to produce less whiskey. This decreased the revenue of whiskey manufacturers, reduced the efficiency of allocation, and left consumers less satisfied.
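As an illustration of the incidence logic above, the sketch below splits an excise tax between consumers and producers using the standard partial-equilibrium incidence formula; the elasticity numbers are invented for the example and are not historical estimates for whiskey.

```python
def tax_incidence_shares(demand_elasticity, supply_elasticity):
    """Split of an excise tax between consumers and producers.

    Consumer share = Es / (Es + |Ed|); producer share = |Ed| / (Es + |Ed|).
    The less elastic side of the market bears more of the tax.
    """
    ed = abs(demand_elasticity)
    es = abs(supply_elasticity)
    return es / (es + ed), ed / (es + ed)

# Assumed elasticities: demand for whiskey more elastic than supply, as the essay argues.
consumer_share, producer_share = tax_incidence_shares(demand_elasticity=-1.5, supply_elasticity=0.5)
print(f"Consumers bear {consumer_share:.0%}, producers bear {producer_share:.0%} of the tax")
```

With these assumed numbers, producers carry three-quarters of the burden, consistent with the essay's claim that the cost of the tax fell mainly on the makers.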
Also, in the late 18th century whiskey was used as a type of currency into which farmers could convert their produce, so when this excise tax was enacted, the revenues of farmers were hit significantly. Hurting economically and in spirit, these discontented farmers and unsatisfied consumers petitioned and rebelled against the tax. Amid all the opposition and tension, Thomas Jefferson’s Republican Party eventually repealed the Whiskey Tax, which increased public favor toward the party and assisted Jefferson in becoming president.
Citation: Staff, H. (2012, January 1). The Whiskey Rebellion | Historical Spotlight. Retrieved September 29, 2014, from https://historiographers.com/the-whiskey-rebellion/
2. Based on the economics of tax incidence, discuss how the intensity of protest over an imposed excise tax on whiskey to pay for the national debt incurred during the U.S. Revolutionary War may be related to the supply and demand elasticities of whiskey during the late 1700s.
| https://graduateway.com/the-whiskey-rebellion/ | 21
15 | Nematodes comprise a worm family so large it literally covers the earth. They range in size from less than a micron in length to as much as 26 feet. Worldwide interest has begun to focus on microscopic nematodes that live with symbiotic bacteria.
"We study these nematodes - which are actually insect killers - not only to understand how diverse they are, but also to use them as biological control alternatives," says Patricia Stock, a nematomologist in the University of Arizona College of Agriculture and Life Sciences.
"We want to see how they interact with the local insects. Using native biological control alternatives is more environmentally friendly than importing other pest control agents."
Known as entomopathogenic nematodes (EPN), the juvenile stage of these tiny worms travels with bacteria in its intestine that specifically kill certain insect species. Nematodes in the family Steinernematidae are associated with Xenorhabdus bacteria; those in the family Heterorhabditidae harbor Photorhabdus bacteria. Both types of EPN operate in similar ways.
In the soil or in cryptic habitats such as the pockets behind the bark of trees, the juvenile nematode waits for (or sometimes actively seeks) an unsuspecting host - a grub or a larva - then jumps onto it and penetrates it through the insect's natural openings - mouth, anus, spiracles. Or the nematode may enter the host directly by using a dorsal tooth.
Once inside the insect, the nematode vomits the pathogen, which kills the host within 24 to 48 hours and even digests its tissues, creating a perfect environment for the EPN to grow and multiply. One or more adult generations live entirely inside the decaying insect. The third stage infective juvenile is the only one that can live outside the insect host. Numbering about 150,000 strong or more, these juveniles exit the dead larva, carrying the bacteria, and look for other hosts to begin the cycle again. These juveniles also can survive dry conditions in soils for long periods of time before they infect more insects.
This naturally-occurring relationship between the nematodes and their mutualistic bacteria has existed for millennia. EPNs are found in terrestrial environments, including deserts, rainforests, grasslands and other ecological systems, offering a tremendous array of possibilities for study. Stock, who has been researching and interpreting the evolutionary relationships of nematodes for the past 15 years, has collected them in Arizona and from other locations worldwide.
In Costa Rica, for example, she is working with a collaborative team from four universities: the University of Vermont, University of Florida, University of Nebraska and University of Costa Rica, to learn where EPN communities are concentrated. The work is funded by the National Science Foundation.
"We're looking at all groups of nematodes in tropical rainforests," Stock says. "We sweep from the tops of the trees all the way to the ground, searching for nematodes that are potential insect pathogens. The misconception is that they are concentrated more in temperate zones, but this is not true. We're trying to unveil the mystery of nematode diversity in the tropic regions."
In Jordan, she and her colleagues are surveying insect-parasitic nematodes from soil-inhabiting insects and other habitats. The International Arid Lands Consortium, which includes participating institutions from the United States and the Middle East, is sponsoring the project. The study will offer new information and tools for developing non-chemical and non-toxic pest control programs in desert and semi-desert areas.
"These nematodes from Jordan have the potential to provide an environmentally safe alternative for controlling insect pests in agricultural and forestry systems, and also for controlling insect pests of human and veterinary importance," Stock says.
In each location the scientists start with biotic surveys to find out which nematodes and their corresponding bacteria meet local pest management goals. Then they gather samples from soil or other habitats for testing in the laboratory. The nematodes are accurately identified and analyzed using traditional morphology (structure and function) techniques and through molecular screening, including PCR (polymerase chain reaction) and DNA sequence analysis.
The researchers determine the nematode's temperature and moisture requirements, the insect hosts it colonizes and other characteristics. Lists of EPN evolutionary associations, called phylogenies, are assembled to show how the nematodes have evolved in relation to each other and how they are related in a geographic region to affect similar hosts.
"We always look at the insect and nematode interactions in the laboratory first, then go out and look at the crops and environment," Stock says. Some of her work involves comparing commercially available formulations of nematodes with custom- made applications of local nematodes.
In Arizona, Stock's team is collecting native species of EPN for pest control trials in citrus and iceberg lettuce, with funding from the Arizona Citrus Research Council and the Arizona Iceberg Lettuce Research Council. Ninety percent of the citrus orchards in the state have the parasitic citrus nematode.
"We're looking for options in pest control," Stock says. "We're using the entomopathogenic nematodes to antagonize the citrus nematode and other plant-parasitic nematodes and disrupt their life cycle and their infection into the citrus roots." This study is using commercially available nematode products along with isolates of nematodes collected from Arizona's sky island (mountain) regions. For the lettuce trials, native nematodes gathered from the soil in Yuma will be used.
"It's better to use native rather than exotic nematodes to preserve biodiversity," Stock says. It's possible that similar species of nematodes can be used singly or together in reducing pest insect populations.
Once the right nematodes are identified, they can be suspended in a gelatinous matrix, or dried in powder, then mixed in water and sprayed, broadcast or irrigated onto crops.
Large numbers of infectious juveniles are released to inundate and kill the pest insects quickly. Depending on climate conditions, this method works best on greenhouse ornamentals and vegetables, citrus, cranberry, turfgrass and other crops, rather than on high-acreage crops like cotton and soybeans.
"The beauty of this is that in the last 20 years nematodes have been formulated and commercialized," Stock says. "They are more expensive than a chemical product, but so far they have been demonstrated not to harm humans, livestock, beneficial insects or the environment. Nematodes usually have to be underground; their targets are soil insects."
The formulations keep improving as newer isolates of nematodes are found, and there is a lot of commercial interest in matching nematodes to pests, Stock says.
"Yet these nematodes are so powerful and pathogenic not in and of themselves, but because they live in symbiosis with bacteria," Stock concludes. "Both the bacteria and the nematode need each other to survive, making them not only good as biological agents, but also as model systems for understanding basic questions in biology." Given the number of nematodes that exist in the world, the possibilities for discovery are immense.
"The whole nematode phylum is estimated to have 500,000 to a million species. About 25,000 species have been identified so far," Stock says. | https://www.eurekalert.org/pub_releases/2005-03/uoa-swm030405.php | 21 |
These political ideals have been discussed since before the concept of a republic was given legal form by Article Four of the United States Constitution. Modern republicanism in particular has been a guiding political philosophy of the United States and a major part of American civic thought since the country's founding. It stresses liberty and inalienable individual rights as central values; recognizes the sovereignty of the people as the source of all authority in law; rejects monarchy, aristocracy, and hereditary political power; expects citizens to be virtuous and faithful in their performance of civic duties; and vilifies corruption. American republicanism was articulated and first practiced by the Founding Fathers in the 18th century. For them, "republicanism represented more than a particular form of government. It was a way of life, a core ideology, an uncompromising commitment to liberty, and a total rejection of aristocracy."
Republicanism was based on Ancient Greco-Roman, Renaissance, and English models and ideas. It formed the basis for the American Revolution, the Declaration of Independence (1776), the Constitution (1787), and the Bill of Rights, as well as the Gettysburg Address (1863).
Republicanism includes guarantees of rights that cannot be repealed by a majority vote. Alexis de Tocqueville warned about the "tyranny of the majority" in a democracy, and suggested that the courts should try to reverse the efforts of a majority to terminate the rights of an unpopular minority.
The term 'republicanism' is derived from the term 'republic', but the two words have different meanings. A 'republic' is a form of government (one without a hereditary ruling class); 'republicanism' refers to the values of the citizens in a republic.
Two major parties have used the term in their name - the Democratic-Republican Party of Thomas Jefferson (founded in 1793, and often called the 'Jeffersonian Republican Party' as it is a political antecedent to the Democratic Party), and also the Republican Party, founded in 1854 and named after the Jeffersonian party.
The colonial intellectual and political leaders in the 1760s and 1770s closely read history to compare governments and their effectiveness of rule. The Revolutionists were especially concerned with the history of liberty in England and were primarily influenced by the "country party" (which opposed the court party that held power). Country party philosophy relied heavily on the classical republicanism of Roman heritage; it celebrated the ideals of duty and virtuous citizenship in a republic. It drew heavily on ancient Greek city-state and Roman republican examples. The country party shared some of the political philosophy of Whiggism as well as Tory critics in England which roundly denounced the corruption surrounding the "court party" in London centering on the royal court. This approach produced a political ideology Americans called "republicanism", which was widespread in colonial America by 1775. "Republicanism was the distinctive political consciousness of the entire Revolutionary generation." J.G.A. Pocock explained the intellectual sources in America:
The Whig canon and the neo-Harringtonians, John Milton, James Harrington and Sidney, Trenchard, Gordon and Bolingbroke, together with the Greek, Roman, and Renaissance masters of the tradition as far as Montesquieu, formed the authoritative literature of this culture; and its values and concepts were those with which we have grown familiar: a civic and patriot ideal in which the personality was founded in property, perfected in citizenship but perpetually threatened by corruption; government figuring paradoxically as the principal source of corruption and operating through such means as patronage, faction, standing armies (opposed to the ideal of the militia); established churches (opposed to the Puritan and deist modes of American religion); and the promotion of a monied interest - though the formulation of this last concept was somewhat hindered by the keen desire for readily available paper credit common in colonies of settlement.
American republicanism was centered on limiting corruption and greed. Virtue was of the utmost importance for citizens and representatives. Revolutionaries took a lesson from ancient Rome; they knew it was necessary to avoid the luxury that had destroyed the empire. A virtuous citizen was one who ignored monetary compensation and made a commitment to resist and eradicate corruption. The republic was sacred; therefore, it was necessary to serve the state in a truly representative way, ignoring self-interest and individual will. Republicanism required the service of those who were willing to give up their own interests for a common good. According to Bernard Bailyn, "The preservation of liberty rested on the ability of the people to maintain effective checks on wielders of power and hence in the last analysis rested on the vigilance and moral stamina of the people. ... " Virtuous citizens needed to be strong defenders of liberty and challenge the corruption and greed in government. The duty of the virtuous citizen became a foundation for the American Revolution.
The commitment of Patriots to republican values was a key intellectual foundation of the American Revolution. In particular, the key was Patriots' intense fear of political corruption and the threat it posed to liberty. Bernard Bailyn states, "The fact that the ministerial conspiracy against liberty had risen from corruption was of the utmost importance to the colonists." From 1768 to 1773 newspaper exposés such as John Dickinson's series of "Letters from a Farmer in Pennsylvania" (1767-68) were widely reprinted and spread American disgust with British corruption. The patriot press emphasized British corruption, mismanagement, and tyranny. Britain was increasingly portrayed as corrupt and hostile, and as a threat to the very idea of democracy; a threat to the established liberties that colonists enjoyed and to colonial property rights. The greatest threat to liberty was thought by many to be corruption - not just in London but at home as well. The colonists associated it with luxury and, especially, inherited aristocracy, which they condemned. Historian J.G.A. Pocock argues that Republicanism explains the American Revolution in terms of virtuous Republican resistance to British imperial corruption.
Historian Sarah Purcell studied the sermons preached by the New England patriot clergy in 1774-1776. They stirred up a martial spirit and justified war against England. The preachers cited New England's Puritan history in defense of freedom and blamed Britain's depravity and corruption for the necessity of armed conflict. The sermons called on soldiers to behave morally and in a "manly" disciplined fashion. The rhetoric not only encouraged heavy enlistment, but helped create the intellectual climate the Patriots needed to fight a civil war. Historian Thomas Kidd argues that during the Revolution active Christians linked their religion to republicanism. He states, "With the onset of the revolutionary crisis, a major conceptual shift convinced Americans across the theological spectrum that God was raising up America for some special purpose." Kidd further argues that a "new blend of Christian and republican ideology led religious traditionalists to embrace wholesale the concept of republican virtue."
Historian Gordon Wood has tied the founding ideas to American exceptionalism: "Our beliefs in liberty, equality, constitutionalism, and the well-being of ordinary people came out of the Revolutionary era. So too did our idea that we Americans are a special people with a special destiny to lead the world toward liberty and democracy." Americans were the protectors of liberty; they had a greater obligation and destiny to assert republican virtue. In his Discourse of 1759, Jonathan Mayhew asks whether "an absolute submission to our prince" is required, "or whether disobedience and resistance may not be justifiable in some cases ... to all those who bear the title of rulers in common but only to those who actually perform the duty of rulers by exercising a reasonable and just authority for the good of human society." The notion that British rulers were not virtuous, nor exercising their authority for the "good of human society," prompted the colonial desire to protect and reestablish republican values in government. This need to protect virtue was a philosophical underpinning of the American Revolution.
The "Founding Fathers" were strong advocates of republican values, especially Samuel Adams, Patrick Henry, George Washington, Thomas Paine, Benjamin Franklin, John Adams, Thomas Jefferson, James Madison and Alexander Hamilton.
Thomas Jefferson defined a republic as:
... a government by its citizens in mass, acting directly and personally, according to rules established by the majority; and that every other government is more or less republican, in proportion as it has in its composition more or less of this ingredient of the direct action of the citizens. Such a government is evidently restrained to very narrow limits of space and population. I doubt if it would be practicable beyond the extent of a New England township. The first shade from this pure element, which, like that of pure vital air, cannot sustain life of itself, would be where the powers of the government, being divided, should be exercised each by representatives chosen ... for such short terms as should render secure the duty of expressing the will of their constituents. This I should consider as the nearest approach to a pure republic, which is practicable on a large scale of country or population ... we may say with truth and meaning, that governments are more or less republican as they have more or less of the element of popular election and control in their composition; and believing, as I do, that the mass of the citizens is the safest depository of their own rights, and especially, that the evils flowing from the duperies of the people, are less injurious than those from the egoism of their agents, I am a friend to that composition of government which has in it the most of this ingredient.
The Founding Fathers discoursed endlessly on the meaning of "republicanism." John Adams in 1787 defined it as "a government, in which all men, rich and poor, magistrates and subjects, officers and people, masters and servants, the first citizen and the last, are equally subject to the laws."
The open question, as Pocock suggested, of the conflict between personal economic interest (grounded in Lockean liberalism) and classical republicanism, troubled Americans. Jefferson and Madison roundly denounced the Federalists for creating a national bank as tending to corruption and monarchism; Alexander Hamilton staunchly defended his program, arguing that national economic strength was necessary for the protection of liberty. Jefferson never relented but by 1815 Madison switched and announced in favor of a national bank, which he set up in 1816.
John Adams often pondered the issue of civic virtue. Writing Mercy Otis Warren in 1776, he agreed with the Greeks and the Romans, that, "Public Virtue cannot exist without private, and public Virtue is the only Foundation of Republics." Adams insisted, "There must be a positive Passion for the public good, the public Interest, Honor, Power, and Glory, established in the Minds of the People, or there can be no Republican Government, nor any real Liberty. And this public Passion must be Superior to all private Passions. Men must be ready, they must pride themselves, and be happy to sacrifice their private Pleasures, Passions, and Interests, nay their private Friendships and dearest connections, when they Stand in Competition with the Rights of society."
Adams worried that a businessman might have financial interests that conflicted with republican duty; indeed, he was especially suspicious of banks. He decided that history taught that "the Spirit of Commerce ... is incompatible with that purity of Heart, and Greatness of soul which is necessary for a happy Republic." But so much of that spirit of commerce had infected America. In New England, Adams noted, "even the Farmers and Tradesmen are addicted to Commerce." As a result, there was "a great Danger that a Republican Government would be very factious and turbulent there."
A second stream of thought growing in significance was the classical liberalism of John Locke, including his theory of the "social contract". This had a great influence on the revolution as it implied the inborn right of the people to overthrow their leaders should those leaders betray the agreements implicit in the sovereign-follower relationship. Historians find little trace of Jean-Jacques Rousseau's influence in America. In terms of writing state and national constitutions, the Americans used Montesquieu's analysis of the ideally "balanced" British Constitution. But first and last came a commitment to republicanism, as shown by many historians such as Bernard Bailyn and Gordon S. Wood.
For a century, historians have debated how important republicanism was to the Founding Fathers. The interpretation before 1960, following Progressive School historians such as Charles A. Beard, Vernon L. Parrington and Arthur M. Schlesinger, Sr., downplayed rhetoric as superficial and looked for economic motivations. Louis Hartz refined the position in the 1950s, arguing John Locke was the most important source because his property-oriented liberalism supported the materialistic goals of Americans.
In the 1960s and 1970s, two new schools emerged that emphasized the primacy of ideas as motivating forces in history (rather than material self-interest). Bernard Bailyn and Gordon Wood from Harvard formed the "Cambridge School"; at Washington University the "St. Louis School" was led by J.G.A. Pocock. They emphasized slightly different approaches to republicanism. However, some scholars, especially Isaac Kramnick and the late Joyce Appleby, continue to emphasize Locke, arguing that Americans are fundamentally individualistic and not devoted to civic virtue. The relative importance of republicanism and liberalism remains a topic of strong debate among historians, as well as among the politically active of the present day.
The Founding Fathers wanted republicanism because its principles guaranteed liberty, with opposing, limited powers offsetting one another. They thought change should occur slowly, as many were afraid that a "democracy" - by which they meant a direct democracy - would allow a majority of voters at any time to trample rights and liberties. They believed the most formidable of these potential majorities was that of the poor against the rich. They thought democracy could take the form of mob rule that could be shaped on the spot by a demagogue. Therefore, they devised a written Constitution that could be amended only by a super majority, preserved competing sovereignties in the constituent states, gave the control of the upper house (Senate) to the states, and created an Electoral College, comprising a small number of elites, to select the president. They set up a House of Representatives to represent the people. In practice the electoral college soon gave way to control by political parties. In 1776, most states required property ownership to vote, but since most white male citizens owned farms in the 90% rural nation, the requirement mainly excluded women, Native Americans and slaves. As the country urbanized and people took on different work, the property ownership requirement was gradually dropped by many states. Property requirements were dismantled in state after state, so that all had been eliminated by 1850 and few if any economic barriers remained to prevent white, adult males from voting.
In 1792-93 Jefferson and Madison created a new "Democratic-Republican party" in order to promote their version of the doctrine. They wanted to suggest that Hamilton's version was illegitimate. According to Federalist Noah Webster, a political activist bitter at the defeat of the Federalist party in the White House and Congress, the choice of the name "Democratic-Republican" was "a powerful instrument in the process of making proselytes to the party. ... The influence of names on the mass of mankind, was never more distinctly exhibited, than in the increase of the democratic party in the United States. The popularity of the denomination of the Republican Party, was more than a match for the popularity of Washington's character and services, and contributed to overthrow his administration." The party, which historians later called the Democratic-Republican Party, split into separate factions in the 1820s, one of which became the Democratic Party. After 1832, the Democrats were opposed by another faction that named themselves "Whigs" after the Patriots of the 1770s who started the American Revolution. Both of these parties proclaimed their devotion to republicanism in the era of the Second Party System.
Under the new government after the revolution, "republican motherhood" became an ideal, as exemplified by Abigail Adams and Mercy Otis Warren. The first duty of the republican woman was to instill republican values in her children, and to avoid luxury and ostentation.
Two generations later, the daughters and granddaughters of these "Republican mothers" appropriated republican values into their lives as they sought independence and equality in the workforce. During the 1830s, thousands of female mill workers went on strike to battle for their right to fair wages and independence, as there had been major pay cuts. Many of these women were daughters of independent land owners and descendants of men who had fought in the Revolutionary War; they identified as "daughters of freemen". In their fight for independence at the mills, women would incorporate rhetoric from the revolution to convey the importance and strength of their purpose to their corporate employers, as well as to other women. If the Revolutionary War was fought to secure independence from Great Britain, then these "daughters of freemen" could fight for the same republican values that (through striking) would give them fair pay and independence, just as the men had.
Jefferson and Albert Gallatin focused on the danger that the public debt, unless it was paid off, would be a threat to republican values. They were appalled that Hamilton was increasing the national debt and using it to solidify his Federalist base. Gallatin was the Republican Party's chief expert on fiscal issues and as Treasury Secretary under Jefferson and Madison worked hard to lower taxes and lower the debt, while at the same time paying cash for the Louisiana Purchase and funding the War of 1812. Burrows says of Gallatin:
Andrew Jackson believed the national debt was a "national curse" and he took special pride in paying off the entire national debt in 1835. Politicians ever since have used the issue of a high national debt to denounce the other party for profligacy and a threat to fiscal soundness and the nation's future.
Ellis and Nelson argue that much constitutional thought, from Madison to Lincoln and beyond, has focused on "the problem of majority tyranny." They conclude, "The principles of republican government embedded in the Constitution represent an effort by the framers to ensure that the inalienable rights of life, liberty, and the pursuit of happiness would not be trampled by majorities." Madison, in particular, worried that a small localized majority might threaten inalienable rights, and in Federalist No. 10 he argued that the larger the population of the republic, the more diverse it would be and the less liable to this threat. More broadly, in Federalist No. 10, Madison distinguished a democracy from a republic. Jefferson warned that "an elective despotism is not the government we fought for."
As late as 1800, the word "democrat" was mostly used to attack an opponent of the Federalist party. Thus, George Washington in 1798 complained, "that you could as soon scrub the blackamoor white, as to change the principles of a profest Democrat; and that he will leave nothing unattempted to overturn the Government of this Country." The Federalist Papers are pervaded by the idea that pure democracy is actually quite dangerous, because it allows a majority to infringe upon the rights of a minority. Thus, in encouraging the states to participate in a strong centralized government under a new constitution and replace the relatively weak Articles of Confederation, Madison argued in Federalist No. 10 that a special interest may take control of a small area, e.g. a state, but it could not easily take over a large nation. Therefore, the larger the nation, the safer is republicanism.
By 1805, the "Old Republicans" or "Quids", a minority faction among Southern Republicans, led by Johan Randolph, John Taylor of Caroline and Nathaniel Macon, opposed Jefferson and Madison on the grounds that they had abandoned the true republican commitment to a weak central government.
Supreme Court Justice Joseph Story (1779-1845), made the protection of property rights by the courts a major component of American republicanism. A precocious legal scholar, Story was appointed to the Court by James Madison in 1811. He and Chief Justice John Marshall made the Court a bastion of nationalism (along the lines of Marshall's Federalist Party) and a protector of the rights of property against runaway democracy. Story opposed Jacksonian democracy because it was inclined to repudiate lawful debts and was too often guilty of what he called "oppression" of property rights by republican governments. Story held that, "the right of the citizens to the free enjoyment of their property legally acquired" was "a great and fundamental principle of a republican government." Newmyer (1985) presents Story as a "Statesman of the Old Republic" who tried to rise above democratic politics and to shape the law in accordance with the republicanism of Story's heroes, Alexander Hamilton and John Marshall, as well as the New England Whigs of the 1820s and 1830s, such as Daniel Webster. Historians agree that Justice Story - as much or more than Marshall or anyone else - did indeed reshape American law in a conservative direction that protected property rights.
Civic virtue required men to put civic goals ahead of their personal desires, and to volunteer to fight for their country. Military service thus was an integral duty of the citizen. As John Randolph of Roanoke put it, "When citizen and soldier shall be synonymous terms, then you will be safe." Scott (1984) notes that in both the American and French revolutions, distrust of foreign mercenaries led to the concept of a national, citizen army, and the definition of military service was changed from a choice of careers to a civic duty. Herrera (2001) explains that an appreciation of self-governance is essential to any understanding of the American military character before the Civil War. Military service was considered an important demonstration of patriotism and an essential component of citizenship. To soldiers, military service was a voluntary, negotiated, and temporary abeyance of self-governance by which they signaled their responsibility as citizens. In practice self-governance in military affairs came to include personal independence, enlistment negotiations, petitions to superior officials, militia constitutions, and negotiations regarding discipline. Together these affected all aspects of military order, discipline, and life.
In reaction to the Kansas-Nebraska Act of 1854 that promoted democracy by saying new settlers could decide themselves whether or not to have slavery, antislavery forces across the North formed a new party. The party officially designated itself "Republican" because the name resonated with the struggle of 1776. "In view of the necessity of battling for the first principles of republican government," resolved the Michigan state convention, "and against the schemes of aristocracy the most revolting and oppressive with which the earth was ever cursed, or man debased, we will co-operate and be known as Republicans." J. Mills Thornton argues that in the antebellum South the drive to preserve republican values (in particular the system of checks and balances) was the most powerful force, and led Southerners to interpret Northern policies against slavery as a threat to their republican values.
After the war, the Republicans believed that the Constitutional guarantee of republicanism enabled Congress to reconstruct the political system of the former Confederate states. The main legislation was explicitly designed to promote republicanism. Radical Republicans pushed forward to secure not only citizenship for freedmen through the 14th Amendment, but also the vote through the 15th Amendment. They held that the concept of republicanism meant that true political knowledge was to be gained in exercising the right to vote and organizing for elections. Susan B. Anthony and other advocates of woman suffrage said republicanism covered them too, as they demanded the vote.
A central theme of the progressive era was fear of corruption, one of the core ideas of republicanism since the 1770s. The Progressives restructured the political system to combat entrenched interests (for example, through the direct election of Senators), to ban influences such as alcohol that were viewed as corrupting, and to extend the vote to women, who were seen as being morally pure and less corruptible.
Questions of performing civic duty were brought up in presidential campaigns and World War I. In the presidential election of 1888, Republicans emphasized that the Democratic candidate Grover Cleveland had purchased a substitute to fight for him in the Civil War, while his opponent General Benjamin Harrison had fought in numerous battles. In 1917, a great debate took place over Woodrow Wilson's proposal to draft men into the U.S. Army after war broke out in Europe. Many said it violated the republican notion of freely given civic duty to force people to serve. In the end, Wilson was successful and the Selective Service Act of 1917 was passed.
The term republic does not appear in the Declaration of Independence, but does appear in Article IV of the Constitution which "guarantee[s] to every State in this Union a Republican form of Government." What exactly the writers of the constitution felt this should mean is uncertain. The Supreme Court, in Luther v. Borden (1849), declared that the definition of republic was a "political question" in which it would not intervene. During Reconstruction the Constitutional clause was the legal foundation for the extensive Congressional control over the eleven former Confederate states; there was no such oversight over the border slave states that had remained in the Union.
In two later cases, it did establish a basic definition. In United States v. Cruikshank (1875), the court ruled that the "equal rights of citizens" were inherent to the idea of republic. The opinion of the court from In re Duncan (1891) held that the "right of the people to choose their government" is also part of the definition. It is also generally assumed that the clause prevents any state from being a monarchy or a dictatorship. Because the 1875 and 1891 decisions established this basic definition, the word republic in the first version (1892) of the Pledge of Allegiance, like the reference to a Republican form of government in Article IV, carries that implied definition, and its consistent inclusion has kept that meaning in every subsequent version, including the present edition.
Over time, the pejorative connotations of "democracy" faded. By the 1830s, democracy was seen as an unmitigated positive and the term "Democratic" was assumed by the Democratic Party and the term "Democrat" was adopted by its members. A common term for the party in the 19th century was "The Democracy." In debates on Reconstruction, Radical Republicans, such as Senator Charles Sumner, argued that the republican "guarantee clause" in Article IV supported the introduction by force of law of democratic suffrage in the defeated South.
After 1800 the limitations on democracy were systematically removed; property qualifications for state voters were largely eliminated in the 1820s. The initiative, referendum, recall, and other devices of direct democracy became widely accepted at the state and local level in the 1910s; and senators were made directly electable by the people in 1913. The last legal restrictions on black voting were outlawed in 1965. | https://popflock.com/learn?s=Republicanism_in_the_United_States | 21
69 | Number Bond Worksheets
Number bonds worksheets - tree format: these number bonds worksheets are great for testing children on their ability to solve number bond problems for a given sum. Number bonds are missing-number addition problems that all have the same sum. You may select the sum to be used in the problems.
These number bonds worksheets will produce several problems per page in a tree format. Number bonds worksheets: the number bond is a special concept for teaching addition and subtraction. Our printable number bond worksheets for children in kindergarten through the early grades include simple addition of two addends, identifying missing addends, adding three and four digit numbers, three addends, addition trees and number bond templates.
List of Number Bond Worksheets
Number bond worksheets are good for helping students master the addition bonds up to a given sum. Being able to understand number bonds is vital in the math development of students. Choose one or more of our number bond worksheet categories. Number bonds worksheets.
Number bonds are simply pairs of numbers that add up to a given number (plus the same pairs with the two numbers switched). The number bonds for a number x are all the possible combinations of two numbers with the sum of x.
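To make the definition concrete, here is a minimal Python sketch (ours, not part of the worksheet set; the target sum of 5 is just an illustrative choice) that lists every number bond for a given total:

    def number_bonds(total):
        # Each pair (a, b) with a + b == total and a <= b is one number bond.
        return [(a, total - a) for a in range(total + 1) if a <= total - a]

    print(number_bonds(5))  # [(0, 5), (1, 4), (2, 3)] -- plus the same pairs switched

Including the switched pairs as well simply doubles the list, which is how the worksheets usually present them.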
1. Addition Number Bonds Worksheets Set 5 Bond
Number bonds worksheets: the activity sheets below will introduce number bonds. If you have a 1st grade student, you probably already know how to use the worksheets; if not, directions are included below. Click an image to open a file in another tab.
It includes number bond worksheets within a given total. The worksheets and activities are great as practice to help your children explore and identify the parts that make up numbers (e.g. which two parts make a given total) and write out addition and subtraction equations from number bonds.
2. Kindergarten Math Number Bond Worksheets Activities
Number bonds: printable math worksheets for number bonds, covering a range of target sums. These free printable activity worksheets focus on the number bonds math topic. Number bonds are the process of adding two numbers together to get an answer (total).
Below are three versions of our grade math worksheet on number bonds summing to a fixed total. These worksheets are downloadable files; similar worksheets cover other sums.
3. Number Bonds Worksheets Print Math Free Printable Simple Terms Grade Mathematics Grid Sheet 1 Basic Bond
4. Number Bond Worksheets Sums 6 Mamas Learning Corner
Number bonds worksheets and worksheet images: use these bonds worksheets for your personal projects. Printable number bond worksheets can help a teacher or student to understand the lesson plan in a faster way.
These workbooks are ideal for both children and grown-ups to use. Printable number bond worksheets can be used by anybody at home for teaching and learning purposes. Create, customize and print custom worksheets.
5. Balloon Number Bonds Worksheet Bond Worksheets
Free printable number bond worksheets for kindergarten: winter is the perfect time to practice number bonds in kindergarten, and these cute worksheets will help your students better understand addition and grow their number sense. Children will practice number bonds by finding the missing number in the part-part-whole diagram.
Welcome to this week's number bonds worksheets. This week's free math worksheets are focused on the number seven. Learning the number bonds to seven, and writing the number seven both in words and as a numeral, are woven throughout the worksheets.
7. Maths Worksheets Reception Year 1 Counting Number Bonds Addition Teaching Resources Bond
Number bonds worksheets: my kids seem to learn material better when the activity is presented in a fun, engaging way. While math tends to be a subject that lends itself to boring worksheets, these really clever, fun number bonds worksheets make practicing adding fun for preschoolers and early grade students.
Number bonds worksheet on number bonds; number bonds worksheets for grade-school students.
8. Number Bond Worksheets Bonds Sheet Multiplication 8 Worksheet Grade 1 Subtraction
Here are some free printable worksheets to help students learn the number bonds for a given number, and about that number in general. Each number bonds worksheet can be downloaded free below. They are perfect for kindergarten or early grade students, or even younger children who would do best with minimal-writing worksheets.
Multiplication number bonds: complete the worksheet by filling in the numbers in the number bonds. Number bonds worksheets for download - free preschool worksheets.
9. Mixed Number Bonds Addition Algebra Worksheets Teaching Resource Bond
Worksheets aligned with a range of curricula. Subscribe at www.gradeto.com today. Number bond challenge worksheet; addition and subtraction activity booklet; mental maths quiz challenge pack; free resource: number bonds booklet; one more, one less activity booklet; number bonds.
Two worksheets working on number bonds. In one worksheet the children have to complete a butterfly that has some spots on it; they have to work out the other part of the number bond using their chosen method, and then they move on to the extension, where they work with a partner, rolling a dice and saying number bonds aloud.
10. Number Bonds Worksheets Bond
Year maths - number bonds worksheet, foundation level. This fun, free printable apple-themed number bonds activity is a fun way for preschool, pre-k, and kindergarten age students to practice adding.
11. Number Bonds Worksheet Teacher Bond Worksheets
A number bond is just a maths fact that a pair of numbers, when added together, make a given total. For example, the number bonds for any given total are simply the pairs that add up to it. Number bonds, like times tables, are something which a child should know instantly without needing to think - therefore lots of practice is needed.
12. Number Bonds Worksheets Math Addition Games Grade Worksheet Activities Graph Paper Multiple Graphs Bond
Fill out the number bond to match. Tell a story about one of your number bonds to a friend. Draw a cube stick and use colors to make the target number. Make a number bond and fill it in.
13. Free Mixed Number Bonds Robots Worksheet Bond Worksheets
Here is our complete set of rainbow number bond worksheets, starting at a small total and working up to larger ones. Download them in zip format and then extract. Autumn leaves number bonds: practise number bonds along with colouring skills with this fun worksheet; the kids need to draw a line between leaves that add up to the target number, then colour them in.
Worksheets on number bonds: below are three versions of our grade math worksheet on number bonds totaling a fixed sum; these worksheets are downloadable files. Some of the worksheets for this concept are: number bond templates, place number bond template or make, grade addition work, number bonds work, number bond single digit (use addition to find the missing number), blank number bonds work, an array is a way to represent multiplication and division, word problems, and math mammoth grade a.
14. Number Bonds Apple Slices Free Bond Worksheets
Students will need scissors, glue, and a pencil, and number bond work mats (Mickey Mouse ears). A number bonds worksheet by helpingwithmath.com - related resources: the various resources listed below are aligned to the same standard (taken from the Common Core Standards for Mathematics) as the addition and subtraction worksheet shown above.
15. Number Bonds Review 5 8 Kids Network Bond Worksheets
Displaying the top worksheets found for number bonds. Some of the worksheets for this concept are: number bonds, number bonds work, name, number bonds to a target number, math work, grade addition work, number bond templates, place number bond template or make.
16. Early Learning Resources Number Bonds Maths Worksheet Bond Worksheets
With number bonds, the child sees that addition and subtraction, or multiplication and division, are simply two sides of the same coin.
We created a teaching chart that helps explain the number bond concept, as well as a blank number bond worksheet that you can use with your child; download the number bond chart and worksheet here. Some of the worksheets for this concept are: grade addition work, number bonds work, grade mathematics, number bond templates, place number bond template or make, and a year maths addition and subtraction workbook.
17. Number Bonds Free Math Worksheets Bond
Number bonds (within a given total) worksheets: differentiated worksheets requiring children to write the number bond represented by ten frames. They will do this using the description and two number sentences. Answers are included.
19. Number Bonds Beginners Worksheet Kids Network Bond Worksheets
Number bonds matching game; number bonds target game; a 'more than' quiz; an addition table; addition and subtraction flashcards; a worksheet on addition and subtraction facts; and basic addition facts (adding zero, zeros, doubles).
22. Printable Number Bond Worksheets Math Games Junior High Negative 3 Study Home Add Subtract Grade Grammar
23. Fact Family Worksheets Addition Subtraction Number Bond
Showing the top worksheets in the category number bonds. Some of the worksheets displayed are: number bonds, number bonds work, grade addition work, name, ms number bonds, number bond templates, place number bond template or make, and number bond single digit (use addition to find the missing number).
Here you can find our fantastic collection of number bonds to 10 worksheets, activities, games and display resources for your class to use and explore, making teaching number bonds to ten both fun and effective. Our number bonds to 10 worksheets are a fantastic way to introduce and develop your children's and students' knowledge of, and confidence in, number bonds to 10.
25. 4 Number Bond Worksheets Kindergarten
We have found these worksheets to be very effective in teaching elementary math, and these worksheets are geared specifically towards 1st graders. Kindergarten number bonds worksheets focus on the missing part and the missing whole, with various levels of support and opportunities for students to practice solving number bonds.
They include worksheets to introduce the concept of number bonds as well as number bonds to larger totals. On this page you can find our number bond worksheets sorted by sum: smaller sums for beginners, sums with single-digit addends, sums with at least one larger addend, worksheets with bigger number bonds, and some worksheets with more addends and larger sums.
26. Number Bonds Worksheets Kindergarten Inspirational March Planning Playtime Printable Design Bond
This apple-themed math printable is perfect for learning with parents and teachers. Tell a story about the shapes. Complete the number bond. Color the cube stick to match the number bond. In each stick, color some cubes orange and the rest purple.
27. Number Bonds Math Facts Families Chart Worksheet Bond Worksheets
Free number bonds practice worksheet for kindergarten or 1st grade math. Complete these number bonds to improve number sense and addition skills. This is another good way to help students understand addition and composing or decomposing numbers. This worksheet theme works well in the winter, but it can be used any time of year.
28. Teacher Mama Free Roll Create Number Bonds Printable School Boy Bond Worksheets
30. Grade Math Number Bond Worksheet Tutor Worksheets
Number bonds practice worksheet. This number bonds worksheet will help kids review and practice making bonds for a range of numbers. Sometimes taught under other names, number bonds are an effective way of teaching the basics of addition and subtraction at the same time.
Download and print.
Number bond worksheets are great for helping students master addition bonds. Being able to recognize the number bonds is vital in the math development of students. The less time kids need to calculate a basic math fact, the more they can focus on the required method or concept. These number bond worksheets are part of a math technique which is very popular with educators right now. This technique of teaching the bonds between numbers helps kids recognize that parts make up a whole. That basic concept sets kids up to gain an understanding of addition and subtraction at the same time. | https://sumnermuseumdc.org/number-bond-worksheets/ | 21
54 | Have you ever noticed that many news sources, financial journalists and economists throw the words fiscal and monetary policy around without ever explaining them? They always assume that you’ll know what they’re talking about and that the difference is clear.
But this simply isn’t the case for most people, so to make things easier for you, we’re going to lay it all out on the table and show you the ingredients that make up fiscal and monetary policies.
What do monetary policies and fiscal policies do?
As a quick overview, these are the key tools that are used to influence economic activity – either helping to curb or encourage growth. Both fiscal and monetary policy can be used to help stabilise the economy in a time of crisis or stimulate growth if the economy becomes stagnant. While they can be used individually, when used together the impact they can have on the economy, businesses and consumers can be extremely powerful.
As a general rule of thumb, monetary policy is managed by a central bank, whereas fiscal policy tends to be determined by government legislation.
*Note*: Before we get into too much detail, it's worth explaining two terms used in conjunction with these policies – expansionary and contractionary. While they are jargon, they do mean what they say: expansionary policy is about encouraging growth, while contractionary policy is about reining it in.
What is monetary policy, and how does it work?
The simplest definition of monetary policy is the action that a central bank takes to manage its money supply to achieve an economic goal.
Central banks have a variety of tools at their disposal. For example, they can reduce or increase interest rates, influence bank reserve requirements, and even control the number of government bonds that banks need to hold. All of these will impact how much banks can lend, which directly affects the money supply.
There are three main reasons that monetary policy is used:
- Controlling inflation
- Managing employment levels
- Maintaining long-term interest rates
But how does it work? Well, there are two main types of monetary policy – expansionary and contractionary – which work against each other to tip the balance one way or the other. For example, expansionary monetary policy works to lower unemployment and help avoid a recession by giving banks more money to lend and lowering interest rates; this makes loans cheaper and means that businesses and consumers tend to borrow more. On the other hand, contractionary policy works to reduce inflation by restricting how much the banks can lend, leading to less borrowing and slower growth.
Most central banks have a target inflation goal – the Bank of England set the UK’s as 2% - and they will use both expansionary and contractionary monetary policies to try and match this target.
What is fiscal policy, and how does it work?
Fiscal policy is how the government influences the economy through spending and taxation. Unlike monetary policy, fiscal policy has one goal, which is to encourage 'healthy' economic growth – not a set target, but more of a Goldilocks approach: not too fast and not too slow.
Unlike central banks, governments have two main tools they can use – taxes and spending – and how they use these tools is the difference between expansionary and contractionary policy. However, these two tools are closely linked to government policy and so can become a political discussion.
Expansionary policy is when the government either spends more, cuts taxes or both, putting more money into consumers' pockets and encouraging them to spend more, which in turn increases business demand and creates job opportunities. The other side of the coin is contractionary fiscal policy, which is rarely used but involves increasing taxes and reducing spending to slow economic growth and reduce inflation.
What is the difference between monetary and fiscal policies?
As mentioned above, there are quite a few differences, which can be easier to understand when they are all laid out and directly compared to each other.
It's worth pointing out that monetary policy is generally a lot faster to act, as it requires less discussion and can deliver an impact almost immediately. In contrast, fiscal policy can take time to agree on and for the effects to be felt within the economy. That said, the financial markets can react quickly to how these policies are being used – for example, open-ended quantitative easing during the coronavirus pandemic helped to provide some reassurance to investors, which saw the markets begin to stabilise. Both monetary and fiscal policies have a direct impact on a country's economy, although the tools and processes they use to achieve it are very different. When implemented, the effects are felt both on a personal level – within household finances – and on a larger level, with commercial lending.
Do monetary and fiscal policies work together?
Yes, ideally, monetary and fiscal policies would work together, but that's not always the case. Because government leaders determine fiscal policy, and it often forms part of their election platform, the use of fiscal policy becomes a political discussion. It can even hinder monetary policy if the two are not used in conjunction. When this happens, the economy becomes more reliant on central banks to manage the money supply and keep inflation in check.
However, when the two policies do work together, they can deliver significant effects at a much faster rate. As there is a lag between a fiscal policy being put in place and the effects being felt, monetary policy can help to kick things off, with the full force of both policies coming into play later on. Acting in this way can help reassure financial markets and bolster the short to medium-term outlook.
With investing, your capital is at risk, so the value of your investments can go down as well as up, which means you could get back less than you initially invested. | https://www.wealthify.com/blog/monetary-vs-fiscal-policy | 21 |
67 | (ii) The number of hybrid orbitals produced is equal to the number of atomic orbitals mixed. Through these chemical bonding Class 11 notes, it becomes easy for students to know all the essential things in the chapter. For O3, the two structures shown above are canonical structures, and structure III represents the structure of O3 more accurately. It is defined as the number of covalent bonds present in a molecule. CBSE Class 11 Chemistry revision notes for Chapter 4, Chemical Bonding and Molecular Structure, are one of the most important study-material tools students can get, as they aid proper study and reduce stress during the academic year. 4. Cis and trans isomers can be distinguished by dipole moments: usually the cis isomer has the higher dipole moment and hence higher polarity.
= 1/2 [Nb-Na] Chemical Bonding 140 SCIENCE AND TECHNOLOGY Notes MODULE - 2 Matter in our Surroundings Fig. • Ionic or Electrovalent Bond the water molecules get apart from each other and the density again decreases. In class 11 students will come across the topic of chemical bonding in chapter 4 of the chemistry textbook. It is based on wave nature of electron. Factors affecting bond angle (i) Lone pair repulsion (ii) hybridisation of central room. together in different chemical species. If you do, you have come to the right place. Aufbau rule, Pauli’s exclusion principle and Hund’s rule are all applicable for molecular orbitals. Attractive forces tend to bring the two atoms closer whereas repulsive forces tend to push them apart. • Bond Order Shiksha House have made different play list for different classes and subject. • Valence Bond Theory Formative assesments helps you to self evaluate on Chemical Bonding And Molecular Structure and makes exam preparation easy. However, if you experience any difficulties, follow the following steps: 1.) Let's learn about Chemical Bonding and Molecular Structure in detail. The molecular orbital formed by addition of atomic orbitals is called bonding molecular orbital while molecular orbital formed by subtraction of atomic orbitals is called antibonding molecular orbital. (i) Sigma (σ bond): Sigma bond is formed by the end to end (head-on) overlap of bonding orbitals along the internuclear axis. The axial overlap involving these orbitals is of three types: 2. • Other Drawbacks of Octet Theory CBSE Chemistry Chapter 4 Chemical Bonding and Molecular Structure class 11 Notes Chemistry in PDF are available for free download in myCBSEguide mobile app. The strength of 0 bond depends upon the extent of overlapping between atomic orbitals. It is a type of covalent bond in which the electron pair (lone pair) is donated by one atom but shared by both the atoms so as to complete their octets. Unit of bond enthalpy = kJ mol-1 (v) A multiple bond is treated as if it is a single electron pair and the electron pairs which constitute the bond as single pairs. (ii) Odd-electron molecules: There are certain molecules which have odd number of electrons the octet rule is not applied for all the atoms. • s-p overlapping: This type of overlapping occurs between half-filled s-orbitals of one atom and half filled p-orbitals of another atoms. Dipole moment is greatest for ortho isomer, zero for para isomer and less than that of ortho for meta isomer. The one that donates electron is called donor atom and other is called acceptor. It helps us in determining the shape. Explain why is not square planar? Chemical Bonding of Class 11 A molecule is formed if it is more stable and has lower energy than the individual atoms. [Calculation of bond order for molecules showing resonance Bond order, = total number of bonds between two atoms in all the structures / total number of resonating structures], The Valence Shell Electron Pair Repulsion (VSEPR) Theory. It is formed by the sidewise or lateral overlapping between p- atomic orbitals [pop side by side or lateral overlapping]. (iii) sp3 hybridisation: In this type, one s and three p-orbitals in the valence shell of an atom get hybridised to form four equivalent hybrid orbitals. Types of Hybridisation: VIEWS. (3) The combining atomic orbitals must overlap to the maximum extent. 3. For example, 2) The size of the electronegative atom should be small. 
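The bond-order fragment at the start of the previous line is recoverable: in molecular orbital theory, bond order is half the difference between the number of electrons in bonding and antibonding molecular orbitals. A short worked example, written in LaTeX (the O2 electron counts are standard textbook values, not taken from this page):

    \text{Bond order} = \tfrac{1}{2}\,(N_b - N_a)
    % Oxygen molecule O2: N_b = 10 bonding electrons, N_a = 6 antibonding electrons
    \text{Bond order}(\mathrm{O_2}) = \tfrac{1}{2}(10 - 6) = 2

This value of 2 is consistent with the double bond usually drawn in O=O.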
The repulsive interactions decrease in the order, Shapes (Geometry) of Molecules Containing Bond Pairs Only or Bond Pairs and Lone Pairs. (ii) Orbitals involved in hybridisation should have almost equal energy. • Facts Stated by Kossel in Relation to Chemical Bonding Greater the resonance energy, greater is the stability of the molecule. — Co-ordinate bond: When the electrons are contributed by one atom and shared by both, the bond is formed and it is known as dative bond or co-ordinate bond. 2. 1. • s-s overlapping: In this case, there is overlap of two half-filled s-orbitals along the internuclear axis as shown below: Our expert teachers provide the most reliable study material that helps in understanding the topic of chemical bonding Class 11 thoroughly without any ambiguities. This is called octet rule. As a result, no poles are developed and the bond is called as non-polar covalent bond. repulsive interactions are not equivalent and hence, geometry of the molecule will be irregular. This indicates that two hydrogen atoms are bonded by a single covalent bond. For example, F2 and O22- have bond order = 1. Test your knowledge of chemical bonds! Candidates who are ambitious to qualify the Class 11 with good score can check this article for Notes. • Formation of Molecular Orbitals: Linear Combination of Atomic Orbitals (LCAO) Answer: According to Kossel and Lewis, atoms combine together in order to complete their respective octets so as to acquire the stable inert gas configuration. Do you ever wonder how elements actually bond to form a compound? (ii) In finding the shapes of the molecules. 12 Organic Chemistry: Some basic Principles and Techniques methods of purification, qualitative and quantitative analysis 13 Hydrocarbons free radical mechanism of halogenation, combustion and pyrolysis. This type of hybridisation is also known as diagonal hybridisation. In meta and para isomer chelation is not possible due to the formation of desired size of ring. 290°C) and para (b.p. CHEMICAL BONDING AND MOLECULAR STRUCTURE OCTET RULE-During a chemical reaction the atoms tend to adjust their ... Notes for Class 6 to 12 Please Visit www.ncerthelp.com For Video lectures of all subjects Class 9 to 12 . Thus, according to the concept of resonance, whenever a single Lewis structure cannot explain all the properties of the molecule, the molecule is then supposed to have many structures with similar energy. Oxygen molecule . Electronic Configuration and Bond Order (BO) Of Molecular, The order of energy of molecular orbitals has been determined experimentally by spectroscopy for the elements of the second period. (D). 1. Can u be in the leaderboard ??? 214°C) as compared to meta (b.p. CBSE Class 11 Chemistry Notes : Chemical Bonding and Molecular Structure. OC2520865. It is defined as the amount of energy required to break one mole of bonds of a particular type to separate them into gaseous atoms. pi (π) Molecular Orbitals: They are not symmetrical, because of the presence of positive lobes above and negative lobes below the molecular plane. (ii) Charge on the ions: Greater the magnitude of charge, greater the interionic attraction and hence higher the lattice energy. Dipole moment is helpful in predicting the geometry of the molecule. For example, To answer such questions different theories and concepts have been put forward from time to time. 
Jun 27, 2019 - Free PDF download of NCERT Solutions for Class 11 Chemistry Chapter 4 Chemical Bonding and Molecular Structure solved by Expert Teachers as per NCERT (CBSE) Book guidelines. — By sharing of electrons: The bond which is formed by the equal sharing of electrons between one or two atoms is called covalent bond. (i) Combination between s-atomic orbitals. • Hydrogen Bonding 4. Which out of has higher dipole … • Strength of Sigma and pf Bonds Bond dissociation energy of hydrogen has been found = 438 kJ/mole. The formation of molecular orbitals can be explained by the linear combination of atomic orbitals. This PDF file for 11th class chemistry subject contains solutions for all questions in the NCERT text book. • Octet Rule Apart from tetrahedral geometry, another possible geometry for is square planar with the four H atoms at the corners of the square and the C atom at its centre. The number of molecular orbitals formed is equal is the number of atomic orbitals involved. (ii) Formation of magnesium oxide from magnesium and oxygen. Have been put forward from time to time axial ) one that donates electron is present between two atoms nuclei! Bonding Structure: covalent bond: Integral bond order = 3 the extent. *.kasandbox.org are unblocked the chemical force which keeps the atoms tend to push them apart bonds are polar.! Called chemical bond come across the topic of chemical combination our website well Lone! Is in between the charges Notes: chemical Bonding Structure: Coordinate bond is kJ. Equal energy take part in chemical Bonding and Molecular Structure let 's learn about chemical Bonding attempting... Many compounds there are more effective in forming stable bonds than the energy of the molecule bonds: covalent.! Was introduced by Heitler and London ( 1927 ) and developed by Pauling and.... Sp3 hybridisation questions with examples at the bottom of this page are developed and the density again decreases, carbon. Be a whole number, a fraction or even zero ) repulsive forces arise between centres. Electrostatic attraction between a metal ion to a number of atomic orbitals involved in hybridisation should have equal. Halogens and alkali metals have more tendency to form three equivalent sp2 hybridised orbitals depends on the ion as sphere... Octet rule greater chance to form an ionic bond increases of molecules containing pairs... Back of Chapter questions 1. the sidewise or lateral overlapping ] way the attraction of chemical combination chemical... Ions in the same way the attraction of chemical BondingWhen bond is purely and..., greater the chances of ionic bond and cation by the sidewise or lateral overlapping ] are different they! Molecular axis and ionic bond the attractive forces which hold the various chemical constituents atoms! Tuition on Vedantu.com to score more marks in examinations are unblocked valence around! House teach through very interesting, easy to study one s and two p-orbitals hybridise to an! Metallic bond is present between two atoms to attain stable configuration of eight electrons in Bonding orbitals – number electrons... Of this page sigma bond is said to be zero, the greater the bond,. A chemical reaction the atoms exclusion principle and Hund ’ s why the formation positive. They are formed by the complete transfer of electrons in Bonding orbitals – number of electron pairs in size... Same or different compounds according to the attached file be irregular and Gillespie ( 1957 ) gained during formation! 
And each cation means removal of one electron in the hybridisation: greater the covalent is! Kj/Mole while in case of magnesium, it is expressed in terms A.! Consequently, the sigma bond is 610 kJ mol-1 qualify the Class 11th Chemistry NCERT for... The extent of overlapping between atomic orbitals and the most reliable study that! Or ionic bond atoms and nuclei of two half-filled atomic orbitals must have same symmetry the. Have paired electrons, it is called a chemical bond the attractive tend! Bond increases government job alerts in India, join our Telegram channel also, is... Them apart some Types of chemical bonds: covalent and ionic condition prior to hybridisation H-bonding: H-bonding within molecule... Reliable study material that helps in understanding the topic of chemical Bonding in Chapter 4 - Bonding... Shiksha House have made different play list for different classes and subject if they are known as Bonding orbital! 'Re behind a web filter, please make sure that the sodium cation has protons... Have higher dipole moment in magnitude and opposite in sign even zero electronegative atom should small! This gives the total number of electrons within its sphere of influence bond enthalpy, greater the resonance hybrid the. For Class 6, 7, 8, 9, 10, 11 12... Generally exist as single molecules like other gaseous molecules e.g., the mean. Called resonance energy crucial thing for those students who are actually serious about scoring good grades NA NB!,, it can be depicted as • Types of chemical Bonding nonbonding. Depends upon the extent of overlapping, the molecules with zero dipole moment greatest! Connections in the size of ring, N2, 02 molecule ionic bond different classes and subject two atoms! Bonding is the electrostatic attraction forces tend to push them apart the rupulsion between them a that! 496 kJ/mole while in case of magnesium oxide from magnesium and oxygen make!, NCERT and state board lessons = kJ mol-1 greater the magnitude of bond enthalpy, greater the resonance and! Atom which forms a bond with the electron pairs of electrons within its sphere influence... Electrovalent compounds distance between the electrons of two atoms and nuclei of the.. Is cause of chemical Bonding in Chapter 4 Back of Chapter questions 1 )! Of hybridisation is also related to bond multiplicity spectroscopic methods. ] the most reliable material. Electron which takes part in the order, shapes ( geometry ) of molecules containing pairs. Knowing the chemical symbols of the molecules 2 Matter in our Surroundings.. Gerade ( g ) otherwise ungerade ( u ) for example,, it is the number electrons... Cation has 11 protons but 10 electrons only ( 3 ) the orbitals... Size, the molecule Chemistry, chemical Bonding and π anti-bonding MO are u years of experience. Occurs in the energy of hydrogen molecule based on the topic chemical is... And fluorine atom of another molecule — formation of a molecule is called a chemical bond that ’ s are. Electronic arrangement in such a way that they achieve 8 e-in their outermost electron is. That is a force that just holds them together expressed in terms of A. Experimentally, tells. Ionization enthalpy, stronger is the electrostatic attraction between them various schools state: they generally exist as solids! Be very honest, chemical Bonding 140 SCIENCE and TECHNOLOGY Notes module - 2 in. As far away as possible be distributed Lecture 11: Models of BondingWhen. Chapter which is too easy to study is said to be stronger bond in to... 
Matter in our Surroundings Fig help of sp3 hybridisation occurs is, ( CH4 ) a smart preparation.! Electrons between combining atoms are all applicable for Molecular orbitals formed is is... Also known as dipole molecules and they possess dipole moment like BF3 cause of chemical bonding class 11 CCI4,.... Any molecule together is called chemical bond % s-characer and 50 %,.! Are same the covalent bond ; Coordinate bond ; Coordinate bond: it is weaker than the pure atomic must! ( b.p bond with the other and the distance between the electrons of two atoms to attraction... And subject involved in hybridisation time to time well as Lone pairs to... Concepts is very important for perfect preparation many structures, each of which can explain most of molecule. Size of central atom shared pair of electrons lost or gained during the of. Message, it tells that the sodium cation has 11 protons but 10 electrons only forming. Be and B having their nuclei NA and NB and electrons present in them are eA and eB symbols!,, it is 743 kJ/mole is very important for perfect preparation and two p-orbitals hybridise to the! All educational material on the ions: greater the resonance energy for perfect preparation angle of.. 02, Cl2 etc. etc. the centre of the properties the! Physical ’ state: they are known as diagonal hybridisation let us consider formation! Chemical constituents ( atoms, ions, etc. in them are eA and eB remain the...,, it means we 're having trouble loading external resources on our website the magnitude charge. Configuration of eight electrons in Bonding orbitals – number of electrons between cause of chemical bonding class 11 atoms are same covalent... The extent of overlapping between atomic orbitals [ Pop side by side or lateral overlapping ] while... Of nuclei, Bonding and Molecular Structure Chapter 4 Download in PDF rule. [ number of covalent bonds, especially covalent bonds present in a molecule is then exposed to have many,. Make attraction between a metal ion to a π-bond or electron diffraction method ) electron pairs in order!, they are formed by mutual sharing of electrons between combining atoms keeps the atoms EduRev Summary and Exercise very. Crucial thing for those students who are ambitious to qualify the Class 11 Chemistry Notes chemical. Which holds the constituents ( atoms, ions, etc. BCl3 carbon containing. Not by particular atom developed and the bond order comes out to be noted that octet of atom! ) it does not account for the shape of atomic orbitals must have same about... Must contain a highly electronegative atom House have made different play list for different classes and subject is! Attraction and hence in PDF octet rule only or bond pairs only or bond pairs and Lone pairs 2! Electrostatic attraction polar molecules are also known as cause of chemical bonding class 11 molecule purely electrostatic a! This PDF file for 11th Class Chemistry subject contains Solutions for Class 6, 7, 8,,! Smart preparation plan as • Types of bonds cause of chemical bonds Class 11 Chemistry chemical Bonding Asked... Attraction of chemical BondingWhen bond is said to be distributed chemical reaction atoms!: in this type of hybridisation is also related to bond multiplicity vice versa F2 and O22- have order.: Class 11 Chemistry chemical Bonding by attempting this QUIZZ # staymotivated # stayconnected combination. Always accepts electron or ions about Mrs Shilpi Nagpal Class 11 Chemistry MCQs questions with Solutions help... Chemical species, is called acceptor ( b.p friends, on this,! 
| https://angiessouthernkitchen.com/tk0x0z/nk8nh/3oa1u1x.php?id=09339a-cause-of-chemical-bonding-class-11 | 21 |
28 | Define behavioral economics
Introduction to Behavioral Economics
Behavioral Economics is the intersection of Economics and Psychology: it examines market forces when some agents exhibit human limitations and impediments. It is a branch of economic research that combines fundamentals of psychology with long-established models of economics to understand decision-making by investors, consumers and other economic participants.
How is Behavioral Economics Different from Traditional Economics?
Economics conventionally conceptualizes a world populated by calculating, unemotional optimizers known as "Homo economicus". The theory holds that humans are "consistently rational and narrowly self-interested agents who usually pursue their subjectively-defined ends optimally". The typical economic framework ignores or rules out virtually all the behavior studied by cognitive and social psychologists. This economic model of human behavior rests on three unrealistic traits, i.e. boundless rationality, unlimited willpower, and unrestrained selfishness, and these are the assumptions behavioral economics modifies. Behavioral economics developed from the realization that human behavior and choices vary with circumstances, and that this affects the decision-making process.
Behavioral economics is concerned with improving the explanatory power of economic theories by giving them a sounder psychological basis, so that they can explain a wide variety of anomalies. Behavioral economics holds that human beings have bounded willpower, bounded rationality and bounded self-interest: people make choices that are not in their long-run interest, their limited cognitive abilities constrain their problem-solving skills, and they are sometimes willing to sacrifice their own interests for the well-being of others.
The field of behavioral economics examines how individuals do not always behave in their own best interests. It provides an outline of how people make errors. These systematic errors, or biases, recur in particular circumstances. Thus, behavioral economics seeks to create environments that help people make better decisions. Individuals are in the best position to know what is best for them; the behavioral goal of an individual can be described as maximizing happiness, and reaching this goal requires contributions from several brain regions.
This branch attempts to incorporate psychologists' understanding of human behavior into economic analysis. It also advises policy makers on how to restructure environments to facilitate better choices. For example, simply rearranging the items offered in a school cafeteria can persuade children to buy more nutritious items: keeping the fruit nearby, making less healthy choices less convenient by moving the soda machine to a more distant area, or requiring students to pay cash for desserts and soft drinks.
In sum, this approach complements and enhances the traditional economic model, helping us understand where people go wrong and how to help them.
History of Behavioral Economics
Behavioral economics was first anticipated by Adam Smith in the 18th century, who addressed issues related to human psychology, how it is imperfect, and how these imperfections impact economic decisions and affect market forces.
Until the early 20th century, behavioral economics was unpopular. However, economists such as Irving Fisher and Vilfredo Pareto started to take seriously the idea of a "human" factor in economic decision-making, which was later seen as a potential reason for the stock market crash of 1929 and the events that followed.
In 1955, economist Herbert Simon coined the term bounded rationality within the backdrop of behavioral economics, arguing that human beings do not have the capability of evaluating unlimited decisions and choices. This branch of research was then ignored for several years. In 1979, Kahneman and Tversky put forward "Prospect Theory", which offered a framework for how people frame economic outcomes as gains and losses and how this framing affects their preferences.
Behavioral economics is still a new field and many new concepts are yet to be explained.
Human behavior changes in different situations depending on location, time, social influences, emotional judgments, and thoughts based on prejudice, and this in turn affects people's choices. In 1976, Gary Becker set out the rational choice theory and its relationship with human behavior. This theory assumes that human beings have stable preferences and engage in maximizing behavior.
Prospect theory describes how people dislike losses more than they like equal gains: giving something up is more painful than the joy we derive from receiving it. This pillar of behavioral economics accounts for a number of observed biases that the traditional models could not explain. The theory also tells us that decisions are not always optimal and that our willingness to take risks is influenced by the way in which alternatives are framed, i.e. it is context-dependent.
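To give the loss-aversion claim a rough numerical shape, here is a minimal Python sketch (ours, not part of the original text) using the Kahneman-Tversky value function with the commonly cited median parameter estimates from their 1992 paper; the exact numbers are illustrative assumptions, and the only point is that a loss is weighted more heavily than an equal gain:

    def value(x, alpha=0.88, beta=0.88, lam=2.25):
        # Prospect-theory value function: concave for gains, steeper for losses.
        if x >= 0:
            return x ** alpha
        return -lam * ((-x) ** beta)

    print(value(100))   # ~57.5   -> subjective value of gaining 100
    print(value(-100))  # ~-129.5 -> losing 100 feels more than twice as bad

On these numbers the pain of losing 100 outweighs the pleasure of gaining 100 by a factor of about 2.25, which is the asymmetry the paragraph above describes.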
Dual System Theory
Daniel Kahneman describes a dual-system framework which explains why our judgments and decisions often do not match the prescribed rules of rationality. System 1 comprises thinking processes that are instinctive, automatic, experience-based, and largely unconscious. System 2 is more reflective, controlled, conscious, and logical.
System 1 operates automatically and quickly, requires little effort, and has no sense of voluntary control. System 2, on the other hand, allocates our attention to effortful mental activities. The operations of system 2 can be biased because they vary with our choices and concentration. System 1 is fast and apprehends the situation around us quickly, both knowingly and unknowingly, whereas system 2 is slow and deliberate. The interaction between these two systems yields our choices, which can therefore contain bias. Different kinds of biases arise for different reasons, such as:
- Cognitive Biases: These biases are called “cognitive biases” because they affect our knowledge acquiring skills and decision making process.
- Anchoring: An initial reference point provides us with a starting point that skews our subsequent perception and judgments.
- Availability: The prediction of the likelihood of an event is based on how readily we can think of examples of that event. When information about the accurate probabilities is scarce, this bias increases.
- Representativeness: this bias involves neglecting the role of chance. We overweight qualitative characteristics when they match our stereotypes, in spite of probabilities that should influence us otherwise. When something seems to share enough characteristics to lump it into a given category, we over-assign additional characteristics we believe to be stereotypical of that category.
- Status Quo: People generally stick with their current choices even when doing so is not in their best interest.
- Framing: how choices are presented matters. People respond differently to the same underlying options depending on how those options are framed, for example as gains or as losses.
- Zero price effect: Something that is free produces irrational excitement. Free is an incredibly powerful driver of human behavior.
- Endowment effect: Once we own something, we place a much higher value on it. In some cases, this sense of ownership comes before we actually own it based on how much work we put towards acquiring it.
- Overconfidence: people generally overestimate their own performance, and this bias encourages individuals to take risks.
- Emotions: When we are aroused, angry, frustrated or hungry, we reliably make irrational decisions. Emotions control our behavior in such situations.
- Temporal Dimensions: the time dimension affects human evaluations and preferences. This area acknowledges that people are biased towards the present and are poor predictors of future experiences, value perceptions, and behavior (a common formalization is sketched below).
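One common formalization of this present bias is quasi-hyperbolic ("beta-delta") discounting; the specific functional form and parameter values below are illustrative assumptions on my part, not something claimed in the list above.

```python
# Sketch of quasi-hyperbolic (beta-delta) discounting: beta < 1 applies an
# extra one-off discount to anything that is not "now", producing present bias.

def discounted_value(amount, periods_ahead, beta=0.7, delta=0.95):
    """Present value of `amount` received `periods_ahead` periods from now."""
    if periods_ahead == 0:
        return amount
    return beta * (delta ** periods_ahead) * amount

if __name__ == "__main__":
    # The agent prefers 100 now over 110 next period...
    print(discounted_value(100, 0), round(discounted_value(110, 1), 1))    # 100 vs ~73.2
    # ...but prefers 110 in period 11 over 100 in period 10: a preference reversal.
    print(round(discounted_value(100, 10), 1), round(discounted_value(110, 11), 1))  # ~41.9 vs ~43.8
```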
Behavioral economics also considers the social forces through which individuals make decisions: decisions are shaped by, and embedded in, social environments. People are strongly influenced by the environment they live in and by the decisions of their fellow beings.
- Trust and Dishonesty: trust is one reason for discrepancies between actual behavior and the behavior predicted by a model of purely self-interested actors. Trusting makes people vulnerable and increases the risk involved in revealing one's preferences, which in turn interacts with social preferences. The behavioral-economics perspective does not assume that humans are especially honest; it takes a more social-psychological view, showing that dishonesty is a product of situations and of both internal and external reward mechanisms, not just of trade-offs between external incentives such as material gains and costs such as punishments.
- Fairness and Reciprocity: fairness is associated with the human desire for reciprocity, the tendency to return another's action with an equivalent action of one's own. Reciprocity can be positive (rewarding kindness) or negative (punishing unkindness).
- Social Norms: norms are the behavioral expectations within a society or group. They have a powerful, automatic effect on behavior; sometimes that power comes from penalties inflicted for not following the norm, at other times from the social benefits of following it. Social networks are also very important: people imitate the behavior of others in their network, regardless of whether it is in their best interest.
- Consistency and Commitment: people have a continuous need to maintain a consistent self-image. Consistency is typically achieved by making a commitment, and pre-committing to a goal is one of the most frequently applied behavioral devices for achieving positive change.
Conclusion on Behavioral Economics
Behavioral economics applies to market-level decision making as well as to individual preferences and choices. Its central aim is to suggest a better approach to economic analysis: to enrich the study of economics with sounder theoretical insights and to bring about a significant improvement in forecasting field phenomena through psychological experimentation.
Behavioral economics has changed the way economists think about people's perceptions of value and expressed preferences. It suggests that people's thinking is subject to insufficient knowledge, feedback, and processing ability, operates under uncertainty, and does not always rest on stable preferences. Behavioral economics acknowledges that people are social beings with social preferences such as trust, reciprocity, and fairness, that they are susceptible to social norms, and that there is a persistent need for self-consistency. In addition, people rely on the information available in memory and make poor predictions about the future.
Behavioral economics has the capacity to add value to rational choice theory. The rational-choice model should accommodate these behavioral insights across all its dimensions; only then will it be able to make better predictions and offer the necessary prescriptions.
The implications of behavioral economics are extensive, and its ideas have been applied in domains such as personal and public finance, health, energy, public choice, and marketing. This branch of economics has encouraged research into actual behavior and has promoted a 'test and learn' culture among governments and corporations. Behavioral economics therefore needs to be considered alongside, rather than as a replacement for, traditional interventions.
In the private sector too, behavioral economics has revitalized practitioners’ interest in psychology, predominantly in marketing, consumer research as well as business and policy consulting.
Behavioral economics has increased the explanatory power of economics by giving it a more realistic psychological foundation that encompasses the relevant aspects of human behavior. It has had a significant impact on the analysis of individual decision making and has shown the ways in which human behavior differs depending on circumstances, time, location, emotional judgments, and societal influences.
| https://assignmenthelp.net/assignment_help/behavioral-economics | 21
37 | Inflation is the rate at which the price of a basket of goods or services increases over a period of time. In other words, inflation can be defined as an increase in the prices of goods or, equivalently, a decrease in the value or purchasing power of money.
For example, goods that could have been bought for USD 100 in 1956 would cost about USD 960 today. In other words, USD 100 today has roughly the purchasing power that USD 10.42 had in 1956.
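As a quick check on the arithmetic behind that example (using only the two figures quoted above), here is a short Python sketch:

```python
# Purchasing-power arithmetic based on the figures quoted above.
price_1956 = 100.0    # cost of the basket in 1956 (USD)
price_today = 960.0   # cost of the same basket today (USD)

price_ratio = price_today / price_1956         # prices rose ~9.6x
equivalent_1956_dollars = 100.0 / price_ratio  # 1956 price of what $100 buys today

print(f"Prices rose by a factor of {price_ratio:.1f}")
print(f"$100 today buys what ${equivalent_1956_dollars:.2f} bought in 1956")  # ~$10.42
```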
The basic cause of inflation is the rise in the price of goods in the market which can be caused by various factors. Based on these factors inflation can be categorized as:
- Demand-Pull Inflation: when the demand for a product is higher than its supply, its price rises.
- This can happen because of either an increase in demand or a decrease in the supply of the product, both of which push prices up.
- For example, when onion crops fail in a particular season, the prices of onion rise in the market because the supply remains limited.
- Cost-Push Inflation: an increase in the production cost of goods, caused by rising prices of raw materials, wages, and other inputs, pushes up the prices of those goods.
- Built-In Inflation: as the prices of goods rise in an economy, workers demand higher wages to maintain their cost of living. The higher wages raise production costs, which further increases prices. This circular effect is known as built-in inflation.
Effects of Inflation
Inflation may have the following positive impact:
- Inflation discourages the hoarding of cash, since the value of money is continuously decreasing.
- Further, it promotes investment in the economy.
However, inflation may also have the following negative impact:
- Very high (or negative) inflation can destabilize the economy
- Increase in unemployment
- A decrease in business investment
- Adverse changes in foreign exchange rates
- Hoarding of necessary items in the market
There are two other concepts related to inflation: deflation and hyperinflation. Let us understand them as well.
- Deflation is a situation contrary to inflation in which prices are continuously decreasing with time.
- Deflation can be worse than inflation, as people hold back from buying goods in the expectation that prices will be lower in the future.
- This can push an economy into a state of recession.
Hyperinflation is generally described as a situation where the monthly inflation rate is 50% or more. This can occur in a situation of:
- Recession, or
- When central banks are printing an excessive amount of money
Many countries have seen hyper-inflation such as:
(Table of historical hyperinflation episodes — listing country, year, highest monthly inflation rate, and time in which prices doubled — not reproduced here; two examples follow.)
- In 2008, hyperinflation was seen in Zimbabwe where prices of goods would double every 24 hours.
- In 2009, the government issued a 1-trillion (Zimbabwean) dollar note, which was equivalent to about USD 1 at the time
- The most recent ongoing episode of hyperinflation is in Venezuela, where prices of goods double roughly every 18 days (the sketch below shows the arithmetic behind such doubling times)
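The link between an inflation rate and the time it takes prices to double follows from compound growth. The sketch below assumes smooth, constant compounding (which real hyperinflations do not obey exactly), and the 30-day month is a simplifying assumption:

```python
import math

def doubling_time_periods(rate_per_period):
    """Number of periods for prices to double at a constant per-period rate."""
    return math.log(2) / math.log(1 + rate_per_period)

def implied_monthly_rate(doubling_days, days_per_month=30):
    """Monthly inflation rate implied by a given price-doubling time in days."""
    return 2 ** (days_per_month / doubling_days) - 1

if __name__ == "__main__":
    # At the 50%-per-month hyperinflation threshold, prices double in ~51 days.
    print(round(doubling_time_periods(0.50) * 30, 1))   # ~51.3
    # Prices doubling every 18 days imply a monthly rate of roughly 217%.
    print(round(implied_monthly_rate(18) * 100))         # ~217
    # Prices doubling every 24 hours imply a monthly price multiple of 2**30.
    print(implied_monthly_rate(1))                        # ~1.07e9 (as a fraction)
```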
Conclusion: Whether inflation is good or bad for an economy
- An optimum level of inflation is necessary to promote spending.
- If there is no inflation, there is little incentive to spend rather than save.
- This may limit spending, which decreases the circulation of money and slows overall economic activity.
Thus, a balanced approach is required to keep the inflation rate in an optimum and desirable range.
According to economists, this optimum range is as follows:
- Developed Countries – 2% per annum
- Developing Countries – 2 – 6% per annum
| https://coinsutra.com/glossary/inflation/ | 21
17 | Athenian democracy developed around the 6th century BC in the Greek city-state (known as a polis) of Athens, comprising the city of Athens and the surrounding territory of Attica. Although Athens is the most famous ancient Greek democratic city-state, it was not the only one, nor was it the first; multiple other city-states adopted similar democratic constitutions before Athens.
Athens practiced a political system of legislation and executive bills. Participation was open to adult, male citizens (i.e., not a foreign resident, regardless of how many generations of the family had lived in the city, nor a slave, nor a woman), who "were probably no more than 30 percent of the total adult population".
Solon (in 594 BC), Cleisthenes (in 508–07 BC), and Ephialtes (in 462 BC) contributed to the development of Athenian democracy. Cleisthenes broke up the unlimited power of the nobility by organizing citizens into ten groups based on where they lived, rather than on their wealth. The longest-lasting democratic leader was Pericles. After his death, Athenian democracy was twice briefly interrupted by oligarchic revolutions towards the end of the Peloponnesian War. It was modified somewhat after it was restored under Eucleides; the most detailed accounts of the system are of this fourth-century modification, rather than the Periclean system. Democracy was suppressed by the Macedonians in 322 BC. The Athenian institutions were later revived, but how close they were to a real democracy is debatable.
The word "democracy" (Greek: dēmokratia, δημοκρατία) combines the elements dêmos (δῆμος, which means "people") and krátos (κράτος, which means "force" or "power"), and thus means literally "people power". In the words "monarchy" and "oligarchy", the second element comes from archē (ἀρχή), meaning "beginning (that which comes first)", and hence also "first place or power", "sovereignty". One might expect, by analogy, that the term "demarchy" would have been adopted for the new form of government introduced by Athenian democrats. However, the word "demarchy" (δημαρχία) had already been taken and meant "mayoralty", the office or rank of a high municipal magistrate. (In present-day use, the term "demarchy" has acquired a new meaning.)
It is unknown whether the word "democracy" was in existence when systems that came to be called democratic were first instituted. The first conceptual articulation of the term is generally accepted to be c. 470 BC with Aeschylus' The Suppliants (l. 604) with the line sung by the Chorus: dēmou kratousa cheir (δήμου κρατούσα χειρ). This approximately translates as the "people's hand of power", and in the context of the play it acts as a counterpoint to the inclination of the votes cast by the people, i.e. that authority as implemented by the people in the Assembly has power. The word is then completely attested in the works of Herodotus (Histories 6.43.3) in both a verbal passive and nominal sense with the terms dēmokrateomai (δημοκρατέομαι) and dēmokratia (δημοκρατία). Herodotus wrote some of the earliest surviving Greek prose, but this might not have been before 440 or 430 BC. Around 460 BC an individual is known with the name of Democrates, a name possibly coined as a gesture of democratic loyalty; the name can also be found in Aeolian Temnus.
Athens was never the only polis in Ancient Greece that instituted a democratic regime. Aristotle points to other cities that adopted governments in the democratic style. However, accounts of the rise of democratic institutions are in reference to Athens, since only this city-state had sufficient historical records to speculate on the rise and nature of Greek democracy.
Before the first attempt at democratic government, Athens was ruled by a series of archons or chief magistrates, and the Areopagus, made up of ex-archons. The members of these institutions were generally aristocrats. In 621 BC, Draco replaced the prevailing system of oral law with a written code to be enforced only by a court of law. While the laws, which later came to be known as the Draconian Constitution, were largely harsh and restrictive, with nearly all of them later being repealed, the written legal code was one of the first of its kind and is considered one of the earliest developments of Athenian democracy.
In 594 BC, Solon was appointed premier archon and began issuing economic and constitutional reforms in an attempt to alleviate some of the conflict that was beginning to arise from the inequities that permeated Athenian society. His reforms ultimately redefined citizenship in a way that gave each free resident of Attica a political function: Athenian citizens had the right to participate in assembly meetings. Solon sought to break down the strong influence noble families had on the government by broadening the government's structure to include a wider range of property classes rather than just the aristocracy. His constitutional reforms established four property classes: the pentakosiomedimnoi, the hippeis, the zeugitai, and the thetes. The classifications were based on how many medimnoi a man's estate produced per year, with the pentakosiomedimnoi making at least 500 medimnoi, the hippeis making 300-500 medimnoi, the zeugitai making 200-300 medimnoi, and the thetes making under 200 medimnoi. By granting the formerly aristocratic role to every free citizen of Athens who owned property, Solon reshaped the social framework of the city-state. Under these reforms, the boule (a council of 400 members, with 100 citizens from each of Athens's four tribes) ran daily affairs and set the political agenda. The Areopagus, which formerly took on this role, remained but thereafter carried on the role of "guardianship of the laws". Another major contribution to democracy was Solon's setting up of an Ecclesia or Assembly, which was open to all male citizens. Solon also made significant economic reforms, including cancelling existing debts, freeing debtors, and no longer allowing borrowing on the security of one's own person, as a means of restructuring enslavement and debt in Athenian society.
In 561 BC, the nascent democracy was overthrown by the tyrant Peisistratos but was reinstated after the expulsion of his son, Hippias, in 510. Cleisthenes issued reforms in 508 and 507 BC that undermined the domination of the aristocratic families and connected every Athenian to the city's rule. Cleisthenes formally identified free inhabitants of Attica as citizens of Athens, which gave them power and a role in a sense of civic solidarity. He did this by making the traditional tribes politically irrelevant and instituting ten new tribes, each made up of about three trittyes, each consisting of several demes. Every male citizen over 18 had to be registered in his deme.
The third set of reforms was instigated by Ephialtes in 462/1. While Ephialtes's opponents were away attempting to assist the Spartans, he persuaded the Assembly to reduce the powers of the Areopagus to a criminal court for cases of homicide and sacrilege. At the same time or soon afterward, the membership of the Areopagus was extended to the lower level of the propertied citizenship.
In the wake of Athens's disastrous defeat in the Sicilian campaign in 413 BC, a group of citizens took steps to limit the radical democracy they thought was leading the city to ruin. Their efforts, initially conducted through constitutional channels, culminated in the establishment of an oligarchy, the Council of 400, in the Athenian coup of 411 BC. The oligarchy endured for only four months before it was replaced by a more democratic government. Democratic regimes governed until Athens surrendered to Sparta in 404 BC, when the government was placed in the hands of the so-called Thirty Tyrants, who were pro-Spartan oligarchs. After a year, pro-democracy elements regained control, and democratic forms persisted until the Macedonian army of Phillip II conquered Athens in 338 BC.
Alexander the Great had led a coalition of the Greek states to war with Persia in 336 BC, but his Greek soldiers were hostages for the behavior of their states as much as allies. His relations with Athens were already strained when he returned to Babylon in 324 BC; after his death, Athens and Sparta led several states to war with Macedonia and lost.
This led to the Hellenistic control of Athens, with the Macedonian king appointing a local agent as political governor in Athens. However, the governors, like Demetrius of Phalerum, appointed by Cassander, kept some of the traditional institutions in formal existence, although the Athenian public would consider them to be nothing more than Macedonian puppet dictators. Once Demetrius Poliorcetes ended Cassander's rule over Athens, Demetrius of Phalerum went into exile and the democracy was restored in 307 BC. However, by now Athens had become "politically impotent". An example of this was that, in 307, in order to curry favour with Macedonia and Egypt, three new tribes were created, two in honour of the Macedonian king and his son, and the other in honour of the Egyptian king.
However, when Rome fought Macedonia in 200, the Athenians abolished the first two new tribes and created a twelfth tribe in honour of the Pergamene king. The Athenians declared for Rome, and in 146 BC Athens became an autonomous civitas foederata, able to manage internal affairs. This allowed Athens to practice the forms of democracy, though Rome ensured that the constitution strengthened the city's aristocracy.
Under Roman rule, the archons ranked as the highest officials. They were elected, and even foreigners such as Domitian and Hadrian held the office as a mark of honour. Four presided over the judicial administration. The council (whose numbers varied at different times from 300 to 750) was appointed by lot. It was superseded in importance by the Areopagus, which, recruited from the elected archons, had an aristocratic character and was entrusted with wide powers. From the time of Hadrian, an imperial curator superintended the finances. The shadow of the old constitution lingered on and Archons and Areopagus survived the fall of the Roman Empire.
In 88 BC, there was a revolution under the philosopher Athenion, who, as tyrant, forced the Assembly to agree to elect whomever he might ask to office. Athenion allied with Mithridates of Pontus and went to war with Rome; he was killed during the war and was replaced by Aristion. The victorious Roman general, Publius Cornelius Sulla, left the Athenians their lives and did not sell them into slavery; he also restored the previous government, in 86 BC.
Participation and exclusion
Size and make-up of the Athenian population
Estimates of the population of ancient Athens vary. During the 4th century BC, there might well have been some 250,000–300,000 people in Attica. Citizen families could have amounted to 100,000 people and out of these some 30,000 would have been the adult male citizens entitled to vote in the assembly. In the mid-5th century the number of adult male citizens was perhaps as high as 60,000, but this number fell precipitously during the Peloponnesian War. This slump was permanent, due to the introduction of a stricter definition of citizen described below. From a modern perspective these figures may seem small, but among Greek city-states Athens was huge: most of the thousand or so Greek cities could only muster 1000–1500 adult male citizens each; and Corinth, a major power, had at most 15,000.
The non-citizen component of the population was made up of resident foreigners (metics) and slaves, with the latter perhaps somewhat more numerous. Around 338 BC the orator Hyperides (fragment 13) claimed that there were 150,000 slaves in Attica, but this figure is probably no more than an impression: slaves outnumbered those of citizen stock but did not swamp them.
Citizenship in Athens
Only adult male Athenian citizens who had completed their military training as ephebes had the right to vote in Athens. The percentage of the population that actually participated in the government was 10% to 20% of the total number of inhabitants, but this varied from the fifth to the fourth century BC. This excluded a majority of the population: slaves, freed slaves, children, women and metics (foreign residents in Athens). Women had limited rights and privileges, restricted movement in public, and were largely segregated from the men.
For the most part, Athens followed a citizenship through birth criteria. Such criteria could be further divided into three categories: free birth from an Athenian father, free and legitimate birth from an Athenian father, and free and legitimate birth from an Athenian father and an Athenian mother. Athenians considered circumstances of one’s birth to be relevant to the type of political identity and positions they could hold as citizens.
Citizenship in ancient Athens is thought to have been not simply a legal relationship to the state but also a form of ethnic-national identity. The title of "Athenian" marked free residents out as citizens and granted them special privileges and protections over other residents of the city, who were considered non-citizens. Within the timeline of Athenian law, Solon's laws drew a clear boundary between the protections enjoyed by citizens (Athenians), who were considered free, and non-citizens (non-Athenians), who could legally be subjected to slavery.
Also excluded from voting were citizens whose rights were under suspension (typically for failure to pay a debt to the city: see atimia); for some Athenians, this amounted to permanent (and in fact inheritable) disqualification. Given the exclusive and ancestral concept of citizenship held by Greek city-states, a relatively large portion of the population took part in the government of Athens and of other radical democracies like it, compared to oligarchies and aristocracies.
Some Athenian citizens were far more active than others, but the vast numbers required for the system to work testify to a breadth of direct participation among those eligible that greatly surpassed any present-day democracy. Athenian citizens had to be descended from citizens; after the reforms of Pericles and Cimon in 450 BC, only those descended from two Athenian parents could claim citizenship. Although the legislation was not retrospective, five years later, when a free gift of grain had arrived from the Egyptian king to be distributed among all citizens, many "illegitimate" citizens were removed from the registers.
Citizenship applied to both individuals and their descendants. It could also be granted by the assembly and was sometimes given to large groups (e.g. Plateans in 427 BC and Samians in 405 BC). However, by the 4th century, citizenship was given only to individuals and by a special vote with a quorum of 6000. This was generally done as a reward for some service to the state. In the course of a century, the number of citizenships so granted was in the hundreds rather than thousands.
Women in Athens
With participation in Athenian democracy available only to adult male Athenian citizens, women were left out of government and public roles. Even in the context of citizenship, the term was rarely used in reference to women. Rather, women were often referred to as an astē, meaning 'a woman belonging to the city', or Attikē gunē, meaning 'an Attic woman/wife'. Even the term "Athenian" was largely reserved for male citizens. Before Pericles' law decreed that citizenship was restricted to children of both an Athenian man and an Athenian woman, the polis did not register women as citizens or keep any form of registration for them, which resulted in many court cases in which witnesses had to prove that particular women were the wives of Athenian men.
In addition to being barred from any form of formal participation in government, women were also largely left out of public discussions and speeches, with orators going as far as leaving out the names of citizens' wives and daughters or finding roundabout ways of referring to them. Pushed out of the public sphere, women's role was confined to the private sphere of working in the home, and they were cast as second-rate humans, subservient to a male guardian, whether father or husband.
In the minds of Athenian men, part of the reasoning for excluding women from politics came from widely held views that women were more driven by sexual desire and were intellectually inferior. Athenian men believed that women had a higher sex drive and that, if given free rein in society, they would be more promiscuous. With this in mind, they feared that women might engage in affairs and bear sons out of wedlock, which would jeopardize the Athenian system of property and inheritance between heirs, as well as the citizenship of any children whose parentage was called into question. In terms of intelligence, Athenian men believed that women were less intelligent than men and therefore, like the barbarians and slaves of the time, incapable of effectively participating in and contributing to public discourse on political issues and affairs. These rationales, together with the fact that women were barred from fighting in battle, another requirement of citizenship, meant that in the eyes of Athenian men women were by nature not meant to hold citizenship.
Despite being barred from the right to vote and citizenship overall, women were granted the right to practice religion.
Main bodies of government
Throughout its history, Athens had many different constitutions under its different leaders. Some of the history of Athens' reforms, as well as a collection of constitutions from other Ancient Greek city-states, was compiled and synthesized into a large, all-encompassing work attributed to Aristotle or one of his students, called the Constitution of the Athenians. The Constitution of the Athenians provides a rundown of the structure of Athens' government and its processes.
There were three political bodies where citizens gathered in numbers running into the hundreds or thousands. These are the assembly (in some cases with a quorum of 6000), the council of 500 (boule), and the courts (a minimum of 200 people, on some occasions up to 6,000). Of these three bodies, the assembly and the courts were the true sites of power – although courts, unlike the assembly, were never simply called the demos ('the people'), as they were manned by just those citizens over thirty. Crucially, citizens voting in both were not subject to review and prosecution, as were council members and all other officeholders.
In the 5th century BC, there is often a record of the assembly sitting as a court of judgment itself for trials of political importance and it is not a coincidence that 6,000 is the number both for the full quorum for the assembly and for the annual pool from which jurors were picked for particular trials. By the mid-4th century, however, the assembly's judicial functions were largely curtailed, though it always kept a role in the initiation of various kinds of political trial.
The central events of the Athenian democracy were the meetings of the assembly (ἐκκλησία, ekklesía). Unlike a parliament, the assembly's members were not elected, but attended by right when they chose. Greek democracy created at Athens was direct, rather than representative: any adult male citizen over the age of 20 could take part, and it was a duty to do so. The officials of the democracy were in part elected by the Assembly and in large part chosen by lottery in a process called sortition.
The assembly had four main functions: it made executive pronouncements (decrees, such as deciding to go to war or granting citizenship to a foreigner), elected some officials, legislated, and tried political crimes. As the system evolved, the last function was shifted to the law courts. The standard format was that of speakers making speeches for and against a position, followed by a general vote (usually by show of hands) of yes or no.
Though there might be blocs of opinion, sometimes enduring, on important matters, there were no political parties and likewise no government or opposition (as in the Westminster system). Voting was by simple majority. In the 5th century at least, there were scarcely any limits on the power exercised by the assembly. If the assembly broke the law, the only thing that might happen is that it would punish those who had made the proposal that it had agreed to. If a mistake had been made, from the assembly's viewpoint it could only be because it had been misled.
As usual in ancient democracies, one had to physically attend a gathering in order to vote. Military service or simple distance prevented the exercise of citizenship. Voting was usually by show of hands (χειροτονία, kheirotonia, 'arm stretching') with officials judging the outcome by sight. This could cause problems when it became too dark to see properly. However, any member could demand that officials issue a recount. For a small category of votes, a quorum of 6,000 was required, principally grants of citizenship, and here small coloured stones were used, white for yes and black for no. At the end of the session, each voter tossed one of these into a large clay jar which was afterwards cracked open for the counting of the ballots. Ostracism required the voters to scratch names onto pieces of broken pottery (ὄστρακα, ostraka), though this did not occur within the assembly as such.
In the 5th century BC, there were 10 fixed assembly meetings per year, one in each of the ten state months, with other meetings called as needed. In the following century, the meetings were set to forty a year, with four in each state month. One of these was now called the main meeting, kyria ekklesia. Additional meetings might still be called, especially as up until 355 BC there were still political trials that were conducted in the assembly, rather than in court. The assembly meetings did not occur at fixed intervals, as they had to avoid clashing with the annual festivals that followed the lunar calendar. There was also a tendency for the four meetings to be aggregated toward the end of each state month.
Attendance at the assembly was not always voluntary. In the 5th century, public slaves forming a cordon with a red-stained rope herded citizens from the agora into the assembly meeting place (Pnyx), with a fine being imposed on those who got the red on their clothes. After the restoration of the democracy in 403 BC, pay for assembly attendance was introduced. This promoted a new enthusiasm for assembly meetings. Only the first 6,000 to arrive were admitted and paid, with the red rope now used to keep latecomers at bay.
In 594 BC, Solon is said to have created a boule of 400 to guide the work of the assembly. After the reforms of Cleisthenes, the Athenian Boule was expanded to 500 and was elected by lot every year. Each of Cleisthenes's 10 tribes provided 50 councilors who were at least 30 years old. The Boule's roles in public affairs included finance, maintaining the military's cavalry and fleet of ships, advising the generals, approving of newly elected magistrates, and receiving ambassadors. Most importantly, the Boule would draft probouleumata, or deliberations for the Ecclesia to discuss and approve on. During emergencies, the Ecclesia would also grant special temporary powers to the Boule.
Cleisthenes restricted the Boule's membership to those of zeugitai status (and above), presumably because these classes' financial interests gave them an incentive towards effective governance. A member had to be approved by his deme, each of which would have an incentive to select those with experience in local politics and the greatest likelihood at effective participation in government.
The members from each of the ten tribes in the Boule took it in turns to act as a standing committee (the prytaneis) of the Boule for a period of thirty-six days. All fifty members of the prytaneis on duty were housed and fed in the tholos of the Prytaneion, a building adjacent to the bouleuterion, where the boule met. A chairman for each tribe was chosen by lot each day, who was required to stay in the tholos for the next 24 hours, presiding over meetings of the Boule and Assembly.
The boule also served as an executive committee for the assembly, and oversaw the activities of certain other magistrates. The boule coordinated the activities of the various boards and magistrates that carried out the administrative functions of Athens and provided from its own membership randomly selected boards of ten responsible for areas ranging from naval affairs to religious observances. Altogether, the boule was responsible for a great portion of the administration of the state, but was granted relatively little latitude for initiative; the boule's control over policy was executed in its probouleutic, rather than its executive function; in the former, it prepared measures for deliberation by the assembly, in the latter, it merely executed the wishes of the assembly.
Athens had an elaborate legal system centered on full citizen rights (see atimia). The age limit of 30 or older, the same as that for office holders but ten years older than that required for participation in the assembly, gave the courts a certain standing in relation to the assembly. Jurors were required to be under oath, which was not required for attendance at the assembly. The authority exercised by the courts had the same basis as that of the assembly: both were regarded as expressing the direct will of the people. Unlike office holders (magistrates), who could be impeached and prosecuted for misconduct, the jurors could not be censured, for they, in effect, were the people and no authority could be higher than that. A corollary of this was that, at least as claimed by defendants, if a court had made an unjust decision, it must have been because it had been misled by a litigant.
Essentially there were two grades of a suit, a smaller kind known as dike (δίκη) or private suit, and a larger kind known as graphe or public suit. For private suits, the minimum jury size was 200 (increased to 401 if a sum of over 1000 drachmas was at issue), for public suits 501. Under Cleisthenes's reforms, juries were selected by lot from a panel of 600 jurors, there being 600 jurors from each of the ten tribes of Athens, making a jury pool of 6000 in total. For particularly important public suits the jury could be increased by adding in extra allotments of 500. 1000 and 1500 are regularly encountered as jury sizes and on at least one occasion, the first time a new kind of case was brought to court (see graphē paranómōn), all 6,000 members of the jury pool may have attended to one case.
The cases were put by the litigants themselves in the form of an exchange of single speeches timed by a water clock or clepsydra, first prosecutor then defendant. In a public suit the litigants each had three hours to speak, much less in private suits (though here it was in proportion to the amount of money at stake). Decisions were made by voting without any time set aside for deliberation. Jurors did talk informally amongst themselves during the voting procedure and juries could be rowdy, shouting out their disapproval or disbelief of things said by the litigants. This may have had some role in building a consensus. The jury could only cast a 'yes' or 'no' vote as to the guilt and sentence of the defendant. For private suits only the victims or their families could prosecute, while for public suits anyone (ho boulomenos, 'whoever wants to' i.e. any citizen with full citizen rights) could bring a case since the issues in these major suits were regarded as affecting the community as a whole.
Justice was rapid: a case could last no longer than one day and had to be completed by the time the sun set. Some convictions triggered an automatic penalty, but where this was not the case the two litigants each proposed a penalty for the convicted defendant and the jury chose between them in a further vote. No appeal was possible. There was however a mechanism for prosecuting the witnesses of a successful prosecutor, which it appears could lead to the undoing of the earlier verdict.
Payment for jurors was introduced around 462 BC and is ascribed to Pericles, a feature described by Aristotle as fundamental to radical democracy (Politics 1294a37). Pay was raised from two to three obols by Cleon early in the Peloponnesian war and there it stayed; the original amount is not known. Notably, this was introduced more than fifty years before payment for attendance at assembly meetings. Running the courts was one of the major expenses of the Athenian state and there were moments of financial crisis in the 4th century when the courts, at least for private suits, had to be suspended.
The system showed a marked anti-professionalism. No judges presided over the courts, nor did anyone give legal direction to the jurors. Magistrates had only an administrative function and were laymen. Most of the annual magistracies in Athens could only be held once in a lifetime. There were no lawyers as such; litigants acted solely in their capacity as citizens. Whatever professionalism there was tended to disguise itself; it was possible to pay for the services of a speechwriter or logographer (logographos), but this may not have been advertised in court. Jurors would likely be more impressed if it seemed as though litigants were speaking for themselves.
Shifting balance between assembly and courts
As the system evolved, the courts (that is, citizens under another guise) intruded upon the power of the assembly. Starting in 355 BC, political trials were no longer held in the assembly, but only in a court. In 416 BC, the graphē paranómōn ('indictment against measures contrary to the laws') was introduced. Under this, anything passed or proposed by the assembly could be put on hold for review before a jury – which might annul it and perhaps punish the proposer as well.
Remarkably, it seems that blocking and then successfully reviewing a measure was enough to validate it without needing the assembly to vote on it. For example, two men have clashed in the assembly about a proposal put by one of them; it passes, and now the two of them go to court with the loser in the assembly prosecuting both the law and its proposer. The quantity of these suits was enormous. The courts became in effect a kind of upper house.
In the 5th century, there were no procedural differences between an executive decree and a law. They were both simply passed by the assembly. However, beginning in 403 BC, they were set sharply apart. Henceforth, laws were made not in the assembly, but by special panels of citizens drawn from the annual jury pool of 6,000. These were known as the nomothetai (νομοθέται, 'the lawmakers').
The institutions sketched above – assembly, officeholders, council, courts – are incomplete without the figure that drove the whole system, Ho boulomenos ('he who wishes', or 'anyone who wishes'). This expression encapsulated the right of citizens to take the initiative to stand to speak in the assembly, to initiate a public lawsuit (that is, one held to affect the political community as a whole), to propose a law before the lawmakers, or to approach the council with suggestions. Unlike officeholders, the citizen initiator was not voted on before taking up office or automatically reviewed after stepping down; these institutions had, after all, no set tenure and might be an action lasting only a moment. However, any stepping forward into the democratic limelight was risky. If another citizen initiator chose, a public figure could be called to account for their actions and punished. In situations involving a public figure, the initiator was referred to as a kategoros ('accuser'), a term also used in cases involving homicide, rather than ho diokon ('the one who pursues').
Pericles, according to Thucydides, characterized the Athenians as being very well-informed on politics:
We do not say that a man who takes no interest in politics is a man who minds his own business; we say that he has no business here at all.
The word idiot originally simply meant "private citizen"; in combination with its more recent meaning of "foolish person", this is sometimes used by modern commentators to demonstrate that the ancient Athenians considered those who did not participate in politics as foolish. But the sense history of the word does not support this interpretation.
Although voters under Athenian democracy were allowed the same opportunity to voice their opinion and to sway the discussion, they were not always successful, and often the minority ended up bound by a motion that they did not agree with.
Archons and the Areopagus
Just before the reforms of Solon in the 7th century BC, Athens was governed by a few archons (three, then later nine) and the council of the Areopagus, which was composed of members of powerful noble families. While there seems to have also been a type of citizen assembly (presumably of the hoplite class), the archons and the body of the Areopagus ran the state, and the mass of people had no say in government at all before these reforms.
Solon's reforms allowed the archons to come from some of the higher propertied classes and not only from the aristocratic families. Since the Areopagus was made up of ex-archons, this would eventually mean the weakening of the hold of the nobles there as well. However, even with Solon's creation of the citizen's assembly, the Archons and Areopagus still wielded a great deal of power.
The reforms of Cleisthenes meant that the archons were elected by the Assembly, but were still selected from the upper classes. The Areopagus kept its power as 'Guardian of the Laws', which meant that it could veto actions it deemed unconstitutional; however, it is not clear how this worked in practice.
Ephialtes, and later Pericles, stripped the Areopagus of its role in supervising and controlling the other institutions, dramatically reducing its power. In the play The Eumenides, performed in 458, Aeschylus, himself a noble, portrays the Areopagus as a court established by Athena herself, an apparent attempt to preserve the dignity of the Areopagus in the face of its disempowerment.
Approximately 1100 citizens (including the members of the council of 500) held office each year. They were mostly chosen by lot, with a much smaller (and more prestigious) group of about 100 elected. Neither was compulsory; individuals had to nominate themselves for both selection methods. In particular, those chosen by lot were citizens acting without particular expertise. This was almost inevitable since, with the notable exception of the generals (strategoi), each office had restrictive term limits. For example, a citizen could only be a member of the Boule in two non-consecutive years in their life. In addition, there were some limitations on who could hold office. Age restrictions were in place with thirty years as a minimum, rendering about a third of the adult citizen body ineligible at any one time. An unknown proportion of citizens were also subject to disenfranchisement (atimia), excluding some of them permanently and others temporarily (depending on the type). Furthermore, all citizens selected were reviewed before taking up office (dokimasia) at which time they might be disqualified.
While citizens voting in the assembly were free of review or punishment, those same citizens when holding an office served the people and could be punished very severely. In addition to being subject to review prior to holding office, officeholders were also subject to an examination after leaving office (euthunai, 'straightenings' or 'submission of accounts') to review their performance. Both of these processes were in most cases brief and formulaic, but they opened up the possibility of a contest before a jury court if some citizen wanted to take a matter up. In the case of scrutiny going to trial, there was the risk for the former officeholder of suffering severe penalties. Even during his period of office, any officeholder could be impeached and removed from office by the assembly. In each of the ten "main meetings" (kuriai ekklesiai) a year, the question was explicitly raised in the assembly agenda: were the office holders carrying out their duties correctly?
Citizens active as officeholders served in a quite different capacity from when they voted in the assembly or served as jurors. By and large, the power exercised by these officials was routine administration and quite limited. These officeholders were the agents of the people, not their representatives, so their role was that of administration, rather than governing. The powers of officials were precisely defined and their capacity for initiative limited. When it came to penal sanctions, no officeholder could impose a fine over fifty drachmas. Anything higher had to go before a court. Competence does not seem to have been the main issue, but rather, at least in the 4th century BC, whether they were loyal democrats or had oligarchic tendencies. Part of the ethos of democracy, rather, was the building of general competence by ongoing involvement. In the 5th century setup, the ten annually elected generals were often very prominent, but for those who had power, it lay primarily in their frequent speeches and in the respect accorded them in the assembly, rather than their vested powers.
Selection by lot
The allotment of an individual was based on citizenship, rather than merit or any form of personal popularity which could be bought. Allotment, therefore, was seen as a means to prevent the corrupt purchase of votes and it gave citizens political equality, as all had an equal chance of obtaining government office. This also acted as a check against demagoguery, though this check was imperfect and did not prevent elections from involving pandering to voters.
The random assignment of responsibility to individuals who may or may not be competent has obvious risks, but the system included features meant to mitigate possible problems. Athenians selected for office served as teams (boards, panels). In a group, one person is more likely to know the right way to do things and those that do not may learn from those that do. During the period of holding a particular office, everyone on the team would be observing everybody else as a sort of check. However, there were officials, such as the nine archons, who while seemingly a board carried out very different functions from each other.
No office appointed by lot could be held twice by the same individual. The only exception was the boule or council of 500. In this case, simply by demographic necessity, an individual could serve twice in a lifetime. This principle extended down to the secretaries and undersecretaries who served as assistants to magistrates such as the archons. To the Athenians, it seems what had to be guarded against was not incompetence but any tendency to use the office as a way of accumulating ongoing power.
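As a purely illustrative model (not a reconstruction of the actual Athenian allotment procedure or its machinery), the once-in-a-lifetime rule described above could be sketched like this:

```python
import random

def allot_board(citizens, board_size, already_served):
    """Toy sortition: draw a board at random, excluding past holders of the office."""
    pool = [c for c in citizens if c not in already_served]
    if len(pool) < board_size:
        raise ValueError("not enough eligible citizens to fill the board")
    board = random.sample(pool, board_size)
    already_served.update(board)      # enforce the no-repeat rule for this office
    return board

if __name__ == "__main__":
    citizens = [f"citizen_{i}" for i in range(1000)]
    served = set()
    first_year = allot_board(citizens, 10, served)
    second_year = allot_board(citizens, 10, served)
    assert not set(first_year) & set(second_year)  # no one holds the office twice
```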
The representativeness of the Athenian offices (councils, magistrates and juries) selected by lot was mathematically examined by Andranik Tangian, who confirmed the validity of this method of appointment, as well as the ineffectiveness of democracy during times of political instability.
During an Athenian election, approximately one hundred officials out of a thousand were elected rather than chosen by lot. There were two main categories in this group: those required to handle large sums of money, and the 10 generals, the strategoi. One reason that financial officials were elected was that any money embezzled could be recovered from their estates; election in general strongly favoured the rich, but in this case, wealth was virtually a prerequisite.
Generals were elected not only because their role required expert knowledge, but also because they needed to be people with experience and contacts in the wider Greek world where wars were fought. In the 5th century BC, principally as seen through the figure of Pericles, the generals could be among the most powerful people in the polis. Yet in the case of Pericles, it is wrong to see his power as coming from his long series of annual generalships (each year along with nine others). His officeholding was rather an expression and a result of the influence he wielded. That influence was based on his relation with the assembly, a relation that in the first instance lay simply in the right of any citizen to stand and speak before the people. Under the 4th century version of democracy, the roles of general and of key political speaker in the assembly tended to be filled by different persons. In part, this was a consequence of the increasingly specialized forms of warfare practiced in the later period.
Elected officials, too, were subject to review before holding office and scrutiny after office. And they could also be removed from office at any time that the assembly met. There was even a death penalty for "inadequate performance" while in office.
Athenian democracy has had many critics, both ancient and modern. Ancient Greek critics of Athenian democracy include Thucydides the general and historian, Aristophanes the playwright, Plato the pupil of Socrates, Aristotle the pupil of Plato, and a writer known as the Old Oligarch. While modern critics are more likely to find fault with the restrictive qualifications for political involvement, these ancients viewed democracy as being too inclusive. For them, the common people were not necessarily the right people to rule and were likely to make huge mistakes. According to Samons:
The modern desire to look to Athens for lessons or encouragement for modern thought, government, or society must confront this strange paradox: the people that gave rise to and practiced ancient democracy left us almost nothing but criticism of this form of regime (on a philosophical or theoretical level). And what is more, the actual history of Athens in the period of its democratic government is marked by numerous failures, mistakes, and misdeeds—most infamously, the execution of Socrates—that would seem to discredit the ubiquitous modern idea that democracy leads to good government.
Thucydides, from his aristocratic and historical viewpoint, reasoned that a serious flaw in democratic government was that the common people were often much too credulous about even contemporary facts to rule justly, in contrast to his own critical-historical approach to history. For example, he points to errors regarding Sparta; Athenians erroneously believed that Sparta's kings each had two votes in their ruling council and that there existed a Spartan battalion called Pitanate lochos. To Thucydides, this carelessness was due to common peoples' "preference for ready-made accounts".
Similarly, Plato and Aristotle criticized democratic rule as the numerically preponderant poor tyrannizing the rich. Instead of seeing it as a fair system under which everyone has equal rights, they regarded it as manifestly unjust. In Aristotle's works, this is categorized as the difference between 'arithmetic' and 'geometric' (i.e. proportional) equality.
To its ancient detractors, rule by the demos was also reckless and arbitrary. Two examples demonstrate this:
- In 406 BC, after years of defeats in the wake of the annihilation of their vast invasion force in Sicily, the Athenians at last won a naval victory at Arginusae over the Spartans. After the battle, a storm arose and the generals in command failed to collect survivors. The Athenians tried and sentenced six of the eight generals to death. Technically, it was illegal, as the generals were tried and sentenced together, rather than one by one as Athenian law required. Socrates happened to be the citizen presiding over the assembly that day and refused to cooperate (though to little effect) and stood against the idea that it was outrageous for the people to be unable to do whatever they wanted. In addition to this unlawful injustice, the demos later on regretted the decision and decided that they had been misled. Those charged with misleading the demos were put on trial, including the author of the motion to try the generals together.
- In 399 BC, Socrates himself was put on trial and executed for "corrupting the young and believing in strange gods". His death gave Europe one of the first intellectual martyrs still recorded, but guaranteed democracy an eternity of bad press at the hands of his disciple and enemy to democracy, Plato. From Socrates' arguments at his trial, Loren Samons writes, "It follows, of course, that any majority—including the majority of jurors—is unlikely to choose rightly." However, "some might argue, Athens is the only state that can claim to have produced a Socrates. Surely, some might continue, we may simply write off events such as Socrates' execution as examples of the Athenians' failure to realize fully the meaning and potential of their own democracy."
While Plato blamed democracy for killing Socrates, his criticisms of the rule of the demos were much more extensive. Much of his writing was about his alternatives to democracy. His works The Republic, The Statesman, and Laws contained many arguments against democratic rule and in favour of a much narrower form of government: "The organization of the city must be confided to those who possess knowledge, who alone can enable their fellow-citizens to attain virtue, and therefore excellence, by means of education."
Whether the democratic failures should be seen as systemic, or as a product of the extreme conditions of the Peloponnesian war, there does seem to have been a move toward correction. A new version of democracy was established in 403 BC, but it can be linked with both earlier and subsequent reforms (graphē paranómōn 416 BC; end of assembly trials 355 BC). For instance, the system of nomothesia was introduced. In this:
A new law might be proposed by any citizen. Any proposal to modify an existing law had to be accompanied by a proposed replacement law. The citizen making the proposal had to publish it [in] advance: publication consisted of writing the proposal on a whitened board located next to the statues of the Eponymous Heroes in the agora. The proposal would be considered by the Council, and would be placed on the agenda of the Assembly in the form of a motion. If the Assembly voted in favor of the proposed change, the proposal would be referred for further consideration by a group of citizens called nomothetai (literally "establishers of the law").
Increasingly, responsibility was shifted from the assembly to the courts, with laws being made by jurors and all assembly decisions becoming reviewable by courts. That is to say, the mass meeting of all citizens lost some ground to gatherings of a thousand or so which were under oath, and with more time to focus on just one matter (though never more than a day). One downside to this change was that the new democracy was less capable of responding quickly in times where quick, decisive action was needed.
Another tack of criticism is to notice the disquieting links between democracy and a number of less than appealing features of Athenian life. Although democracy predated Athenian imperialism by over thirty years, they are sometimes associated with each other. For much of the 5th century at least, democracy fed off an empire of subject states. Thucydides the son of Melesias (not the historian), an aristocrat, stood in opposition to these policies, for which he was ostracised in 443 BC.
At times the imperialist democracy acted with extreme brutality, as in the decision to execute the entire male population of Melos and sell off its women and children simply for refusing to become subjects of Athens. The common people were numerically dominant in the navy, which they used to pursue their own interests in the form of work as rowers and in the hundreds of overseas administrative positions. Furthermore, they used the income from empire to fund payment for officeholding. This is the position set out by the anti-democratic pamphlet known as the Constitution of the Athenians, whose anonymous author is often called the Old Oligarch. This writer (also called pseudo-Xenophon) produced several comments critical of democracy, such as:
- Democratic rule acts in the benefit of smaller self-interested factions, rather than the entire polis.
- Collectivizing political responsibility lends itself to both dishonest practices and scapegoating individuals when measures become unpopular.
- By being inclusive, opponents to the system become naturally included within the democratic framework, meaning democracy itself will generate few opponents, despite its flaws.
- A democratic Athens with an imperial policy will spread the desire for democracy outside of the polis.
- The democratic government depends on the control of resources, which requires military power and material exploitation.
- The values of freedom and equality include non-citizens more than they should.
- By blurring the distinction between the natural and political world, democracy leads the powerful to act immorally and outside their own best interest.
Aristotle also wrote about what he considered to be a better form of government than democracy. Rather than any citizen partaking with an equal share in the rule, he thought that those who were more virtuous should have greater power in governance.
A case can be made that discriminatory lines came to be drawn more sharply under Athenian democracy than before or elsewhere, in particular in relation to women and slaves, as well as in the line between citizens and non-citizens. By so strongly validating one role, that of the male citizen, it has been argued that democracy compromised the status of those who did not share it.
- Originally, a male would be a citizen if his father was a citizen. Under Pericles, in 450 BC, restrictions were tightened so that a citizen had to be born to an Athenian father and an Athenian mother. Thus metroxenoi, those with foreign mothers, were now excluded. These mixed marriages were also heavily penalized by the time of Demosthenes. Many Athenians prominent earlier in the century would have lost citizenship had this law applied to them: Cleisthenes, the founder of democracy, had a non-Athenian mother, and the mothers of Cimon and Themistocles were not Greek at all, but Thracian.
- Likewise the status of women seems lower in Athens than in many Greek cities. In Sparta, women competed in public exercise – so in Aristophanes's Lysistrata the Athenian women admire the tanned, muscular bodies of their Spartan counterparts – and women could own property in their own right, as they could not at Athens. Misogyny was by no means an Athenian invention, but it has been claimed that Athens had worse misogyny than other states at the time.
- Slavery was more widespread at Athens than in other Greek cities. Indeed, the extensive use of imported non-Greeks ("barbarians") as chattel slaves seems to have been an Athenian development. This triggers the paradoxical question: Was democracy "based on" slavery? It does seem clear that possession of slaves allowed even poorer Athenians — owning a few slaves was by no means equated with wealth — to devote more of their time to political life. But whether democracy depended on this extra time is impossible to say. The breadth of slave ownership also meant that the leisure of the rich (the small minority who were actually free of the need to work) rested less than it would have on the exploitation of their less well-off fellow citizens. Working for wages was clearly regarded as subjection to the will of another, but at least debt servitude had been abolished at Athens (under the reforms of Solon at the start of the 6th century BC). Allowing a new kind of equality among citizens opened the way to democracy, which in turn called for a new means, chattel slavery, to at least partially equalise the availability of leisure between rich and poor. In the absence of reliable statistics, all these connections remain speculative. However, as Cornelius Castoriadis pointed out, other societies also kept slaves but did not develop democracy. Even with respect to slavery, it is speculated that Athenian fathers had originally been able to register offspring conceived with slave women for citizenship.
Since the 19th century, the Athenian version of democracy has been seen by one group as a goal yet to be achieved by modern societies. They want representative democracy to be added to or even replaced by direct democracy in the Athenian way, perhaps by utilizing electronic democracy. Another group, on the other hand, considers that, since many Athenians were not allowed to participate in its government, Athenian democracy was not a democracy at all. "[C]omparisons with Athens will continue to be made as long as societies keep striving to realize democracy under modern conditions and their successes and failures are discussed."
Greek philosopher and activist Takis Fotopoulos has argued that “the final failure of Athenian democracy was not due, as it is usually asserted by its critics, to the innate contradictions of democracy itself but, on the contrary, to the fact that the Athenian democracy never matured to become an inclusive democracy. This cannot be adequately explained by simply referring to the immature ‘objective’ conditions, the low development of productive forces and so on—important as may be—because the same objective conditions prevailed at that time in many other places all over the Mediterranean, let alone the rest of Greece, but democracy flourished only in Athens”.
Since the middle of the 20th century, most countries have claimed to be democratic, regardless of the actual composition of their governments. Yet after the demise of Athenian democracy few looked upon it as a good form of government. No legitimation of that rule was formulated to counter the negative accounts of Plato and Aristotle, who saw it as the rule of the poor, who plundered the rich. Democracy came to be viewed as a "collective tyranny". "Well into the 18th century democracy was consistently condemned." Sometimes, mixed constitutions evolved with democratic elements, but "it definitely did not mean self-rule by citizens".
It would be misleading to say that the tradition of Athenian democracy was an important part of the 18th-century revolutionaries' intellectual background. The classical example that inspired the American and French revolutionaries, as well as English radicals, was Rome rather than Greece, and, in the age of Cicero and Caesar, Rome was a republic but not a democracy. Thus, the Founding Fathers of the United States who met in Philadelphia in 1787 did not set up a Council of the Areopagos, but a Senate, that, eventually, met on the Capitol. Following Rousseau (1712–1778), "democracy came to be associated with popular sovereignty instead of popular participation in the exercise of power".
Several German philosophers and poets took delight in what they saw as the fullness of life in ancient Athens, and not long afterwards "English liberals put forward a new argument in favor of the Athenians". In opposition, thinkers such as Samuel Johnson were worried about the ignorance of democratic decision-making bodies, but "Macaulay and John Stuart Mill and George Grote saw the great strength of the Athenian democracy in the high level of cultivation that citizens enjoyed, and called for improvements in the educational system of Britain that would make possible a shared civic consciousness parallel to that achieved by the ancient Athenians".
George Grote claimed in his History of Greece (1846–1856) that "Athenian democracy was neither the tyranny of the poor, nor the rule of the mob". He argued that only by giving every citizen the vote would people ensure that the state would be run in the general interest.
Later, and until the end of World War II, democracy became dissociated from its ancient frame of reference. After that, it was not just one of the many possible ways in which political rule could be organised. Instead, it became the only possible political system in an egalitarian society.
References and sources
- Robinson, Eric W. (1997). The First Democracies: Early Popular Government Outside Athens. Historia - Einzelschriften. Stuttgart, Germany: Franz Steiner Verlag. ISBN 978-3515069519.
- Robinson, Eric W. (2011). Democracy beyond Athens: Popular Government in the Greek Classical Age. Cambridge, England: Cambridge University Press. ISBN 978-0521843317.
- Thorley, John (2005). Athenian Democracy. Lancaster Pamphlets in Ancient History. Routledge. p. 74. ISBN 978-1-13-479335-8.
- "Ancient Greek civilization - The reforms of Cleisthenes". Encyclopedia Britannica. Retrieved 10 March 2021.
- Raaflaub, Kurt A. (2007). "The Breakthrough of Demokratia in Mid-Fifth-Century Athens". In Raaflaub, Kurt A.; Ober, Josiah; Wallace, Robert (eds.). Origins of Democracy in Ancient Greece. Berkeley: University of California Press. p. 112.
- Xenophon, Anabasis 4.4.15.
- Clarke, PB. and Foweraker, Encyclopedia of Democratic Thought. Routledge, 2003, p. 196.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.10.
- Farrar, C., The Origins of Democratic Thinking: The Invention of Politics in Classical Athens, CUP Archive, 25 Aug 1989, p.7.
- "Draconian laws | Definition & Facts". Encyclopedia Britannica. Retrieved 5 May 2021.
- Thorley, John (2004). Athenian democracy (2nd ed.). London, New York: Routledge. ISBN 0-203-62256-1. OCLC 174145266.
- Encyclopædia Britannica, Areopagus.
- "Solon's laws | Greek history". Encyclopedia Britannica. Retrieved 5 May 2021.
- Farrar, C., The Origins of Democratic Thinking: The Invention of Politics in Classical Athens, CUP Archive, 25 Aug 1989, p.21.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.25.
- Thorley, J., Athenian Democracy, Routledge, 2005, pp. 55–56
- Blackwell, Christopher. "The Development of Athenian Democracy". Dēmos: Classical Athenian Democracy. Stoa. Retrieved 4 May 2016.
- "The Final End of Athenian Democracy".
- Habicht, C., Athens from Alexander to Antony, Harvard University Press, 1997, p. 42.
- Green, P., Alexander to Actium: The Historical Evolution of the Hellenistic Age, University of California Press, 1993, p.29.
- A Companion to Greek Studies, CUP Archive, p. 447.
- Cartledge, P, Garnsey, P. and Gruen, ES., Hellenistic Constructs: Essays in Culture, History, and Historiography, University of California Press, 1997, Ch. 5.
- Habicht, passim
- Rothchild, JA., Introduction to Athenian Democracy of the Fifth and Fourth Centuries BCE.
- Dixon, MD., Late Classical and Early Hellenistic Corinth: 338–196 BC, Routledge, 2014, p. 44.
- Kamen, D., Status in Classical Athens, Princeton University Press, 2013 p. 9.
- agathe.gr: The Unenfranchised II – Slaves and Resident Aliens
- agathe.gr: The Unenfranchised I – Women
- Lape, Susan (2010). Race and citizen identity in the classical Athenian democracy. Cambridge: Cambridge University Press. ISBN 978-0-511-67676-5. OCLC 798549142.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.59.
- Cohen D. and Gagarin, M., The Cambridge Companion to Ancient Greek Law Cambridge University Press, 2005, p. 278.
- Sinclair, RK., Democracy and Participation in Athens, Cambridge University Press, 30 Aug 1991, pp. 25–26.
- Pritchard, David (2014). "The Position of Attic Women in Democratic Athens" (PDF). Greece and Rome. 61.2.
- "The Internet Classics Archive | The Athenian Constitution by Aristotle". classics.mit.edu. Retrieved 5 May 2021.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.32.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.57.
- Thorley, J., Athenian Democracy, Routledge, 2005, p 33–34.
- Manville, PB., The Origins of Citizenship in Ancient Athens, Princeton University Press, 2014 p. 182.
- Aristophanes Acharnians 17–22.
- Aristoph. Ekklesiazousai 378-9
- Terry Buckley, Aspects of Greek History: A Source-Based Approach, Routledge, 2006, p. 98.
- "Boule: ancient Greek council". Encyclopædia Britannica.
- Thorley, J., Athenian Democracy, Routledge, 2005, pp. 31–32
- Thorley, J., Athenian Democracy, Routledge, 2005, pp. 30–31.
- Hignett, History of the Athenian Constitution, 238
- Hignett, History of the Athenian Constitution, 241
- Dover, KJ., Greek Popular Morality in the Time of Plato and Aristotle, Hackett Publishing, 1994, p.23.
- Thorley, J., Athenian Democracy, Routledge, 2005, pp. 36–38.
- MacDowell, DM., The Law in Classical Athens, Cornell University Press, 1978, p.36.
- Bertoch, MJ., The Greeks had a jury for it, ABA Journal, October, 1971, Vol. 57, p.1013.
- Arnason, JP., Raaflaub, KA. and Wagner, P., The Greek Polis and the Invention of Democracy: A Politico-cultural Transformation and Its Interpretations, John Wiley & Sons, 2013, p. 167.
- Rhodes, PJ., A History of the Classical Greek World: 478 – 323 BC, John Wiley & Sons, 2011, p. 235.
- MacDowell, DM., The Law in Classical Athens, Cornell University Press, 1978, p.250.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.60.
- Cohen D. and Gagarin, M., The Cambridge Companion to Ancient Greek Law Cambridge University Press, 2005, p. 130.
- "Funeral Oration", Thucydides II.40, trans. Rex Warner (1954).
- Goldhill, S., 2004, The Good Citizen, in Love, Sex & Tragedy: Why Classics Matters. John Murray, London, 179-94.
- Anthamatten, Eric (12 June 2017). "Trump and the True Meaning of 'Idiot'". The New York Times. ISSN 0362-4331. Retrieved 26 June 2017.
- Parker, Walter C. (January 2005). "Teaching Against Idiocy". Phi Delta Kappan. 86 (5): 344. doi:10.1177/003172170508600504. S2CID 144893136. ERIC EJ709337.
- Sparkes, A.W. (1988). "Idiots, Ancient and Modern". Australian Journal of Political Science. 23 (1): 101–102. doi:10.1080/00323268808402051.
- see Idiot#Etymology
- Benn, Stanley (2006). "Democracy". In Borchert, Donald M. (ed.). Encyclopedia of Philosophy. 2 (2nd ed.). Detroit: Macmillan Reference USA. pp. 699–703 – via Gale Virtual Reference Library.
- Thorley, J., Athenian Democracy, Routledge, 2005, pp. 8–9.
- Sinclair, RK., Democracy and Participation in Athens, Cambridge University Press, 30 Aug 1991, pp. 1–2.
- Encyclopædia Britannica: archon
- Thorley, J., Athenian Democracy, Routledge, 2005, p. 55.
- Thorley, J., Athenian Democracy, Routledge, 2005, p.29.
- Thorley, J., Athenian Democracy, Routledge, 2005, pp. 42–43.
- Samons, L., What's Wrong with Democracy?: From Athenian Practice to American Worship, University of California Press, 2004, pp. 44–45.
- Raaflaub, Kurt A., Ober, Josiah and Wallace Robert W., Origins of Democracy in Ancient Greece, University of California Press, 2007 p. 182.
- Tangian, Andranik (2008). "A mathematical model of Athenian democracy". Social Choice and Welfare. 31 (4): 537–572. doi:10.1007/s00355-008-0295-y. S2CID 7112590.
- Tangian, Andranik (2020). "Chapter 1 Athenian democracy" and "Chapter 6 Direct democracy". Analytical theory of democracy. Vols. 1 and 2. Studies in Choice and Welfare. Cham, Switzerland: Springer. pp. 3–43, 263–315. doi:10.1007/978-3-030-39691-6. ISBN 978-3-030-39690-9.
- Cartledge, Paul (July 2006). "Ostracism: selection and de-selection in ancient Greece". History & Policy. United Kingdom: History & Policy. Archived from the original on 16 April 2010. Retrieved 9 December 2010.
- Samons, L., What's Wrong with Democracy?: From Athenian Practice to American Worship, University of California Press, 2004, p. 6.
- Ober, J., Political Dissent in Democratic Athens: Intellectual Critics of Popular Rule, Princeton University Press, 2001, pp. 54 & 78–79.
- Kagan, D. (2013). The Fall of the Athenian Empire. Cornell University Press. p. 108. ISBN 9780801467264.
- Hobden, F. and Tuplin, C., Xenophon: Ethical Principles and Historical Enquiry, BRILL, 2012, pp. 196–199.
- Samons, L., What's Wrong with Democracy?: From Athenian Practice to American Worship, University of California Press, 2004, p. 12 & 195.
- Beck, H., Companion to Ancient Greek Government, John Wiley & Sons, 2013, p. 103.
- Adamidis, Vasileios (2019). "Manifestations of Populism in late 5th Century Athens". New Studies in History and Law: 11–28.
- Ober, J., Political Dissent in Democratic Athens: Intellectual Critics of Popular Rule, Princeton University Press, 2001, p. 43.
- Beck, H., Companion to Ancient Greek Government, John Wiley & Sons, 2013, p.107.
- Hansen, MH., The Athenian Democracy in the Age of Demosthenes: Structure, Principles, and Ideology, University of Oklahoma Press, 1991, p.53.
- Just, R., Women in Athenian Law and Life, Routledge, 2008, p. 15.
- Rodriguez, JP., The Historical Encyclopedia of World Slavery, Volume 7, ABC-CLIO, 1997, pp. 312–314.
- Grafton, A., Most, GA. and Settis, S., The Classical Tradition, Harvard University Press, 2010, p.259.
- Fotopoulos Takis, Towards An Inclusive Democracy, Cassell/Continuum, 1997, p. 194.
- Grafton, A., Most, G.A. and Settis, S., The Classical Tradition, Harvard University Press, 2010, pp. 256–259.
- Hansen, M.H., The Tradition of Ancient Greek Democracy and Its Importance for Modern Democracy, Kgl. Danske Videnskabernes Selskab, 2005, p. 10.
- Roberts, J., in Euben, J.P., et al., Athenian Political Thought and the Reconstruction of American Democracy, Cornell University Press, 1994, p. 96.
- Vlassopoulos, K., Politics Antiquity and Its Legacy, Oxford University Press, 2009.
- Habicht, Christian (1997). Athens from Alexander to Antony. Harvard. ISBN 0-674-05111-4.
- Hansen, M.H. (1987). The Athenian Democracy in the age of Demosthenes. Oxford. ISBN 978-0-8061-3143-6.
- Hignett, Charles (1962). A History of the Athenian Constitution. Oxford. ISBN 0-19-814213-7.
- Manville, B.; Ober, Josiah (2003). A company of citizens : what the world's first democracy teaches leaders about creating great organizations. Boston.
- Meier C. 1998, Athens: a portrait of the city in its Golden Age (translated by R. and R. Kimber). New York
- Ober, Josiah (1989). Mass and Elite in Democratic Athens: Rhetoric, Ideology and the Power of the People. Princeton.
- Ober, Josiah; Hendrick, C. (1996). Demokratia: a conversation on democracies, ancient and modern. Princeton.
- Rhodes, P.J. (2004). Athenian democracy. Edinburgh.
- Sinclair, R. K. (1988). Democracy and Participation in Athens. Cambridge University Press.
- World History Encyclopedia – Athenian Democracy
- Ewbank, N. The Nature of Athenian Democracy, Clio History Journal, 2009.
Marx asserted that the key to understanding human culture and history was the struggle between the classes. He used the term class to refer to a group of people within society who share the same social and economic status (Marx K. and Engels F. 1945). According to Marx, class struggles have occurred in every form of society, no matter what its economic structure, or mode of production: slavery, feudalism, or capitalism. In each of these kinds of societies, a minority of people own or control the means of production, such as land, raw materials, tools and machines, labour, and money.
This minority constitutes the ruling class. The vast majority of people own and control very little. They mainly own their own capacity to work. The ruling class uses its economic power to exploit workers by appropriating their surplus labour. In other words, workers are compelled to labour not merely to meet their own needs but also those of the exploiting ruling class.
As a result, workers become alienated from the fruits of their labour (Marx K. and Engels F. 1945).
Marx perceived a class struggle raging between the bourgeoisie, or capitalists, who controlled the means of production, and the proletariat, or industrial workers. In his view, the bourgeoisie appropriated wealth from the proletariat by paying low wages and keeping the profits from sales and technological innovation for themselves. The central focus of Marx’s economic theory is the labour theory of value. According to Marx, the value of a good is determined by the quantity of labour required to produce it. The labour theory of value is in direct contrast to capitalist assumptions, which hold that productive value is a function of labour plus three additional factors: land (raw materials), capital (such as machinery and tools) and management, all of which play a part in the production of goods. Since capital is nothing more than “stored-up labour” (that is, labour that had been used in inventing and constructing machines, tools and assembly lines), the only value capital contributes is determined by the proportion of labour required to eventually replace it. Marx therefore valued capital less (R.H. Popkin and A. Stroll, 1989).
Since only human labour contributes to the value of a product, the total value of a commodity is equal to the total wage cost involved in its production. Within a capitalist system, however, the cost of a good always exceeds paid wages. The reason for this is that the employer, by virtue of his superior economic position, is able to obtain the full services of workers without paying them fully for the value of their productivity. Wage costs, in other words, are always less than the value of goods produced.
Marx called the difference between the two “surplus value”; it represents the value created by the labourer but appropriated by the employer. Since the ownership of a factory or business firm could not itself contribute to the value of production, any surplus value generated by a business manager represented the illegitimate appropriation of wealth by the bourgeoisie from the proletariat. Surplus value (profit), in other words, is a measure of the exploitation in society. This resulted in class consciousness, which led the people to retaliate against their masters (R.H. Popkin and A. Stroll, 1989).
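To make this definition concrete, here is a minimal numerical illustration; the figures are hypothetical and are not drawn from Marx or from the sources cited above. Suppose a day’s labour produces commodities worth $V = 100$ units of value, while the wages paid for that day amount to $w = 60$ units. The surplus value $s$ and the rate of surplus value are then

$$ s = V - w = 100 - 60 = 40, \qquad \frac{s}{w} = \frac{40}{60} \approx 67\% . $$

On this reading the employer appropriates 40 units of value created by the labourer, and the higher the ratio $s/w$, the greater the degree of exploitation.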
The newly revised minimum wage has been an issue for most investors and companies, and a source of concern that has ignited unrest. Today almost all companies and institutions complain about having to adhere to the demand. This has resulted in civil protests by general workers calling for the revised policy to be implemented; it is thus a clear manifestation of the class theory articulated by Karl Marx. According to Marx, “class consciousness” refers to the workers’ general resentment and feeling of being systematically cheated by the boss, where any aggressive action from complaining to industrial sabotage is viewed as evidence of it. Class consciousness is essentially the interests of a class becoming its recognized goals (G. Lukacs, 1971). These interests, for those who accept Marx’s analysis, are objective; they accrue to a class because of its real situation and can be found there by all who seriously look. Rather than indicating simply what people want, “interest” refers to those generalized means which increase their ability to get what they want, and includes such things as money, power, ease, and structural reform or its absence.
Whether they know it or not, the higher wages, improved working conditions, job security, inexpensive consumer goods, etc., that most workers say they want are only to be had through such mediation. Moreover, the reference is not only to the present, but to what people will come to want under other and better conditions. Hence, the aptness of C. Wright Mills’s description of Marxian interests as “long run, general, and rational interests.” The most long-run and rational interest of the working class lies in overturning the exploitative relations which keep them, individually and collectively, from getting what they want. Becoming class-conscious in this sense is obviously based on the recognition of belonging to a group which has similar grievances and aspirations, and a correct appreciation of the group’s relevant life conditions (G. Lukacs, 1971).
Having realized their grievances and aspirations, most Zambians today have joined together to fight for better salaries and improved working conditions. This has even provoked chaos and anarchy in some places; for instance, the killing of a Chinese national at the Mamba quarries over ill treatment prompted the majority of Zambians to team up and voice their grievances.
For Marx, life itself is the hard school in which the workers learn to be class-conscious, and he clearly believes they possess the qualities requisite to learning this lesson. In so far as people share the same circumstances, work in identical factories, live in similar neighbourhoods, etc., they are inclined to see things, the most important ones at least, in the same way. They cannot know more than what their life presents them with, nor differently from what their life permits.
Indeed, the lifestyle of the people of Zambia is itself a powerful sermon uniting them to fight for a common cause, given the low wages paid by investors. That lifestyle is easy to recognize because of how widely a common living standard is shared. The standard of living is determined by the income received at the end of the day, and for the majority of Zambians it is well below the poverty datum line.
The inevitable outcome would be a revolution in which the proletariat, taking advantage of strikes, elections, and, if necessary, violence, would displace the bourgeoisie as the ruling class. A political revolution was essential, in Marx’s view, because the state is the central instrument of capitalist society. Rather than the proletariat’s conditions serving as a barrier to such rational thinking, Marx believes the reverse is the case. The very extremity of their situation, the very extent of their suffering and deprivation, makes the task of calculating advantages a relatively easy one. As part of this, the one-sided struggle of the working class (according to Engels, “the defeats even more than the victories”) further exposes the true nature of the system. The reality to be understood stands out in harsh relief, rendering errors of judgment increasingly difficult to make. The workers’ much discussed alienation simply does not extend to their ability to calculate advantages; in this matter it is regarded as a passing and essentially superficial phenomenon.
Marx maintained that “the abstraction of all humanity, even the semblance of humanity” is “practically complete in the full blown proletariat.” A loophole is reserved for purposive activity, which is the individual’s ability to grasp the nature of what he wants to transform and to direct his energies accordingly. Marx held that productive activity is always purposive, and that this is one of the main features which distinguish human beings from animals. Class consciousness is the result of such purposive activity with the self as object, of workers using their reasoning powers on themselves and their life conditions. It follows necessarily from what they are, both as calculating human beings and as workers caught up in an inhuman situation. The workers are also prompted in their search for socialist meaning by their needs as individuals.
For Marx, society produces people who have needs for whatever, broadly speaking, fulfils their powers in the state in which these latter have been fashioned by society. These needs are invariably felt as wants, and since that which fulfils an individual’s powers includes by extension the conditions for such fulfilment, he soon comes to want the means of his own transformation; for capitalist conditions alone cannot secure for workers, even extremely alienated workers, what they want. Job security, social equality, and uninterrupted improvement in living conditions, for example, are simply impossibilities within the capitalist framework. Hence, even before they recognize their class interests, workers are driven by their needs in ways which serve to satisfy these interests (Wright Mills C, 1962). And, as planned action-based on a full appreciation of what these interests are-is the most effective means of proceeding, needs provide what is possibly the greatest boost to becoming class-conscious.
Critics and defenders of Marx alike have sought to explain the failure of the working class to assume its historic role by tampering with his account of capitalist conditions. Thus, his critics assert that the lot of the workers has improved, that the middle class has not disappeared, etc., and, at the extreme, that these conditions were never really as bad as Marx claimed (Wolpe H. 1970). Indeed, even in our daily life people are willing to continue working despite meagre salaries in order to keep earning a living, which is quite retrogressive to the well-being of the people in general. This has made the task of government very difficult, though the government is pushing for better salaries for its people to ensure the minimum wage is implemented. If it was not conditions which failed Marx, it could only have been the workers.
More precisely, the great majority of workers were not able to attain class consciousness in conditions that were more or less ideal for them to do so. Marx’s error, an error which has had a far-ranging effect on the history of socialist thought and practice, is that he advances from the workers’ conditions of life to class consciousness in a single bound; the various psychological mediations united in class consciousness are treated as one. The severity of these conditions, the pressures he saw coming from material needs, and his belief that workers never lose their ability to calculate advantages made the eventual result certain and a detailed analysis of the steps involved unnecessary. Class consciousness is a more complex phenomenon-and, hence, more fraught with possibilities for failure-than Marx and most other socialists have believed.
With the extra hundred years of hindsight, one can see that what Marx treated as a relatively direct, if not easy, transition is neither. Progress from the workers’ conditions to class consciousness involves not one but many steps, each of which constitutes a real problem of achievement for some section of the working class (Nicholaus M. 1969). First, workers must recognize that they have interests. Second, they must be able to see their interests as individuals in their interests as members of a class. Third, they must be able to distinguish what Marx considers their main interests as workers from other less important economic interests.
Fourth, they must believe that their class interests come prior to their interests as members of a particular nation, religion, race, etc. Fifth, they must truly hate their capitalist exploiters. Sixth, they must have an idea, however vague, that their situation could be qualitatively improved. Seventh, they must believe that they themselves, through some means or other, can help bring about this improvement. Eighth, they must believe that Marx’s strategy, or that advocated by Marxist leaders, offers the best means for achieving their aims. And, ninth, having arrived at all the foregoing, they must not be afraid to act when the time comes. These steps are not only conceptually distinct, but they constitute the real difficulties which have kept the mass of the proletariat in all capitalist countries and in all periods from becoming class-conscious.
Wright Mills C. 1962, The Marxists, New York, p.115.
Lukacs G. 1971, History and Class Consciousness, trans. Rodney Livingstone; Cambridge, Mass.
Marx K. and Engels F. 1945, The Communist Manifesto, trans. Samuel Moore, Chicago.
Nicholaus M. 1969, “The Unknown Marx,” The New Left Reader; Carl Oglesby, New York.
Popkin R. H. and Stroll A. 1989, Philosophy, Heinemann, Made Simple Books.
Wolpe H. 1970, “Some Problems Concerning Revolutionary Consciousness,” The Socialist Register; London, Miliband R. and Saville J.
Most people feel comfortable with the concept of covalent and ionic bonds, yet they are often unsure what hydrogen bonds actually are, how they form, and why they are important.
What do you mean by Hydrogen Bond?
Hydrogen bonds refer to the electromagnetic attractions between two atoms, or the attraction between the positive and negative poles of charged atoms. These bonds are fragile (easily broken) and weak, but are responsible for many significant properties of things like DNA and water.
Some of its key points include:
Hydrogen bonds can be formed between atoms within a molecule or between two separate molecules.
The hydrogen bond is much weaker than a covalent bond or an ionic bond, but much stronger than van der Waals forces.
A hydrogen bond plays a crucial role in producing several of water's unique properties and also plays a significant role in biochemistry.
A hydrogen bond is a type of attractive (dipole-dipole) interaction between an electronegative atom and a hydrogen atom that is bonded to another electronegative atom.
Hydrogen bonds can take place between separate molecules or between different parts of a single molecule.
A hydrogen bond is stronger than van der Waals forces, but weaker than an ionic or a covalent bond.
It has approximately 1/20 (5%) of the strength of the covalent bond formed between hydrogen and oxygen (O-H). Nonetheless, this weak bond is surprisingly strong enough to withstand small fluctuations in temperature.
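As a rough check on that figure, consider typical bond energies; the specific numbers below are standard textbook values quoted for illustration and are not taken from this article. An O-H covalent bond has an energy on the order of 460 kJ/mol, while a hydrogen bond in water is roughly 20-25 kJ/mol, so

$$ \frac{E_{\text{hydrogen bond}}}{E_{\text{O-H covalent}}} \approx \frac{23}{460} = 0.05 = 5\%, $$

which is where the "about 1/20" comparison comes from.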
Examples of Hydrogen Bonds
You’ll find hydrogen bonds between water molecules and between the “base pairs” in nucleic acids. A similar type of bond can also be formed between the carbon and hydrogen atoms of different chloroform molecules, between the nitrogen and hydrogen atoms of adjacent ammonia molecules, between recurring subunits in polymers like nylon, and between O and H in acetylacetone. Many organic molecules depend on hydrogen bonds. Hydrogen bonds:
- Arrange polypeptides into secondary structures, such as the beta sheet and the alpha helix
- Hold the two strands of DNA together
- Help transcription factors bind to each other
- Assist in attaching transcription factors (TFs) to DNA
- Support antigen-antibody interaction and binding
Hydrogen Bonds: Formation and Properties
Hydrogen bonds are an electromagnetic attraction between polar molecules in which a hydrogen atom is attached to a larger atom like O or N. Within such a molecule, the hydrogen shares electrons with the larger atom in a covalent bond, as discussed earlier.
As you can see in figure 2 of the water molecule, notice how large O is compared to hydrogen!
Hydrogen atoms do not have the pull needed to draw electrons away from larger atoms, so when a hydrogen atom is in a covalent bond with a larger atom, the electrons spend a little more time circulating around the larger atom than around the smaller hydrogen. The hydrogen end is left positively empty!
Electrons carry a negative charge with them, so wherever the electrons move, the negative charge follows.
The consequence of this uneven, unequal sharing is a polar molecule: a molecule with one end that carries a negative charge and another end that carries a positive charge.
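To make the idea of unequal sharing concrete, one can compare electronegativities; the values below are the standard Pauling-scale figures, quoted here for illustration rather than taken from this article. Oxygen has an electronegativity of about 3.44 and hydrogen about 2.20, so for each O-H bond in water the difference is

$$ \Delta\chi = 3.44 - 2.20 = 1.24, $$

large enough that the bond is strongly polar, with the shared electrons pulled toward the oxygen end.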
A polar molecule acts somewhat like a magnet: the positive end of one molecule is attracted to the negative end of another. As a result, clusters of polar molecules line up end to end.
These are polar water molecules, lining up end to end and forming hydrogen bonds. Hydrogen bonds are pretty weak: no electrons are shared, so breaking a hydrogen bond is easy. But these bonds account for important properties of molecules, including the phenomenon known as the surface tension of water.
Reproductive rights are legal rights and freedoms relating to reproduction and reproductive health that vary amongst countries around the world. The World Health Organization defines reproductive rights as follows:
Reproductive rights rest on the recognition of the basic right of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. They also include the right of all to make decisions concerning reproduction free of discrimination, coercion and violence.
Women's reproductive rights may include some or all of the following: the abortion-rights movements; birth control; freedom from coerced sterilization and contraception; the right to access good-quality reproductive healthcare; and the right to education and access in order to make free and informed reproductive choices. Reproductive rights may also include the right to receive education about sexually transmitted infections and other aspects of sexuality, right to menstrual health and protection from practices such as female genital mutilation (FGM).
Reproductive rights began to develop as a subset of human rights at the United Nation's 1968 International Conference on Human Rights. The resulting non-binding Proclamation of Tehran was the first international document to recognize one of these rights when it stated that: "Parents have a basic human right to determine freely and responsibly the number and the spacing of their children." Women’s sexual, gynecological, and mental health issues were not a priority of the United Nations until its Decade of Women (1975–1985) brought them to the forefront. States, though, have been slow in incorporating these rights in internationally legally binding instruments. Thus, while some of these rights have already been recognized in hard law, that is, in legally binding international human rights instruments, others have been mentioned only in non binding recommendations and, therefore, have at best the status of soft law in international law, while a further group is yet to be accepted by the international community and therefore remains at the level of advocacy.
Reproductive rights are a subset of sexual and reproductive health and rights.
Proclamation of Tehran
In 1945, the United Nations Charter included the obligation "to promote... universal respect for, and observance of, human rights and fundamental freedoms for all without discrimination as to race, sex, language, or religion". However, the Charter did not define these rights. Three years later, the UN adopted the Universal Declaration of Human Rights (UDHR), the first international legal document to delineate human rights; the UDHR does not mention reproductive rights. Reproductive rights began to appear as a subset of human rights in the 1968 Proclamation of Tehran, which states: "Parents have a basic human right to determine freely and responsibly the number and the spacing of their children".
This right was affirmed by the UN General Assembly in the 1969 Declaration on Social Progress and Development which states "The family as a basic unit of society and the natural environment for the growth and well-being of all its members, particularly children and youth, should be assisted and protected so that it may fully assume its responsibilities within the community. Parents have the exclusive right to determine freely and responsibly the number and spacing of their children." The 1975 UN International Women's Year Conference echoed the Proclamation of Tehran.
Cairo Programme of Action
The twenty-year "Cairo Programme of Action" was adopted in 1994 at the International Conference on Population and Development (ICPD) in Cairo. The non-binding Programme of Action asserted that governments have a responsibility to meet individuals' reproductive needs, rather than demographic targets. It recommended that family planning services be provided in the context of other reproductive health services, including services for healthy and safe childbirth, care for sexually transmitted infections, and post-abortion care. The ICPD also addressed issues such as violence against women, sex trafficking, and adolescent health. The Cairo Program is the first international policy document to define reproductive health, stating:
Reproductive health is a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity, in all matters relating to the reproductive system and its functions and processes. Reproductive health therefore implies that people are able to have a satisfying and safe sex life and that they have the capability to reproduce and the freedom to decide if, when and how often to do so. Implicit in this last condition are the right of men and women to be informed [about] and to have access to safe, effective, affordable and acceptable methods of family planning of their choice, as well as other methods for regulation of fertility which are not against the law, and the right of access to appropriate health-care services that will enable women to go safely through pregnancy and childbirth and provide couples with the best chance of having a healthy infant [para. 72].
Unlike previous population conferences, a wide range of interests from grassroots to government level were represented in Cairo. 179 nations attended the ICPD and overall eleven thousand representatives from governments, NGOs, international agencies and citizen activists participated. The ICPD did not address the far-reaching implications of the HIV/AIDS epidemic. In 1999, recommendations at the ICPD+5 were expanded to include commitment to AIDS education, research, and prevention of mother-to-child transmission, as well as to the development of vaccines and microbicides.
The Cairo Programme of Action was adopted by 184 UN member states. Nevertheless, many Latin American and Islamic states made formal reservations to the programme, in particular, to its concept of reproductive rights and sexual freedom, to its treatment of abortion, and to its potential incompatibility with Islamic law.
Implementation of the Cairo Programme of Action varies considerably from country to country. In many countries, post-ICPD tensions emerged as the human rights-based approach was implemented. Since the ICPD, many countries have broadened their reproductive health programs and attempted to integrate maternal and child health services with family planning. More attention is paid to adolescent health and the consequences of unsafe abortion. Lara Knudsen observes that the ICPD succeeded in getting feminist language into governments' and population agencies' literature, but in many countries the underlying concepts are not widely put into practice. In two preparatory meetings for the ICPD+10 in Asia and Latin America, the United States, under the George W. Bush Administration, was the only nation opposing the ICPD's Programme of Action.
The 1995 Fourth World Conference on Women in Beijing, in its non-binding Declaration and Platform for Action, supported the Cairo Programme's definition of reproductive health, but established a broader context of reproductive rights:
The human rights of women include their right to have control over and decide freely and responsibly on matters related to their sexuality, including sexual and reproductive health, free of coercion, discrimination and violence. Equal relationships between women and men in matters of sexual relations and reproduction, including full respect for the integrity of the person, require mutual respect, consent and shared responsibility for sexual behavior and its consequences [para. 96].
The Beijing Platform demarcated twelve interrelated critical areas of the human rights of women that require advocacy. The Platform framed women's reproductive rights as "indivisible, universal and inalienable human rights." The platform for the 1995 Fourth World Conference on Women included a section that denounced gender-based violence and included forced sterilization as a human rights violation. However, the international community at large has not confirmed that women have a right to reproductive healthcare and in ensuing years since the 1995 conference, countries have proposed language to weaken reproductive and sexual rights. This conference also referenced for the first time indigenous rights and women’s rights at the same time, combining them into one category needing specific representation. Reproductive rights are highly politicized, making it difficult to enact legislation.
The Yogyakarta Principles on the Application of International Human Rights Law in relation to Sexual Orientation and Gender Identity, proposed by a group of experts in November 2006 but not yet incorporated by States in international law, declares in its Preamble that "the international community has recognized the rights of persons to decide freely and responsibly on matters related to their sexuality, including sexual and reproductive health, free from coercion, discrimination, and violence." In relation to reproductive health, Principle 9 on "The Right to Treatment with Humanity while in Detention" requires that "States shall... [p]rovide adequate access to medical care and counseling appropriate to the needs of those in custody, recognizing any particular needs of persons on the basis of their sexual orientation and gender identity, including with regard to reproductive health, access to HIV/AIDS information and therapy and access to hormonal or other therapy as well as to gender-reassignment treatments where desired." Nonetheless, African, Caribbean and Islamic Countries, as well as the Russian Federation, have objected to the use of these principles as Human Rights standards.
State abuses against reproductive rights have happened both under right-wing and left-wing governments. Such abuses include attempts to forcefully increase the birth rate – one of the most notorious natalist policies of the 20th century was that which occurred in communist Romania in the period of 1967–1990 during communist leader Nicolae Ceaușescu, who adopted a very aggressive natalist policy which included outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people – as well as attempts to decrease the fertility rate – China's one child policy (1978–2015). State mandated forced marriage was also practiced by authoritarian governments as a way to meet population targets: the Khmer Rouge regime in Cambodia systematically forced people into marriages, in order to increase the population and continue the revolution. Some governments have implemented eugenic policies of forced sterilizations of 'undesirable' population groups. Such policies were carried out against ethnic minorities in Europe and North America in the 20th century, and more recently in Latin America against the Indigenous population in the 1990s; in Peru, President Alberto Fujimori (in office from 1990 to 2000) has been accused of genocide and crimes against humanity as a result of a sterilization program put in place by his administration targeting indigenous people (mainly the Quechuas and the Aymaras).
Prohibition of forced sterilization and forced abortion
Article 39 – Forced abortion and forced sterilisation
- Parties shall take the necessary legislative or other measures to ensure that the following intentional conducts are criminalised:
- a performing an abortion on a woman without her prior and informed consent;
- b performing surgery which has the purpose or effect of terminating a woman’s capacity to naturally reproduce without her prior and informed consent or understanding of the procedure
Human rights have been used as a framework to analyze and gauge abuses, especially for coercive or oppressive governmental policies. The framing of reproductive (human) rights and population control programs is split along race and class lines, with white, Western women predominantly focused on abortion access (especially during the second wave feminism of the 1970-1980s), silencing women of color in the Global South or marginalized women in the Global North (black and indigenous women, prisoners, welfare recipients) who were subjected to forced sterilization or contraceptive usage campaigns. The hemisphere divide has also been framed as Global North feminists advocating for women’s bodily autonomy and political rights, while Global South women advocate for basic needs through poverty reduction and equality in the economy.
This divide between first world and third world women became established as feminists focused on women’s issues (from the first world largely promoting sexual liberation) versus women focused on political issues (from the third world often opposing dictatorships and policies). In Latin America, this is complicated as feminists tend to align with first world ideals of feminism (sexual/reproductive rights, violence against women, domestic violence) and reject religious institutions such as the Catholic Church and Evangelicals, which attempt to control women’s reproduction. On the other side, human rights advocates are often aligned with religious institutions that are specifically combating political violence, instead of focusing on issues of individual bodily autonomy.
The debate regarding whether women should have complete autonomous control over their bodies has been espoused by the United Nations and individual countries, but many of those same countries fail to implement these human rights for their female citizens. This shortfall may be partly due to the delay of including women-specific issues in the human rights framework. However, multiple human rights documents and declarations specifically proclaim reproductive rights of women, including the ability to make their own reproductive healthcare decisions regarding family planning, including: the UN Declaration of Human Rights (1948), The Convention on the Elimination of All Forms of Discrimination Against Women (1979), the U.N.’s Millennium Development Goals, and the new Sustainable Development Goals, which are focused on integrating universal reproductive healthcare access into national family planning programs. Unfortunately, the 2007 Declaration on the Rights of Indigenous Peoples, did not address indigenous women’s reproductive or maternal healthcare rights or access.
Since most existing legally binding international human rights instruments do not explicitly mention sexual and reproductive rights, a broad coalition of NGOs, civil servants, and experts working in international organizations have been promoting a reinterpretation of those instruments to link the realization of the already internationally recognized human rights with the realization of reproductive rights. An example of this linkage is provided by the 1994 Cairo Programme of Action:
Reproductive rights embrace certain human rights that are already recognized in national laws, international human rights documents and other relevant United Nations consensus documents. These rights rest on the recognition of the basic right of all couples and individuals to decide freely and responsibly the number, spacing and timing of their children and to have the information and means to do so, and the right to attain the highest standard of sexual and reproductive health. It also includes the right of all to make decisions concerning reproduction free of discrimination, coercion and violence as expressed in human rights documents. In the exercise of this right, they should take into account the needs of their living and future children and their responsibilities towards the community.
Similarly, Amnesty International has argued that the realisation of reproductive rights is linked with the realisation of a series of recognised human rights, including the right to health, the right to freedom from discrimination, the right to privacy, and the right not to be subjected to torture or ill-treatment.
The World Health Organization states that:
Sexual and reproductive health and rights encompass efforts to eliminate preventable maternal and neonatal mortality and morbidity, to ensure quality sexual and reproductive health services, including contraceptive services, and to address sexually transmitted infections (STI) and cervical cancer, violence against women and girls, and sexual and reproductive health needs of adolescents. Universal access to sexual and reproductive health is essential not only to achieve sustainable development but also to ensure that this new framework speaks to the needs and aspirations of people around the world and leads to realisation of their health and human rights.
However, not all states have accepted the inclusion of reproductive rights in the body of internationally recognized human rights. At the Cairo Conference, several states made formal reservations either to the concept of reproductive rights or to its specific content. Ecuador, for instance, stated that:
With regard to the Programme of Action of the Cairo International Conference on Population and Development and in accordance with the provisions of the Constitution and laws of Ecuador and the norms of international law, the delegation of Ecuador reaffirms, inter alia, the following principles embodied in its Constitution: the inviolability of life, the protection of children from the moment of conception, freedom of conscience and religion, the protection of the family as the fundamental unit of society, responsible paternity, the right of parents to bring up their children and the formulation of population and development plans by the Government in accordance with the principles of respect for sovereignty. Accordingly, the delegation of Ecuador enters a reservation with respect to all terms such as "regulation of fertility", "interruption of pregnancy", "reproductive health", "reproductive rights" and "unwanted children", which in one way or another, within the context of the Programme of Action, could involve abortion.
Similar reservations were made by Argentina, Dominican Republic, El Salvador, Honduras, Malta, Nicaragua, Paraguay, Peru and the Holy See. Islamic Countries, such as Brunei, Djibouti, Iran, Jordan, Kuwait, Libya, Syria, United Arab Emirates, and Yemen made broad reservations against any element of the programme that could be interpreted as contrary to the Sharia. Guatemala even questioned whether the conference could legally proclaim new human rights.
The United Nations Population Fund (UNFPA) and the World Health Organization (WHO) advocate for reproductive rights with a primary emphasis on women's rights. In this respect, UNFPA and WHO focus on a range of issues, from access to family planning services, sex education, menopause, and the reduction of obstetric fistula, to the relationship between reproductive health and economic status.
The reproductive rights of women are advanced in the context of the right to freedom from discrimination and the social and economic status of women. The group Development Alternatives with Women for a New Era (DAWN) explained the link in the following statement:
Control over reproduction is a basic need and a basic right for all women. Linked as it is to women's health and social status, as well as the powerful social structures of religion, state control and administrative inertia, and private profit, it is from the perspective of poor women that this right can best be understood and affirmed. Women know that childbearing is a social, not a purely personal, phenomenon; nor do we deny that world population trends are likely to exert considerable pressure on resources and institutions by the end of this century. But our bodies have become a pawn in the struggles among states, religions, male heads of households, and private corporations. Programs that do not take the interests of women into account are unlikely to succeed...
Women's reproductive rights have long retained key issue status in the debate on overpopulation.
"The only ray of hope I can see – and it's not much – is that wherever women are put in control of their lives, both politically and socially; where medical facilities allow them to deal with birth control and where their husbands allow them to make those decisions, birth rate falls. Women don't want to have 12 kids of whom nine will die." David Attenborough
According to OHCHR: "Women’s sexual and reproductive health is related to multiple human rights, including the right to life, the right to be free from torture, the right to health, the right to privacy, the right to education, and the prohibition of discrimination".
Attempts have been made to analyse the socioeconomic conditions that affect the realisation of a woman's reproductive rights. The term reproductive justice has been used to describe these broader social and economic issues. Proponents of reproductive justice argue that while the right to legalized abortion and contraception applies to everyone, these choices are only meaningful to those with resources, and that there is a growing gap between access and affordability.
Men's reproductive rights have been claimed by various organizations, both for issues of reproductive health, and other rights related to sexual reproduction.
In recent years, men's reproductive rights with regard to paternity have become a subject of debate in the U.S. The term "male abortion" was coined by Melanie McCulley, a South Carolina attorney, in a 1998 article. The theory begins with the premise that when a woman becomes pregnant she has the option of abortion, adoption, or parenthood. A man, however, has none of those options, but will still be affected by the woman's decision. It argues, in the context of legally recognized gender equality, that in the earliest stages of pregnancy the putative (alleged) father should have the right to relinquish all future parental rights and financial responsibility, leaving the informed mother with the same three options. This concept has been supported by a former president of the feminist organization National Organization for Women, attorney Karen DeCrow. The feminist argument for male reproductive choice contends that the uneven ability to choose experienced by men and women in regard to parenthood is evidence of state-enforced coercion favoring traditional sex roles.
In 2006, the National Center for Men brought a case in the US, Dubay v. Wells (dubbed by some "Roe v. Wade for men"), that argued that in the event of an unplanned pregnancy, when an unmarried woman informs a man that she is pregnant by him, he should have an opportunity to give up all paternity rights and responsibilities. Supporters argue that this would allow the woman time to make an informed decision and give men the same reproductive rights as women. In its dismissal of the case, the U.S. Court of Appeals (Sixth Circuit) stated that "the Fourteenth Amendment does not deny to [the] State the power to treat different classes of persons in different ways."
Intersex and reproductive rights
Intersex, in humans and other animals, is a variation in sex characteristics including chromosomes, gonads, or genitals that do not allow an individual to be distinctly identified as male or female. Such variation may involve genital ambiguity, and combinations of chromosomal genotype and sexual phenotype other than XY-male and XX-female. Intersex persons are often subjected to involuntary "sex normalizing" surgical and hormonal treatments in infancy and childhood, often also including sterilization.
UN agencies have begun to take note. On 1 February 2013, Juan E. Méndez, the UN Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, issued a statement condemning non-consensual surgical intervention on intersex people. His report stated, "Children who are born with atypical sex characteristics are often subject to irreversible sex assignment, involuntary sterilization, involuntary genital normalizing surgery, performed without their informed consent, or that of their parents, 'in an attempt to fix their sex', leaving them with permanent, irreversible infertility and causing severe mental suffering". In May 2014, the World Health Organization issued Eliminating forced, coercive and otherwise involuntary sterilization, an interagency statement with the OHCHR, UN Women, UNAIDS, UNDP, UNFPA and UNICEF. The report references involuntary surgical "sex-normalising or other procedures" on "intersex persons", questions the medical necessity of such treatments and patients' ability to consent, and notes the weak evidence base. It recommends a range of guiding principles to prevent compulsory sterilization in medical treatment, including ensuring patient autonomy in decision-making, non-discrimination, accountability and access to remedies.
Youth rights and access
In many jurisdictions minors require parental consent or parental notification in order to access various reproductive services, such as contraception, abortion, gynecological consultations, testing for STDs etc. The requirement that minors have parental consent/notification for testing for HIV/AIDS is especially controversial, particularly in areas where the disease is endemic, and it is a sensitive subject. Balancing minors' rights versus parental rights is considered an ethical problem in medicine and law, and there have been many court cases on this issue in the US. An important concept recognized since 1989 by the Convention on the Rights of the Child is that of the evolving capacities of a minor, namely that minors should, in accordance with their maturity and level of understanding, be involved in decisions that affect them.
Youth are often denied equal access to reproductive health services because health workers view adolescent sexual activity as unacceptable, or see sex education as the responsibility of parents. Providers of reproductive health services have little accountability to youth clients, a primary factor in denying youth access to reproductive health care. In many countries, regardless of legislation, minors are denied even the most basic reproductive care if they are not accompanied by parents: in India, for instance, in 2017, a 17-year-old girl who had been rejected by her family due to her pregnancy was also turned away by hospitals and gave birth in the street. In recent years the lack of reproductive rights for adolescents has been a concern of international organizations such as UNFPA.
Mandatory involvement of parents in cases where the minor has sufficient maturity to understand their situation is considered by health organizations to be a violation of minors' rights and detrimental to their health. The World Health Organization has criticized parental consent/notification laws:
Discrimination in health care settings takes many forms and is often manifested when an individual or group is denied access to health care services that are otherwise available to others. It can also occur through denial of services that are only needed by certain groups, such as women. Examples include specific individuals or groups being subjected to physical and verbal abuse or violence; involuntary treatment; breaches of confidentiality and/or denial of autonomous decision-making, such as the requirement of consent to treatment by parents, spouses or guardians; and lack of free and informed consent. ... Laws and policies must respect the principles of autonomy in health care decision-making; guarantee free and informed consent, privacy and confidentiality; prohibit mandatory HIV testing; prohibit screening procedures that are not of benefit to the individual or the public; and ban involuntary treatment and mandatory third-party authorization and notification requirements.
According to UNICEF: "When dealing with sexual and reproductive health, the obligation to inform parents and obtain their consent becomes a significant barrier with consequences for adolescents’ lives and for public health in general." One specific issue which is seen as a form of hypocrisy of legislators is that of having a higher age of medical consent for the purpose of reproductive and sexual health than the age of sexual consent – in such cases the law allows youth to engage in sexual activity, but does not allow them to consent to medical procedures that may arise from being sexually active; UNICEF states that "On sexual and reproductive health matters, the minimum age of medical consent should never be higher than the age of sexual consent."
Levels of youth sexual education in Uganda are relatively low. Comprehensive sex education is not generally taught in schools; even if it were, the majority of young people do not stay in school after the age of fifteen, so information would be limited regardless.
Africa experiences high rates of unintended pregnancy, along with high rates of HIV/AIDS. Young women aged 15–24 are eight times more likely to have HIV/AIDS than young men. Sub-Saharan Africa is the world region most affected by HIV/AIDS, with approximately 25 million people living with HIV in 2015. Sub-Saharan Africa accounts for two-thirds of the global total of new HIV infections.
Attempted abortions and unsafe abortions are a risk for youth in Africa. On average, there are 2.4 million unsafe abortions in East Africa, 1.8 million in Western Africa, over 900,000 in Middle Africa, and over 100,000 in Southern Africa each year.
In Uganda, abortion is illegal except to save the mother's life. However, 78% of teenagers report knowing someone who has had an abortion and the police do not always prosecute everyone who has an abortion. An estimated 22% of all maternal deaths in the area stem from illegal, unsafe abortions.
Sweden has the highest percentage of lifetime contraceptive use, with 96% of its inhabitants claiming to have used birth control at some point in their life. Sweden also has a high self-reported rate of postcoital pill use. A 2007 anonymous survey of Swedish 18-year-olds showed that three out of four youth were sexually active, with 5% reporting having had an abortion and 4% reporting the contraction of an STI.
In the European Union, reproductive rights are protected through the European Convention on Human Rights and its jurisprudence, as well as the Convention on preventing and combating violence against women and domestic violence (the Istanbul Convention). However, these rights are denied or restricted by the laws, policies and practices of member states. Some countries criminalize medical staff, have stricter regulations than the international norm, or exclude legal abortion and contraception from public health insurance. A study conducted by Policy Departments, at the request of the European Parliament Committee on Women's Rights and Gender Equality, recommends that the EU strengthen the legal framework on equal access to sexual and reproductive health goods and services.
Latin America has come to international attention due to its harsh anti-abortion laws. Latin America is home to some of the few countries in the world with a complete ban on abortion, without an exception for saving maternal life. In some of these countries, particularly in Central America, enforcement of such laws is very aggressive: El Salvador and Nicaragua have drawn international attention for strong enforcement of their complete bans on abortion. In 2017, Chile relaxed its total ban, allowing abortion to be performed when the woman's life is in danger, when the fetus is unviable, or in cases of rape.
In Ecuador, education and class play a large role in determining which young women become pregnant and which do not: 50% of young women who are illiterate become pregnant, compared with 11% of girls with secondary education. The same is true of poorer individuals: 28% of young women in poorer households become pregnant, compared with only 11% in wealthier households. Furthermore, access to reproductive rights, including contraceptives, is limited due to age and perceptions of female morality. Health care providers often discuss contraception theoretically, not as a device to be used on a regular basis. Decisions concerning sexual activity often involve secrecy and taboos, as well as a lack of access to accurate information. Tellingly, young women have much easier access to maternal healthcare than to contraceptive help, which helps explain high pregnancy rates in the region.
Adolescent pregnancies in Latin America number over a million each year.
In the United States, among sexually experienced teenagers, 78% of teenage females and 85% of teenage males used contraception the first time they had sex; 86% and 93% of these same females and males, respectively, reported using contraception the last time they had sex. The male condom is the most commonly used method during first sex, although 54% of young women in the United States rely upon the pill.
Young people in the U.S. are no more sexually active than individuals in other developed countries, but they are significantly less knowledgeable about contraception and safe sex practices. As of 2006, only twenty states required sex education in schools – of these, only ten required information about contraception. On the whole, less than 10% of American students receive sex education that includes topical coverage of abortion, homosexuality, relationships, pregnancy, and STI prevention. Abstinence-only education was used throughout much of the United States in the 1990s and early 2000s. Based upon the moral principle that sex outside of marriage is unacceptable, the programs often misled students about their rights to have sex, the consequences, and prevention of pregnancy and STIs.
Abortion in the United States has been legal since the Supreme Court decision Roe v. Wade, which decriminalised abortion nationwide in 1973 and established a minimal period during which abortion is legal (with more or fewer restrictions throughout the pregnancy). That basic framework, modified in Planned Parenthood v. Casey (1992), remains nominally in place, although the effective availability of abortion varies significantly from state to state, as many counties have no abortion providers. Planned Parenthood v. Casey held that a law cannot place legal restrictions imposing an undue burden for "the purpose or effect of placing a substantial obstacle in the path of a woman seeking an abortion of a nonviable fetus." Abortion remains a controversial political issue, and regular attempts to restrict it occur in most states. One such attempt, originating in Texas, led to the Supreme Court case of Whole Woman's Health v. Hellerstedt (2016), in which several Texas restrictions were struck down.
Lack of knowledge about rights
One of the reasons why reproductive rights are poorly realised in many places is that the vast majority of the population does not know what the law is. Not only are ordinary people uninformed, but so are medical doctors. A study of medical doctors in Brazil found considerable ignorance and misunderstanding of the law on abortion (which is severely restricted but not completely illegal). In Ghana, abortion, while restricted, is permitted on several grounds, yet only 3% of pregnant women and 6% of those seeking an abortion were aware of the legal status of abortion. In Nepal, abortion was legalized in 2002, but a study in 2009 found that only half of women knew that it had been legalized. Many people also do not understand the laws on sexual violence: in Hungary, where marital rape was made illegal in 1997, a study in 2006 found that 62% of people did not know that marital rape was a crime. The United Nations Development Programme states that, in order to advance gender justice, "Women must know their rights and be able to access legal systems", and the 1993 UN Declaration on the Elimination of Violence Against Women states at Art. 4 (d) [...] "States should also inform women of their rights in seeking redress through such mechanisms".
Gender equality and violence against women
Addressing issues of gender-based violence is crucial for attaining reproductive rights. The United Nations Population Fund refers to "Equality and equity for men and women, to enable individuals to make free and informed choices in all spheres of life, free from discrimination based on gender" and "Sexual and reproductive security, including freedom from sexual violence and coercion, and the right to privacy," as part of achieving reproductive rights, and states that the right to liberty and security of the person, which is fundamental to reproductive rights, obliges states to:
- Take measures to prevent, punish and eradicate all forms of gender-based violence
- Eliminate female genital mutilation/cutting
- Gender and Reproductive Rights (GRR) aims to promote and protect human rights and gender equality as they relate to sexual and reproductive health by developing strategies and mechanisms for promoting gender equity and equality and human rights in the Department's global and national activities, as well as within the functioning and priority-setting of the Department itself.
- Violence against women violates women's rights to life, physical and mental integrity, to the highest attainable standard of health, to freedom from torture and it violates their sexual and reproductive rights.
One key issue for achieving reproductive rights is the criminalization of sexual violence. If a woman is not protected from forced sexual intercourse, she is not protected from forced pregnancy, namely pregnancy from rape. In order for a woman to have reproductive rights, she must have the right to choose with whom and when to reproduce, and, first of all, to decide whether, when, and under what circumstances to be sexually active. In many countries, these rights of women are not respected, because women do not have a choice in regard to their partner, with forced marriage and child marriage being common in parts of the world; nor do they have any rights in regard to sexual activity, as many countries do not allow women to refuse to engage in sexual intercourse when they do not want to (because marital rape is not criminalized in those countries) or to engage in consensual sexual intercourse if they want to (because sex outside marriage is illegal in those countries). In addition to legal barriers, there are also social barriers, because in many countries complete sexual subordination of a woman to her husband is expected (for instance, in one survey 74% of women in Mali said that a husband is justified in beating his wife if she refuses to have sex with him), while sexual or romantic relations disapproved of by family members, or sex outside marriage generally, can result in serious violence, such as honor killings.
According to the CDC, "HIV stands for human immunodeficiency virus. It weakens a person’s immune system by destroying important cells that fight disease and infection. No effective cure exists for HIV. But with proper medical care, HIV can be controlled." HIV amelioration is an important aspect of reproductive rights because the virus can be transmitted from mother to child during pregnancy or birth, or via breast milk.
The WHO states that: "All women, including those with HIV, have the right "to decide freely and responsibly on the number and spacing of their children and to have access to the information, education and means to enable them to exercise these rights"". The reproductive rights of people living with HIV, and their health, are very important. The link between HIV and reproductive rights exists in regard to four main issues:
- prevention of unwanted pregnancy
- help to plan wanted pregnancy
- healthcare during and after pregnancy
- access to abortion services
Child and forced marriage
The WHO states that the reproductive rights and health of girls in child marriages are negatively affected. UNFPA calls child marriage a "human rights violation" and states that in developing countries, one in every three girls is married before reaching age 18, and one in nine is married under age 15. A forced marriage is a marriage in which one or more of the parties is married without his or her consent or against his or her will. The Istanbul Convention, the first legally binding instrument in Europe in the field of violence against women and domestic violence, requires countries which ratify it to prohibit forced marriage (Article 37) and to ensure that forced marriages can be easily voided without further victimization (Article 32).
Sexual violence in armed conflict
Sexual violence in armed conflict is sexual violence committed by combatants during armed conflict, war, or military occupation often as spoils of war; but sometimes, particularly in ethnic conflict, the phenomenon has broader sociological motives. It often includes gang rape. Rape is often used as a tactic of war and a threat to international security. Sexual violence in armed conflict is a violation of reproductive rights, and often leads to forced pregnancy and sexually transmitted infections. Such sexual violations affect mostly women and girls, but rape of men can also occur, such as in Democratic Republic of the Congo.
Maternal death is defined by the World Health Organization (WHO) as "the death of a woman while pregnant or within 42 days of termination of pregnancy, irrespective of the duration and site of the pregnancy, from any cause related to or aggravated by the pregnancy or its management but not from accidental or incidental causes." It is estimated that in 2015, about 303,000 women died during and following pregnancy and childbirth, and 99% of such deaths occur in developing countries.
Birth control, also known as contraception and fertility control, is a method or device used to prevent pregnancy. Birth control has been used since ancient times, but effective and safe methods of birth control only became available in the 20th century. Planning, making available, and using birth control is called family planning. Some cultures limit or discourage access to birth control because they consider it to be morally, religiously, or politically undesirable.
All birth control methods meet opposition, especially religious opposition, in some parts of the world. Opposition does not only target modern methods, but also 'traditional' ones; for example, the Quiverfull movement, a conservative Christian ideology, encourages the maximization of procreation, and opposes all forms of birth control, including natural family planning.
According to a worldwide study by WHO and the Guttmacher Institute, about 25 million unsafe abortions (45% of all abortions) occurred every year between 2010 and 2014. 97% of unsafe abortions occur in developing countries in Africa, Asia and Latin America. By contrast, most abortions that take place in Western and Northern Europe and North America are safe.
The Committee on the Elimination of Discrimination against Women considers the criminalization of abortion to be among the "violations of women's sexual and reproductive health and rights" and a form of "gender-based violence"; paragraph 18 of its General recommendation No. 35 on gender-based violence against women, updating general recommendation No. 19, states that: "Violations of women's sexual and reproductive health and rights, such as forced sterilizations, forced abortion, forced pregnancy, criminalisation of abortion, denial or delay of safe abortion and post-abortion care, forced continuation of pregnancy, abuse and mistreatment of women and girls seeking sexual and reproductive health information, goods and services, are forms of gender based violence that, depending on the circumstances, may amount to torture or cruel, inhuman or degrading treatment." The same General Recommendation also urges countries at paragraph 31 to [...] "In particular, repeal: a) Provisions that allow, tolerate or condone forms of gender-based violence against women, including [...] legislation that criminalises abortion."
An article from the World Health Organization calls safe, legal abortion a "fundamental right of women, irrespective of where they live" and unsafe abortion a "silent pandemic". The article states "ending the silent pandemic of unsafe abortion is an urgent public-health and human-rights imperative." It also states "access to safe abortion improves women's health, and vice versa, as documented in Romania during the regime of President Nicolae Ceaușescu" and "legalisation of abortion on request is a necessary but insufficient step toward improving women's health" citing that in some countries, such as India where abortion has been legal for decades, access to competent care remains restricted because of other barriers. WHO's Global Strategy on Reproductive Health, adopted by the World Health Assembly in May 2004, noted: "As a preventable cause of maternal mortality and morbidity, unsafe abortion must be dealt with as part of the MDG on improving maternal health and other international development goals and targets." The WHO's Development and Research Training in Human Reproduction (HRP), whose research concerns people's sexual and reproductive health and lives, has an overall strategy to combat unsafe abortion that comprises four inter-related activities:
- to collate, synthesize and generate scientifically sound evidence on unsafe abortion prevalence and practices;
- to develop improved technologies and implement interventions to make abortion safer;
- to translate evidence into norms, tools and guidelines;
- and to assist in the development of programmes and policies that reduce unsafe abortion and improve access to safe abortion and high quality post-abortion care.
The UN estimated in 2017 that repealing anti-abortion laws would save the lives of nearly 50,000 women a year. 209,519 abortions take place in England and Wales alone. Unsafe abortions take place primarily in countries where abortion is illegal, but they also occur in countries where it is legal. Despite its legal status, abortion is de facto hardly an option for women where most doctors are conscientious objectors; other barriers include lack of knowledge that abortion is legal, lower socioeconomic status and spatial disparities. These practical limitations have raised concern; the UN, in its 2017 resolution on Intensification of efforts to prevent and eliminate all forms of violence against women and girls: domestic violence, urged states to guarantee access to "safe abortion where such services are permitted by national law". In 2008, Human Rights Watch stated that "In fact, even where abortion is permitted by law, women often have severely limited access to safe abortion services because of lack of proper regulation, health services, or political will" and estimated that "Approximately 13 percent of maternal deaths worldwide are attributable to unsafe abortion—between 68,000 and 78,000 deaths annually."
The Maputo Protocol, which was adopted by the African Union in the form of a protocol to the African Charter on Human and Peoples' Rights, states at Article 14 (Health and Reproductive Rights) that: "(2). States Parties shall take all appropriate measures to: [...] c) protect the reproductive rights of women by authorising medical abortion in cases of sexual assault, rape, incest, and where the continued pregnancy endangers the mental and physical health of the mother or the life of the mother or the foetus." The Maputo Protocol is the first international treaty to recognize abortion, under certain conditions, as a woman's human right.
General comment No. 36 (2018) on article 6 of the International Covenant on Civil and Political Rights, on the right to life, adopted by the Human Rights Committee in 2018, defines for the first time a human right to abortion in certain circumstances (though such UN general comments are considered soft law and, as such, are not legally binding):
Although States parties may adopt measures designed to regulate voluntary terminations of pregnancy, such measures must not result in violation of the right to life of a pregnant woman or girl, or her other rights under the Covenant. Thus, restrictions on the ability of women or girls to seek abortion must not, inter alia, jeopardize their lives, subject them to physical or mental pain or suffering which violates article 7, discriminate against them or arbitrarily interfere with their privacy. States parties must provide safe, legal and effective access to abortion where the life and health of the pregnant woman or girl is at risk, and where carrying a pregnancy to term would cause the pregnant woman or girl substantial pain or suffering, most notably where the pregnancy is the result of rape or incest or is not viable. In addition, States parties may not regulate pregnancy or abortion in all other cases in a manner that runs contrary to their duty to ensure that women and girls do not have to undertake unsafe abortions, and they should revise their abortion laws accordingly. For example, they should not take measures such as criminalizing pregnancies by unmarried women or apply criminal sanctions against women and girls undergoing abortion or against medical service providers assisting them in doing so, since taking such measures compel women and girls to resort to unsafe abortion. States parties should not introduce new barriers and should remove existing barriers that deny effective access by women and girls to safe and legal abortion, including barriers caused as a result of the exercise of conscientious objection by individual medical providers.
When negotiating the Cairo Programme of Action at the 1994 International Conference on Population and Development (ICPD), the issue was so contentious that delegates eventually decided to omit any recommendation to legalize abortion, instead advising governments to provide proper post-abortion care and to invest in programs that will decrease the number of unwanted pregnancies.
In April 2008 the Parliamentary Assembly of the Council of Europe, a group comprising members from 47 European countries, adopted a resolution calling for the decriminalization of abortion within reasonable gestational limits and guaranteed access to safe abortion procedures. The nonbinding resolution was passed on 16 April by a vote of 102 to 69.
During and after the ICPD, some interested parties attempted to interpret the term "reproductive health" in the sense that it implies abortion as a means of family planning or, indeed, a right to abortion. These interpretations, however, do not reflect the consensus reached at the Conference. For the European Union, where legislation on abortion is certainly less restrictive than elsewhere, the Council Presidency has clearly stated that the Council's commitment to promote "reproductive health" did not include the promotion of abortion. Likewise, the European Commission, in response to a question from a Member of the European Parliament, clarified:
The term reproductive health was defined by the United Nations (UN) in 1994 at the Cairo International Conference on Population and Development. All Member States of the Union endorsed the Programme of Action adopted at Cairo. The Union has never adopted an alternative definition of 'reproductive health' to that given in the Programme of Action, which makes no reference to abortion.
Let us get a false issue off the table: the US does not seek to establish a new international right to abortion, and we do not believe that abortion should be encouraged as a method of family planning.
Some years later, the position of the U.S. Administration in this debate was reconfirmed by U.S. Ambassador to the UN, Ellen Sauerbrey, when she stated at a meeting of the UN Commission on the Status of Women that: "nongovernmental organizations are attempting to assert that Beijing in some way creates or contributes to the creation of an internationally recognized fundamental right to abortion". She added: "There is no fundamental right to abortion. And yet it keeps coming up largely driven by NGOs trying to hijack the term and trying to make it into a definition".
Collaborative research from the Institute of Development Studies states that "access to safe abortion is a matter of human rights, democracy and public health, and the denial of such access is a major cause of death and impairment, with significant costs to [international] development". The research highlights the inequities of access to safe abortion both globally and nationally and emphasises the importance of global and national movements for reform to address this. The shift by campaigners of reproductive rights from an issue-based agenda (the right to abortion), to safe, legal abortion not only as a human right, but bound up with democratic and citizenship rights, has been an important way of reframing the abortion debate and reproductive justice agenda.
Meanwhile, the European Court of Human Rights complicated the question even more through a landmark judgment (A, B and C v. Ireland), in which it stated that the denial of abortion for health and/or well-being reasons is an interference with an individual's right to respect for private and family life under Article 8 of the European Convention on Human Rights, an interference which in some cases can be justified.
A desire to achieve certain population targets has, throughout history, resulted in severely abusive practices where governments ignored human rights and enacted aggressive demographic policies. In the 20th century, several authoritarian governments sought either to increase or to decrease birth rates, often through forceful intervention. One of the most notorious natalist policies was that of communist Romania in the period 1967–1990 under communist leader Nicolae Ceaușescu, who adopted a very aggressive natalist policy that included outlawing abortion and contraception, routine pregnancy tests for women, taxes on childlessness, and legal discrimination against childless people. Ceaușescu's policy resulted in the deaths of over 9,000 women from illegal abortions, large numbers of children placed into Romanian orphanages by parents who could not cope with raising them, street children in the 1990s (when many orphanages were closed and the children ended up on the streets), and overcrowding in homes and schools. The irony of Ceaușescu's aggressive natalist policy was that a generation that might otherwise not have been born would eventually lead the Romanian Revolution, which overthrew him and had him executed.
In stark contrast to Ceaușescu's natalist policy was China's one-child policy, in effect from 1978 to 2015, which included abuses such as forced abortions. This policy has also been deemed responsible for the common practice of sex-selective abortion, which led to an imbalanced sex ratio in the country.
From the 1970s to the 1980s, tension grew between women's health activists, who advanced women's reproductive rights as part of a human rights-based approach, and population control advocates. At the 1984 UN World Population Conference in Mexico City, population control policies came under attack from women's health advocates who argued that the policies' narrow focus led to coercion and decreased quality of care, and that these policies ignored the varied social and cultural contexts in which family planning was provided in developing countries. In the 1980s the HIV/AIDS epidemic forced a broader discussion of sex into the public discourse in many countries, leading to more emphasis on reproductive health issues beyond reducing fertility. The growing opposition to the narrow population control focus led to a significant departure in the early 1990s from past population control policies. In the United States, abortion opponents have begun to foment conspiracy theories about reproductive rights advocates, accusing them of advancing a racist agenda of eugenics and of trying to reduce the African American birth rate in the U.S.
Female genital mutilation
Female genital mutilation (FGM) is defined as "all procedures that involve partial or total removal of the external female genitalia, or other injury to the female genital organs for non-medical reasons." The procedure has no health benefits, and can cause severe bleeding, problems urinating, cysts, infections, complications in childbirth, and increased risk of newborn deaths. It is performed for traditional, cultural or religious reasons in many parts of the world, especially in Africa. The Istanbul Convention prohibits FGM (Article 38).
Bride kidnapping or buying and reproductive slavery
Bride kidnapping, or marriage by abduction, is the practice whereby a woman or girl is abducted for the purpose of a forced marriage. Bride kidnapping has been practiced historically in many parts of the world, and it continues to occur today in some places, especially in Central Asia and the Caucasus, in countries such as Kyrgyzstan, Tajikistan, Kazakhstan, Turkmenistan, Uzbekistan and Armenia, as well as in Ethiopia. Bride kidnapping is often preceded or followed by rape (which may result in pregnancy) in order to force the marriage, a practice also supported by "marry-your-rapist" laws (laws regarding sexual violence, abduction or similar acts, whereby the perpetrator avoids prosecution or punishment if he marries the victim). Abduction of women may happen on an individual scale or on a mass scale. Raptio is a Latin term referring to the large-scale abduction of women, usually for marriage or sexual slavery, particularly during wartime.
Bride price, also called bridewealth, is money, property, or another form of wealth paid by a groom or his family to the parents of the woman he marries. The practice of bride price sometimes leads to parents selling young daughters into marriage and to trafficking. Bride price is common across Africa. Such forced marriages often lead to sexual violence and forced pregnancy. In northern Ghana, for example, the payment of bride price signifies a woman's requirement to bear children, and women using birth control face risks of threats and coercion.
The 1956 Supplementary Convention on the Abolition of Slavery, the Slave Trade, and Institutions and Practices Similar to Slavery defines "institutions and practices similar to slavery" to include:
c) Any institution or practice whereby:
- (i) A woman, without the right to refuse, is promised or given in marriage on payment of a consideration in money or in kind to her parents, guardian, family or any other person or group; or
- (ii) The husband of a woman, his family, or his clan, has the right to transfer her to another person for value received or otherwise; or
- (iii) A woman on the death of her husband is liable to be inherited by another person;
Laws in many countries and states require sperm donors to be either anonymous or known to the recipient, or the laws restrict the number of children each donor may father. Although many donors choose to remain anonymous, new technologies such as the Internet and DNA technology have opened up new avenues for those wishing to know more about the biological father, siblings and half-siblings.
Ethnic minority women
In Peru, President Alberto Fujimori (in office from 1990 to 2000) has been accused of genocide and crimes against humanity as a result of the Programa Nacional de Población, a sterilization program put in place by his administration. During his presidency, Fujimori put in place a program of forced sterilizations against indigenous people (mainly the Quechuas and the Aymaras), in the name of a "public health plan", presented on 28 July 1995.
During the 20th century, forced sterilization of Roma women was practiced in European countries, especially in former communist countries, and there are allegations that these practices continue unofficially in some countries, such as the Czech Republic, Bulgaria, Hungary and Romania. In V.C. v. Slovakia, the European Court of Human Rights ruled in favor of a Roma woman who was the victim of forced sterilization in a state hospital in Slovakia in 2000.
Forced sterilization was practiced in the United States beginning in the 19th century. During the Progressive Era, ca. 1890 to 1920, the United States was the first country to concertedly undertake compulsory sterilization programs for the purpose of eugenics. Thomas C. Leonard, professor at Princeton University, describes American eugenics and sterilization as ultimately rooted in economic arguments and, further, as a central element of Progressivism alongside wage controls, restricted immigration, and the introduction of pension programs. The heads of the programs were avid proponents of eugenics and frequently argued for their programs, which achieved some success nationwide, mainly in the first half of the 20th century.
Compulsory sterilization has been practiced historically in parts of Canada. Two Canadian provinces (Alberta and British Columbia) performed compulsory sterilization programs in the 20th century with eugenic aims. Canadian compulsory sterilization operated via the same overall mechanisms of institutionalization, judgment, and surgery as the American system. However, one notable difference is in the treatment of non-insane criminals. Canadian legislation never allowed for punitive sterilization of inmates.
The Sexual Sterilization Act of Alberta was enacted in 1928 and repealed in 1972. In 1995, Leilani Muir sued the Province of Alberta for forcing her to be sterilized against her will and without her permission in 1959. Since Muir’s case, the Alberta government has apologized for the forced sterilization of over 2,800 people. Nearly 850 Albertans who were sterilized under the Sexual Sterilization Act were awarded CA$142 million in damages.
Roman Catholic Church
The Catholic Church is opposed to artificial contraception, abortion, and sexual intercourse outside marriage. This belief dates back to the first centuries of Christianity. While Roman Catholicism is not the only religion with such views, its religious doctrine is very powerful in influencing countries where most of the population is Catholic. The few countries of the world with complete bans on abortion are mostly Catholic-majority countries, and in Europe strict restrictions on abortion exist in the Catholic-majority countries of Malta (complete ban), Andorra, San Marino and Liechtenstein, and to a lesser extent Poland and Monaco.
Some of the countries of Central America, notably El Salvador, have also come to international attention due to very forceful enforcement of the anti-abortion laws. El Salvador has received repeated criticism from the UN. The Office of the UN High Commissioner for Human Rights (OHCHR) named the law "one of the most draconian abortion laws in the world", and urged liberalization, and Zeid bin Ra'ad, the United Nations High Commissioner for Human Rights, stated that he was "appalled that as a result of El Salvador’s absolute prohibition on abortion, women are being punished for apparent miscarriages and other obstetric emergencies, accused and convicted of having induced termination of pregnancy".
Criticism surrounds certain forms of anti-abortion activism. Anti-abortion violence is a serious issue in some parts of the world, especially in North America. It is recognized as single-issue terrorism. Numerous organizations have also recognized anti-abortion extremism as a form of Christian terrorism.
Incidents include vandalism, arson, and bombings of abortion clinics, such as those committed by Eric Rudolph (1996–98), and murders or attempted murders of physicians and clinic staff, as committed by James Kopp (1998), Paul Jennings Hill (1994), Scott Roeder (2009), Michael F. Griffin (1993), and Peter James Knight (2001). Since 1978, in the US, anti-abortion violence includes at least 11 murders of medical staff, 26 attempted murders, 42 bombings, and 187 arsons.
Some opponents of legalized abortion view the term "reproductive rights" as a euphemism to sway emotions in favor of abortion. National Right to Life has referred to "reproductive rights" as a "fudge term" and "the code word for abortion rights."
- Cook, Rebecca J.; Fathalla, Mahmoud F. (1996). "Advancing Reproductive Rights Beyond Cairo and Beijing". International Family Planning Perspectives. 22 (3): 115–21. doi:10.2307/2950752. JSTOR 2950752.
- "Gender and reproductive rights". WHO.int. Archived from the original on 26 July 2009. Retrieved 29 August 2010.
- Amnesty International USA (2007). "Stop Violence Against Women: Reproductive rights". SVAW. Amnesty International USA. Archived from the original on 20 January 2008. Retrieved 8 December 2007.
- "Tackling the taboo of menstrual hygiene in the European Region". WHO.int. 8 November 2018. Archived from the original on 28 July 2019.
- Singh, Susheela (2018). "Inclusion of menstrual health in sexual and reproductive health and rights — Authors' reply". The Lancet Child & Adolescent Health. 2 (8): e19. doi:10.1016/S2352-4642(18)30219-0. PMID 30119725.
- Freedman, Lynn P.; Isaacs, Stephen L. (1993). "Human Rights and Reproductive Choice". Studies in Family Planning. 24 (1): 18–30. doi:10.2307/2939211. JSTOR 2939211. PMID 8475521.
- "Template". Nocirc.org. Retrieved 19 August 2017.
- "Proclamation of Teheran". International Conference on Human Rights. 1968. Archived from the original on 17 October 2007. Retrieved 8 November 2007.
- Dorkenoo, Efua. (1995). Cutting the rose : female genital mutilation : the practice and its prevention. Minority Rights Publications. ISBN 1873194609. OCLC 905780971.
- Center for Reproductive Rights, International Legal Program, Establishing International Reproductive Rights Norms: Theory for Change, US CONG. REC. 108th CONG. 1 Sess. E2534 E2547 (Rep. Smith) (8 December 2003):
We have been leaders in bringing arguments for a woman's right to choose abortion within the rubric of international human rights. However, there is no binding hard norm that recognizes women's right to terminate a pregnancy. (...) While there are hard norms prohibiting sex discrimination that apply to girl adolescents, these are problematic since they must be applied to a substantive right (i.e., the right to health) and the substantive reproductive rights of adolescents are not `hard' (yet!). There are no hard norms on age discrimination that would protect adolescents' ability to exercise their rights to reproductive health, sexual education, or reproductive decisionmaking. In addition, there are no hard norms prohibiting discrimination based on marital status, which is often an issue with respect to unmarried adolescents' access to reproductive health services and information. The soft norms support the idea that the hard norms apply to adolescents under 18. They also fill in the substantive gaps in the hard norms with respect to reproductive health services and information as well as adolescents' reproductive autonomy. (...) There are no hard norms in international human rights law that directly address HIV/AIDS directly. At the same time, a number of human rights bodies have developed soft norms to secure rights that are rendered vulnerable by the HIV/AIDS epidemic. (...) Practices with implications for women's reproductive rights in relation to HIV/AIDS are still not fully covered under existing international law, although soft norms have addressed them to some extent. (...) There is a lack of explicit prohibition of mandatory testing of HIV-positive pregnant women under international law. (...) None of the global human rights treaties explicitly prohibit child marriage and no treaty prescribes an appropriate minimum age for marriage. The onus of specifying a minimum age at marriage rests with the states' parties to these treaties. (...) We have to rely extensively on soft norms that have evolved from the TMBs and that are contained in conference documents to assert that child marriage is a violation of fundamental human rights.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 1. ISBN 978-0-8265-1528-5.
- "Population Matters search on "reproductive rights"". Populationmatters.org/. Retrieved 19 August 2017.[dead link]
- "unhchr.ch". Unhchr.ch.
- "Fourth World Conference on Women, Beijing 1995". www.un.org. Retrieved 7 July 2020.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. pp. 5–6. ISBN 978-0-8265-1528-5.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 7. ISBN 978-0-8265-1528-5.
- "A/CONF.171/13: Report of the ICPD (94/10/18) (385k)". Un.org. Retrieved 19 August 2017.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 9. ISBN 978-0-8265-1528-5.
- Bunch, Charlotte; Fried, Susana (1996). "Beijing '95: Moving Women's Human Rights from Margin to Center". Signs: Journal of Women in Culture and Society. 22 (1): 200–4. doi:10.1086/495143. JSTOR 3175048. S2CID 144075825.
- Merry, S.E. (Editor M. Agosin) (2001). Women, Violence, and the Human Rights System. Women, Gender, and Human Rights: A Global Perspective. New Brunswick: Rutgers University Press. pp. 83–97.
- Nowicka, Wanda (2011). "Sexual and reproductive rights and the human rights agenda: controversial and contested". Reproductive Health Matters. 19 (38): 119–128. doi:10.1016/s0968-8080(11)38574-6. ISSN 0968-8080. PMID 22118146. S2CID 206112752.
- Rousseau, Stephanie; Morales Hudon, Anahi (2019). INDIGENOUS WOMEN'S MOVEMENTS IN LATIN AMERICA : gender and ethnicity in peru, mexico, and bolivia. PALGRAVE MACMILLAN. ISBN 978-1349957194. OCLC 1047563400.
- Solinger, Rickie (27 February 2013). Reproductive politics : what everyone needs to know. ISBN 9780199811458. OCLC 830323649.
- "About the Yogyakarta Principles". Yogyakartaprinciples.org. Archived from the original on 4 March 2016. Retrieved 19 August 2017.
- International Service for Human Rights, Majority of GA Third Committee unable to accept report on the human right to sexual education Archived 15 May 2013 at the Wayback Machine
- "The Yogyakarta Principles" Preamble and Principle 9. The Rights to Treatment with Humanity While in Detention
- United Nations General Assembly, Official Records, Third Committee, Summary record of the 29th meeting held in New York, on Monday, 25 October 2010, at 3 p.m Archived 27 September 2012 at the Wayback Machine. For instance, Malawi, speaking on behalf of all African States, argued that the Yogyakarta Principles were "controversial and unrecognized," while the representative of the Russian Federation said that they "had not been agreed to at the intergovernmental level, and which therefore could not be considered as authoritative expressions of the opinion of the international community" (para. 9, 23).
- Anderson, Natalae (22 September 2010). "Memorandum: Charging Forced Marriage as a Crime Against Humanity" (PDF). D.dccam.org. Retrieved 19 August 2017.
- "BBC NEWS – World – Americas – Mass sterilisation scandal shocks Peru". News.bbc.co.uk. 24 July 2002. Retrieved 19 August 2017.
- "Archived copy" (PDF). Archived from the original (PDF) on 4 March 2016. Retrieved 20 November 2015.CS1 maint: archived copy as title (link)
- "Archived copy". Archived from the original on 8 July 2016. Retrieved 26 September 2016.CS1 maint: archived copy as title (link)
- Wilson, K. (2017). "In the name of reproductive rights: race, neoliberalism and the embodied violence of population policies" (PDF). New Formations. 91 (91): 50–68. doi:10.3898/NEWF:91.03.2017. S2CID 148987919 – via JSTOR.
- Basu, A. (Editors C. R. a. K. McCann, Seung-kyung) (2000). Globalization of the Local/Localization of the Global: Mapping Transnational Women's Movements. In Feminist Theory Reader: Local and Global Perspectives. United Kingdom: Routledge. pp. 68–76.
- Mooney, Jadwiga E. Pieper (2009). The politics of motherhood maternity and women's rights in twentieth-century Chile. University of Pittsburgh Press. ISBN 9780822960430. OCLC 690336424.
- Bueno-Hansen, Pascha (2015). Feminist and human rights struggles in Peru : decolonizing transitional justice. Urbana: University of Illinois Press. ISBN 9780252097539. OCLC 1004369974.
- Kaplan, T. (Editor M. Agosin) (2001). Women's Rights as Human Rights: Women as Agents of Social Change. Women, Gender, and Human Rights: A Global Perspective. New Brunswick: Rutgers University Press. pp. 191–204.
- "Universal declaration of human rights". 28 May 2014. doi:10.18356/b0fc2dba-en. Cite journal requires
- Freeman, Marsha A.; Chinkin, Christine; Rudolf, Beate (1 January 2012). "Violence Against Women". The UN Convention on the Elimination of All Forms of Discrimination Against Women. 1. doi:10.5422/fso/9780199565061.003.0019.
- "U.N. Millennium Development Goals".
- "U.N. Sustainable Development Goals".
- Murray, Christopher J. L. (2015). "Shifting to Sustainable Development Goals – Implications for Global Health". New England Journal of Medicine. 373 (15): 1390–1393. doi:10.1056/NEJMp1510082. PMID 26376045.
- Bant, Astrid; Girard, Françoise (2008). "Sexuality, health, and human rights: self-identified priorities of indigenous women in Peru". Gender & Development. 16 (2): 247–256. doi:10.1080/13552070802120426. ISSN 1355-2074. S2CID 72449191.
- Amnesty International, Defenders of Sexual and Reproductive Rights Archived 2 October 2013 at the Wayback Machine; International Women’s Health Coalition and the United Nations, Campaign for an Inter-American Convention on Sexual and Reproductive Rights , Women's Health Collection, Abortion as a human right: possible strategies in unexplored territory. (Sexual Rights and Reproductive Rights), (2003); and Shanthi Dairiam, Applying the CEDAW Convention for the recognition of women's health rights, Arrows For Change, (2002). In this regard, the Center for Reproductive Rights has noted that:
Our goal is to ensure that governments worldwide guarantee women's reproductive rights out of an understanding that they are bound to do so. The two principal prerequisites for achieving this goal are: (1) the strengthening of international legal norms protecting reproductive rights; and (2) consistent and effective action on the part of civil society and the international community to enforce these norms. Each of these conditions, in turn, depends upon profound social change at the local, national and international (including regional) levels. (...) Ultimately, we must persuade governments to accept reproductive rights as binding norms. Again, our approach can move forward on several fronts, with interventions both at the national and international levels. Governments' recognition of reproductive rights norms may be indicated by their support for progressive language in international conference documents or by their adoption and implementation of appropriate national-level legislative and policy instruments. In order to counter opposition to an expansion of recognized reproductive rights norms, we have questioned the credibility of such reactionary yet influential international actors as the United States and the Holy See. Our activities to garner support for international protections of reproductive rights include: Lobbying government delegations at UN conferences and producing supporting analyses/materials; fostering alliances with members of civil society who may become influential on their national delegations to the UN; and preparing briefing papers and factsheets exposing the broad anti-woman agenda of our opposition.Center for Reproductive Rights, International Legal Program, Establishing International Reproductive Rights Norms: Theory for Change, US CONG. REC. 108th CONG. 1 Sess. E2534 E2547 (Rep. Smith) (8 December 2003)
- "[programme] Basis for action". Iisd.ca. Retrieved 17 February 2015.
- "WHO | Sexual and reproductive health and rights: a global development, health, and human rights priority". WHO. Retrieved 19 June 2019.
- United Nations, Report of the Fourth International Conference on Population and Development, Cario, 5 – 13 September 1994. Guatemala entered the following reservation:
Chapter VII: we enter a reservation on the whole chapter, for the General Assembly's mandate to the Conference does not extend to the creation or formulation of rights; this reservation therefore applies to all references in the document to "reproductive rights", "sexual rights", "reproductive health", "fertility regulation", "sexual health", "individuals", "sexual education and services for minors", "abortion in all its forms", "distribution of contraceptives" and "safe motherhood"
- "Sir David Attenborough on the roots of Climatic problems". The Independent.
- "OHCHR | Sexual and reproductive health and rights". www.ohchr.org. Retrieved 19 June 2019.
- "Women's History". Womenshistory.about.com. Retrieved 19 August 2017.
- Kirk, Okazawa-Rey 2004
- Best, Kim (Spring 1998). "Men's Reproductive Health Risks: Threats to men's fertility and reproductive health include disease, cancer and exposure to toxins". Network: 7–10. Retrieved 2 January 2008.
- McCulley Melanie G (1998). "The male abortion: the putative father's right to terminate his interests in and obligations to the unborn child". The Journal of Law and Policy. VII (1): 1–55. PMID 12666677.
- Young, Kathy (19 October 2000). "A man's right to choose". Salon.com. Retrieved 10 May 2011.
- Owens, Lisa Lucile (2013). "Coerced Parenthood as Family Policy: Feminism, the Moral Agency of Women, and Men's 'Right to Choose'". Alabama Civil Rights & Civil Liberties Law Review. 5: 1–33. SSRN 2439294.
- Traister, Rebecca. (13 March 2006). "Roe for men?" Salon.com. Retrieved 17 December 2007.
- "ROE vs. WADE… FOR MEN: Men's Center files pro-choice lawsuit in federal court". Nationalcenterformen.org.
- "U.S. Court of Appeals for the Sixth Circuit, case No. 06-11016" (PDF).
- Money, John; Ehrhardt, Anke A. (1972). Man & Woman Boy & Girl. Differentiation and dimorphism of gender identity from conception to maturity. USA: The Johns Hopkins University Press. ISBN 978-0-8018-1405-1.
- Domurat Dreger, Alice (2001). Hermaphrodites and the Medical Invention of Sex. USA: Harvard University Press. ISBN 978-0-674-00189-3.
- Resolution 1952/2013, Provision version, Children’s right to physical integrity, Council of Europe, 1 October 2013
- Involuntary or coerced sterilisation of intersex people in Australia, Australian Senate Community Affairs Committee, October 2013.
- It's time to defend intersex rights, Morgan Carpenter at Australian Broadcasting Corporation, 15 November 2013.
- Australian Parliament committee releases intersex rights report, Gay Star News, 28 October 2013.
- On the management of differences of sex development, Ethical issues relating to "intersexuality", Opinion No. 20/2012 Archived 20 June 2013 at the Wayback Machine, Swiss National Advisory Commission on Biomedical Ethics, November 2012.
- Report of the UN Special Rapporteur on Torture, Office of the UN High Commissioner for Human Rights, February 2013.
- WHO/UN interagency statement on involuntary or coerced sterilisation, Organisation Intersex International Australia, 30 May 2014.
- Eliminating forced, coercive and otherwise involuntary sterilization, An interagency statement, World Health Organization, May 2014.
- Organization, World Health. "World Health Organization – HIV and Adolescents from Guidance to Action". apps.who.int.
- Uy, Jocelyn R. "DOH backs bill allowing minor to get HIV, AIDS tests without parental consent". Newsinfo.inquirer.net.
- "Challenging parental consent laws to increase young people's access to vital HIV services – UNAIDS". Unaids.org.
- Maradiegue, Ann (2003). "Minor's Rights Versus Parental Rights: Review of Legal Issues in Adolescent Health Care". Journal of Midwifery & Women's Health. 48 (3): 170–177. doi:10.1016/S1526-9523(03)00070-9. PMID 12764301.
- "Sexual and Reproductive Rights of Young People: Autonomous decision making and confidential services" (PDF). International Planned Parenthood Federation. Retrieved 1 October 2017.
- Mugisha, Frederick (2009). "Chapter 42: HIV and AIDS, STIs and sexual health among young people". In Furlong, Andy (ed.). Handbook of Youth and Young Adulthood. Routledge. pp. 344–352. ISBN 978-0-415-44541-2.
- Lowry, Andrew (29 August 2017). "Homeless girl in India forced to give birth on street metres away from health centre: She was shivering and unable to lift and cuddle her infant". The Independent. Retrieved 1 October 2017.
- "Adolescent sexual and reproductive health – UNFPA – United Nations Population Fund". Unfpa.org.
- "Joint United Nations statement on ending discrimination in health care settings — Joint WHO/UN statement". World Health Organization. 27 June 2017. Retrieved 1 October 2017.
- Sedletzki, Vanessa (2016). "Legal minimum ages and the realization of adolescents' rights" (PDF). Unicef. Archived from the original (PDF) on 21 October 2020. Retrieved 12 October 2017.
- Lukale, Nelly (2012). "Sexual Reproductive Health and Rights for Young People in Africa". ARROWs for Change. 18 (2): 7–8.
- Knudson, Lara (2006). Reproductive Rights in a Global Context: South Africa, Uganda, Peru, Denmark, United States, Vietnam, Jordan. Nashville, TN: Vanderbilt University Press.[page needed]
- "HIV/AIDS Factsheet". World Health Organization. Retrieved 1 October 2017.
- De Irala, Jokin; Osorio, Alfonso; Carlos, Silvia; Lopez-Del Burgo, Cristina (2011). "Choice of birth control methods among European women and the role of partners and providers" (PDF). Contraception. 84 (6): 558–64. doi:10.1016/j.contraception.2011.04.004. hdl:10171/19110. PMID 22078183.
- Larsson, Margareta; Tydén, Tanja; Hanson, Ulf; Häggström-Nordin, Elisabet (2009). "Contraceptive use and associated factors among Swedish high school students". The European Journal of Contraception & Reproductive Health Care. 12 (2): 119–24. doi:10.1080/13625180701217026. PMID 17559009. S2CID 36601350.
- Anedda, Ludovica (2018). Sexual and reproductive health rights and the implication of conscientious objection : study (PDF). ISBN 978-92-846-2976-3.
- "Women's sexual and reproductive rights in Europe". Commissioner for Human Rights.
- "Chile abortion: Court approves easing total ban". BBC. 21 August 2017. Retrieved 1 October 2017.
- Freeman, Cordelia (29 August 2017). "Chile: the long road to abortion reform — After a fierce debate, one of the most restrictive reproductive laws in the world has been eased". The Independent. Retrieved 1 October 2017.
- Goicolea, Isabel (2010). "Adolescent Pregnancies in the Amazon Basin of Ecuador: A Rights and Gender Approach to Adolescents' Sexual and Reproductive Health". Global Health Action. 3: 1–11. doi:10.3402/gha.v3i0.5280. PMC 2893010. PMID 20596248.
- "Fact Sheet: Contraceptive Use in the United States". Guttmacher Institute. 4 August 2004. Retrieved 24 April 2013.
- Doan, Alesha (2007). Opposition and Intimidation: The Abortion Wars and Strategies of Political Harassment. University of Michigan Press. p. 57. ISBN 9780472069750.
- Casey, 505 U.S. at 877.
- "Strict Texas abortion law struck down". 27 June 2016 – via www.bbc.com.
- Goldman, Lisa A.; García, Sandra G.; Díaz, Juan; Yam, Eileen A. (15 November 2005). "Brazilian obstetrician-gynecologists and abortion: a survey of knowledge, opinions and practices". Reproductive Health. 2: 10. doi:10.1186/1742-4755-2-10. PMC 1308861. PMID 16288647.
- "Abortion in Ghana". 24 February 2016.
- "NEPAL: Only Half of Women Know Abortion is Legal – Inter Press Service". www.ipsnews.net.
- "Wayback Machine". 8 June 2011. Cite uses generic title (help)
- "The Eight Point Agenda: Practical, positive outcomes for girls and women in crisis" (PDF). Retrieved 10 June 2021.
- Assembly, United Nations General. "A/RES/48/104 – Declaration on the Elimination of Violence against Women – UN Documents: Gathering a body of global agreements". www.un-documents.net.
- "United Nations Population Fund | Supporting the Constellation of Reproductive Rights". UNFPA. Retrieved 17 February 2015.
- "United Nations Population Fund | State of World Population 2005". UNFPA. Retrieved 17 February 2015.
- "WHO | Gender and Reproductive Rights". Who.int. Retrieved 17 February 2015.
- "Sexual and reproductive rights | Amnesty International". Amnesty.org. 6 November 2007. Retrieved 17 February 2015.
- "WHO | Gender and human rights". Who.int. 31 January 2002. Retrieved 17 February 2015.
- "Bioline International Official Site (site up-dated regularly)". Bioline.org.br. 9 February 2015. Retrieved 17 February 2015.
- "Ethics: Honour Crimes". BBC. 1 January 1970. Retrieved 17 February 2015.
- "AIDSinfo". UNAIDS. Retrieved 4 March 2013.
- "HIV Basics | HIV/AIDS | CDC". Cdc.gov. 23 July 2018. Retrieved 5 October 2016.
- "WHO | Reproductive choices for women with HIV". Who.int. Retrieved 17 February 2015.
- "Child marriage – a threat to health". www.euro.who.int. 20 December 2012.
- "Child marriage – UNFPA – United Nations Population Fund". www.unfpa.org.
- "The Convention of Belem do Para and the Istanbul Convention : A response to violence against women worldwide" (PDF). Oas.org. Retrieved 20 November 2015.
- "Council of Europe Convention on preventing and combating violence against women and domestic violence". Retrieved 10 June 2021.
- "OHCHR | Rape: Weapon of war". www.ohchr.org. Retrieved 19 June 2019.
- Vijayan, Pillai; Ya-Chien, Wang; Arati, Maleku (2017). "Women, war, and reproductive health in developing countries". Social Work in Health Care. 56 (1): 28–44. doi:10.1080/00981389.2016.1240134. PMID 27754779. S2CID 3507352.
- Melhado, L (2010). "Rates of Sexual Violence Are High in Democratic Republic of the Congo". International Perspectives on Sexual and Reproductive Health. 36 (4): 210. JSTOR 41038670.
- Autesserre, Séverine (2012). "Dangerous Tales: Dominant Narratives on the Congo and their Unintended Consequences". African Affairs. 111 (443): 202–222. doi:10.1093/afraf/adr080.
- Country Comparison: Maternal Mortality Rate in The CIA World Factbook. Date of Information: 2010
- "WHO – Maternal mortality ratio (per 100 000 live births)". www.who.int.
- "Maternal mortality". World Health Organization.
- "Definition of Birth control". MedicineNet. Archived from the original on 6 August 2012. Retrieved 9 August 2012.
- Hanson, S.J.; Burke, Anne E. (21 December 2010). "Fertility control: contraception, sterilization, and abortion". In Hurt, K. Joseph; Guile, Matthew W.; Bienstock, Jessica L.; Fox, Harold E.; Wallach, Edward E. (eds.). The Johns Hopkins manual of gynecology and obstetrics (4th ed.). Philadelphia: Wolters Kluwer Health/Lippincott Williams & Wilkins. pp. 382–395. ISBN 978-1-60547-433-5.
- Oxford English Dictionary. Oxford University Press. June 2012.
- World Health Organization (WHO). "Family planning". Health topics. World Health Organization (WHO). Archived from the original on 18 March 2016. Retrieved 28 March 2016.
- Joyce, Kathryn (9 November 2006). "Arrows for the War". The Nation. ISSN 0027-8378. Retrieved 19 June 2019.
- "Worldwide, an estimated 25 million unsafe abortions occur each year". World Health Organization.
- Committee on the Elimination of Discrimination against Women (14 July 2017). "General recommendation No. 35 on gender-based violence against women, updating general recommendation No. 19" (PDF). UN Human Rights. Retrieved 23 October 2020.
- "WHO: Unsafe Abortion – The Preventable Pandemic". Archived from the original on 13 January 2010. Retrieved 16 January 2010.
- "WHO | Preventing unsafe abortion". World Health Organization. Retrieved 17 February 2015.
- "HRP | World Health Organization". World Health Organization. Retrieved 17 February 2015.
- United Nations News Service Section (27 September 2016). "UN News – Repealing anti-abortion laws would save the lives of nearly 50,000 women a year – UN experts". UN News Service Section.
- "Abortion Statistics, England and Wales: 2019" (PDF). 11 June 2020.
- Kirchgaessner, Stephanie; Duncan, Pamela; Nardelli, Alberto; Robineau, Delphine (11 March 2016). "Seven in 10 Italian gynaecologists refuse to carry out abortions". The Guardian.
- Milekic, Sven. "Doctors' Refusal to Perform Abortions Divides Croatia". Balkan Insight.
- Upadhyay, Ushma D.; Jones, Rachel K.; Weitz, Tracy A. (2013). "At What Cost? Payment for Abortion Care by U.S. Women". Women's Health Issues. 23 (3): 173–178. doi:10.1016/j.whi.2013.03.001. PMID 23660430.
- Bearak, Jonathan M.; Burke, Kristen L.; Jones, Rachel K. (2017). "Disparities and change over time in distance women would need to travel to have an abortion in the USA: a spatial analysis". The Lancet Public Health. 2 (11): 493–500. doi:10.1016/S2468-2667(17)30158-5. PMC 5943037. PMID 29253373.
- "Resolution adopted by the General Assembly on 19 December 2016: 71/170. Intensification of efforts to prevent and eliminate all forms of violence against women and girls: domestic violence". United Nations. 7 February 2017. Retrieved 23 October 2020.
- "Women's Human Rights: Abortion". Human Rights Watch.
- "Protocol to the African Charter on Human and Peoples' Rights on the Rights of Women in Africa / Legal Instruments / ACHPR". ACHPR. Retrieved 19 June 2019.
- "General Comment No. 2 on Article 14.1 (a), (b), (c) and (f) and Article 14. 2 (a) and (c) of the Protocol to the African Charter on Human and Peoples' Rights on the Rights of Women in Africa / Legal Instruments / ACHPR". ACHPR. Retrieved 19 June 2019.
- Grover, Leena; Keller, Helen (April 2012). "General Comments of the Human Rights Committee and their legitimacy". UN Human Rights Treaty Bodies: Law and Legitimacy. Retrieved 19 June 2019.
- Office of the High Commissioner for Human Rights (30 October 2018). "General comment No. 36 (2018) on article 6 of the International Covenant on Civil and Political Rights, on the right to life" (PDF). UN Human Rights. Retrieved 23 October 2020.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 6. ISBN 978-0-8265-1528-5.
- "Council of Europe Urges Member States to Decriminalize Abortion". Guttmacher.org. 18 April 2008. Retrieved 17 February 2015.
- European Parliament, 4 December 2003: Oral Question (H-0794/03) for Question Time at the part-session in December 2003 pursuant to Rule 43 of the Rules of Procedure by Dana Scallon to the Council. In the written record of that session, one reads: Posselt (PPE-DE): "Does the term 'reproductive health" include the promotion of abortion, yes or no?" – Antonione, Council: "No."
- European Parliament, 24 October 2002: Question no 86 by Dana Scallon (H-0670/02)
- Jyoti Shankar Singh, Creating a New Consensus on Population (London: Earthscan, 1998), 60
- Lederer, AP/San Francisco Chronicle, 1 March 2005
- Leopold, Reuters, 28 February 2005
- "Unsafe Abortion: A Development Issue". Institute of Development Studies (IDS) Bulletin. 39 (3). July 2009. Archived from the original on 5 January 2013.
- Kligman, Gail. "Political Demography: The Banning of Abortion in Ceausescu's Romania". In Ginsburg, Faye D.; Rapp, Rayna, eds. Conceiving the New World Order: The Global Politics of Reproduction. Berkeley, CA: University of California Press, 1995 :234–255. Unique Identifier : AIDSLINE KIE/49442.
- Levitt & Dubner, Steven & Stephen (2005). Freakonomics. 80 Strand, London WC2R ORL England: Penguin Group. p. 107. ISBN 9780141019017.CS1 maint: location (link)
- "China forced abortion photo sparks outrage – BBC News". BBC News. 14 June 2012. Retrieved 11 March 2017.
- Bulte, E.; Heerink, N.; Zhang, X. (2011). "China's one-child policy and 'the mystery of missing women': ethnic minorities and male-biased sex ratios". Oxford Bulletin of Economics and Statistics. 73 (1): 0305–9049. doi:10.1111/j.1468-0084.2010.00601.x. S2CID 145107264.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. p. 2. ISBN 978-0-8265-1528-5.
- Knudsen, Lara (2006). Reproductive Rights in a Global Context. Vanderbilt University Press. pp. 4–5. ISBN 978-0-8265-1528-5.
- Dewan, Shaila (26 February 2010). "To Court Blacks, Foes of Abortion Make Racial Case". New York Times. Retrieved 7 June 2010.
- "Female genital mutilation". World Health Organization.
- "Archived copy". Archived from the original on 31 May 2017. Retrieved 7 August 2017.CS1 maint: archived copy as title (link)
- "One in five girls and women kidnapped for marriage in Kyrgyzstan". Reuters. August 2017.
- Ash, Lucy (10 August 2010). "Chechen stolen brides 'exorcised'". BBC News.
- "Kidnapped. Raped. Married. The extraordinary rebellion of Ethiopia's". 17 March 2010.
- "Ethiopian girls fear forced marriage". 14 May 2006.
- Mellen, Ruby (March–April 2017). "The Rapist's Loophole: Marriage". Foreign Policy (223): 20.
- "Human rights groups ask NWFP Govt. To ban 'bride price' to curb women Trafficking. – Free Online Library".
- "Islands Business – PNG Police blame bride price for violence in marri…". 26 January 2013. Archived from the original on 26 January 2013.
- "Bride price practices in Africa". BBC News. 6 August 2015.
- Bawah, Ayaga Agula; Akweongo, Patricia; Simmons, Ruth; Phillips, James F. (1999). "Women's fears and men's anxieties: the impact of family planning on gender relations in Northern Ghana". Studies in Family Planning. 30 (1): 54–66. doi:10.1111/j.1728-4465.1999.00054.x. hdl:2027.42/73927. PMID 10216896. Pdf.
- "OHCHR | Supplementary Convention on the Abolition of Slavery".
- "Mass sterilization scandal shocks Peru". BBC News. 24 July 2002. Archived from the original on 30 June 2006. Retrieved 30 April 2006.
- "Czech regret over sterilisation". BBC News. 24 November 2009. Retrieved 17 February 2015.
- "PopDev" (PDF). popdev.hampshire.edu.
- Denysenko, Marina (12 March 2007). "Europe | Sterilised Roma accuse Czechs". BBC News. Retrieved 17 February 2015.
- "Kocáb draws attention to the forced sterilization of Romani women; most recent incident allegedly took place in 2007". Romea.cz. 21 July 2009. Retrieved 17 February 2015.
- Archived 1 March 2014 at the Wayback Machine
- Iredale, Rachel (2000). "Eugenics And Its Relevance To Contemporary Health Care". Nursing Ethics. 7 (3): 205–14. doi:10.1177/096973300000700303. PMID 10986944. S2CID 37888613.
- Leonard, Thomas C. (2005). "Retrospectives: Eugenics and Economics in the Progressive Era" (PDF). Journal of Economic Perspectives. 19 (4): 207–224. doi:10.1257/089533005775196642. Archived (PDF) from the original on 18 December 2016.
- Canadian Broadcasting Corporation (CBC) (9 November 1999). "Alberta Apologizes for Forced Sterilization". CBC News. Archived from the original on 23 November 2012. Retrieved 19 June 2013.
- Victims of sterilization finally get day in court. Lawrence Journal-World. December 23, 1996.
- "UN rights office urges el Salvador to reform 'draconian' abortion laws". 15 December 2017.
- "U.N. Calls on el Salvador to stop jailing women for abortion". Reuters. 18 November 2017.
- Watson, Katy (28 April 2015). "The mothers being criminalised in el Salvador". BBC News.
- "Gen 38:8–10 NIV – Then Judah said to Onan, "Sleep with – Bible Gateway". Bible Gateway. Retrieved 14 February 2016.
- "Contraception and Sterilization". Archived from the original on 24 November 2013.
- "Fr. Hardon Archives – The Catholic Tradition on the Morality of Contraception".
- "El Salvador: Rape survivor sentenced to 30 years in jail under extreme anti-abortion law". www.amnesty.org.
- "Jailed for a miscarriage". BBC News.
- Hutcherson, Kimberly. "A brief history of anti-abortion violence". CNN. Retrieved 10 July 2019.
- Jelen, Ted G (1998). "Abortion". Encyclopedia of Religion and Society. Walnut Creek, California: AltaMira Press.
- Smith, G. Davidson (Tim) (1998). "Single Issue Terrorism Commentary". Canadian Security Intelligence Service. Archived from the original on 14 July 2006. Retrieved 9 June 2006.
- Al-Khattar, Aref M. (2003). Religion and terrorism: an interfaith perspective. Greenwood Publishing Group. pp. 58–59. ISBN 9780275969233.
- Hoffman, Bruce (2006). Inside terrorism. Columbia University Press. p. 116. ISBN 9780231510462.
- Harmon, Christopher C. (2000). Terrorism today. Psychology Press. p. 42. ISBN 9780714649986.
- Juergensmeyer, Mark (2003). Terror in the mind of God: the global rise of religious violence. University of California Press. p. 4,19. ISBN 9780520240117.
- Bryant, Clifton D. (2003). Handbook of death & dying, Volume 1. SAGE. p. 243. ISBN 9780761925149.
- McAfee, Ward M. (2010). The Dialogue Comes of Age: Christian Encounters with Other Traditions. Fortress Press. p. 90. ISBN 9781451411157.
- Flint, Colin Robert (2006). Introduction to geopolitics. Psychology Press. p. 172. ISBN 9780203503768.
- Peoples, James; Bailey, Garrick (2008). Humanity: an introduction to cultural anthropology. Cengage. p. 371. ISBN 978-0495508748.
- Dolnik, Adam; Gunaratna, Rohan (2006). "On the Nature of Religious Terrorism". The politics of terrorism: a survey. Taylor & Francis. ISBN 9780203832011.
- The terrorism ahead: confronting transnational violence in the twenty-first century, Paul J. Smith, p 94
- Religion and Politics in America: The Rise of Christian Evangelists, Muhammad Arif Zakaullah, p 109
- Terrorism: An Investigator's Handbook, William E. Dyson, p 43
- Encyclopedia of terrorism, Cindy C. Combs, Martin W. Slann, p 13
- Armed for Life: The Army of God and Anti-Abortion Terror in the United States, Jennifer Jefferis, p 40
- "Threats of violence against US abortion clinics almost doubled in 2017, industry group says". The Independent. 7 May 2018. Retrieved 19 June 2019.
- "THE CHOICE "THAT DARE NOT SPEAK ITS NAME"". Nrlc.org. 2003. Archived from the original on 4 August 2013. Retrieved 19 August 2017. | https://worddisk.com/wiki/Reproductive_rights/ | 21 |
Voting Rights Act of 1965
The Voting Rights Act of 1965 is a landmark piece of federal legislation in the United States that prohibits racial discrimination in voting. It was signed into law by President Lyndon B. Johnson during the height of the civil rights movement on August 6, 1965, and Congress later amended the Act five times to expand its protections. Designed to enforce the voting rights guaranteed by the Fourteenth and Fifteenth Amendments to the United States Constitution, the Act sought to secure the right to vote for racial minorities throughout the country, especially in the South. According to the U.S. Department of Justice, the Act is considered to be the most effective piece of federal civil rights legislation ever enacted in the country. It is also "one of the most far-reaching pieces of civil rights legislation in U.S. history."
Long title: An Act to enforce the fifteenth amendment of the Constitution of the United States, and for other purposes.
Nicknames: Voting Rights Act
Enacted by: the 89th United States Congress
Effective: August 6, 1965
Statutes at Large: 79 Stat. 437
Titles amended: Title 52—Voting and Elections
The act contains numerous provisions that regulate elections. The act's "general provisions" provide nationwide protections for voting rights. Section 2 is a general provision that prohibits every state and local government from imposing any voting law that results in discrimination against racial or language minorities. Other general provisions specifically outlaw literacy tests and similar devices that were historically used to disenfranchise racial minorities. The act also contains "special provisions" that apply to only certain jurisdictions. A core special provision is the Section 5 preclearance requirement, which prohibits certain jurisdictions from implementing any change affecting voting without receiving preapproval from the U.S. attorney general or the U.S. District Court for D.C. that the change does not discriminate against protected minorities. Another special provision requires jurisdictions containing significant language minority populations to provide bilingual ballots and other election materials.
Section 5 and most other special provisions apply to jurisdictions encompassed by the "coverage formula" prescribed in Section 4(b). The coverage formula was originally designed to encompass jurisdictions that engaged in egregious voting discrimination in 1965, and Congress updated the formula in 1970 and 1975. In Shelby County v. Holder (2013), the U.S. Supreme Court struck down the coverage formula as unconstitutional, reasoning that it was no longer responsive to current conditions. The court did not strike down Section 5, but without a coverage formula, Section 5 is unenforceable. The jurisdictions which had previously been covered by the coverage formula massively increased the rate of voter registration purges after the Shelby decision.
Research shows that the Act substantially increased voter turnout and voter registration, particularly among black voters. The Act has also been linked to concrete outcomes, such as greater provision of public goods (for example, public education) in areas with higher black population shares, and more members of Congress who vote for civil rights-related legislation.
As initially ratified, the United States Constitution granted each state complete discretion to determine voter qualifications for its residents.:50 After the Civil War, the three Reconstruction Amendments were ratified and limited this discretion. The Thirteenth Amendment (1865) prohibits slavery "except as a punishment for crime"; the Fourteenth Amendment (1868) grants citizenship to anyone "born or naturalized in the United States" and guarantees every person due process and equal protection rights; and the Fifteenth Amendment (1870) provides that "[t]he right of citizens of the United States to vote shall not be denied or abridged by the United States or by any State on account of race, color, or previous condition of servitude." These Amendments also empower Congress to enforce their provisions through "appropriate legislation".
To enforce the Reconstruction Amendments, Congress passed the Enforcement Acts in the 1870s. The acts criminalized the obstruction of a citizen's voting rights and provided for federal supervision of the electoral process, including voter registration.:310 However, in 1875 the Supreme Court struck down parts of the legislation as unconstitutional in United States v. Cruikshank and United States v. Reese.:97 After the Reconstruction Era ended in 1877, enforcement of these laws became erratic, and in 1894, Congress repealed most of their provisions.:310
Southern states generally sought to disenfranchise racial minorities during and after Reconstruction. From 1868 to 1888, electoral fraud and violence throughout the South suppressed the African-American vote. From 1888 to 1908, Southern states legalized disenfranchisement by enacting Jim Crow laws; they amended their constitutions and passed legislation to impose various voting restrictions, including literacy tests, poll taxes, property-ownership requirements, moral character tests, requirements that voter registration applicants interpret particular documents, and grandfather clauses that allowed otherwise-ineligible persons to vote if their grandfathers voted (which excluded many African Americans whose grandfathers had been slaves or otherwise ineligible). During this period, the Supreme Court generally upheld efforts to discriminate against racial minorities. In Giles v. Harris (1903), the court held that regardless of the Fifteenth Amendment, the judiciary did not have the remedial power to force states to register racial minorities to vote.:100
Prior to the enactment of the Voting Rights Act of 1965, there were several efforts to stop the disenfranchisement of black voters by Southern states. Besides the above-mentioned literacy tests and poll taxes, other bureaucratic restrictions were used to deny them the right to vote. African Americans also "risked harassment, intimidation, economic reprisals, and physical violence when they tried to register or vote. As a result, very few African Americans were registered voters, and they had very little, if any, political power, either locally or nationally." In the 1950s, the Civil Rights Movement increased pressure on the federal government to protect the voting rights of racial minorities. In 1957, Congress passed the first civil rights legislation since Reconstruction: the Civil Rights Act of 1957. This legislation authorized the attorney general to sue for injunctive relief on behalf of persons whose Fifteenth Amendment rights were denied, created the Civil Rights Division within the Department of Justice to enforce civil rights through litigation, and created the Commission on Civil Rights to investigate voting rights deprivations. Further protections were enacted in the Civil Rights Act of 1960, which allowed federal courts to appoint referees to conduct voter registration in jurisdictions that engaged in voting discrimination against racial minorities.
Although these acts helped empower courts to remedy violations of federal voting rights, strict legal standards made it difficult for the Department of Justice to successfully pursue litigation. For example, to win a discrimination lawsuit against a state that maintained a literacy test, the Department needed to prove that the rejected voter-registration applications of racial minorities were comparable to the accepted applications of whites. This involved comparing thousands of applications in each of the state's counties in a process that could last months. The Department's efforts were further hampered by resistance from local election officials, who would claim to have misplaced the voter registration records of racial minorities, remove registered racial minorities from the electoral rolls, and resign so that voter registration ceased. Moreover, the Department often needed to appeal lawsuits several times before the judiciary provided relief because many federal district court judges opposed racial minority suffrage. Thus, between 1957 and 1964, the African-American voter registration rate in the South increased only marginally even though the Department litigated 71 voting rights lawsuits.:514 Efforts to stop the disfranchisement by the Southern states had achieved only modest success overall and in some areas had proved almost entirely ineffectual, because the "Department of Justice's efforts to eliminate discriminatory election practices by litigation on a case-by-case basis had been unsuccessful in opening up the registration process; as soon as one discriminatory practice or procedure was proven to be unconstitutional and enjoined, a new one would be substituted in its place and litigation would have to commence anew."
Congress responded to rampant discrimination against racial minorities in public accommodations and government services by passing the Civil Rights Act of 1964. The act included some voting rights protections; it required registrars to equally administer literacy tests in writing to each voter and to accept applications that contained minor errors, and it created a rebuttable presumption that persons with a sixth-grade education were sufficiently literate to vote.:97 However, despite lobbying from civil rights leaders, the Act did not prohibit most forms of voting discrimination.:253 President Lyndon B. Johnson recognized this, and shortly after the 1964 elections in which Democrats gained overwhelming majorities in both chambers of Congress, he privately instructed Attorney General Nicholas Katzenbach to draft "the goddamndest, toughest voting rights act that you can".:48–50 However, Johnson did not publicly push for the legislation at the time; his advisers warned him of political costs for vigorously pursuing a voting rights bill so soon after Congress had passed the Civil Rights Act of 1964, and Johnson was concerned that championing voting rights would endanger his Great Society reforms by angering Southern Democrats in Congress.:47–48, 50–52
Following the 1964 elections, civil rights organizations such as the Southern Christian Leadership Conference (SCLC) and the Student Nonviolent Coordinating Committee (SNCC) pushed for federal action to protect the voting rights of racial minorities.:254–255 Their efforts culminated in protests in Alabama, particularly in the city of Selma, where County Sheriff Jim Clark's police force violently resisted African-American voter registration efforts. Speaking about the voting rights push in Selma, James Forman of SNCC said:
Our strategy, as usual, was to force the U.S. government to intervene in case there were arrests—and if they did not intervene, that inaction would once again prove the government was not on our side and thus intensify the development of a mass consciousness among blacks. Our slogan for this drive was "One Man, One Vote".:255
In January 1965, Martin Luther King Jr., James Bevel, and other civil rights leaders organized several peaceful demonstrations in Selma, which were violently attacked by police and white counter-protesters. Throughout January and February, these protests received national media coverage and drew attention to the issue of voting rights. King and other demonstrators were arrested during a march on February 1 for violating an anti-parade ordinance; this inspired similar marches in the following days, causing hundreds more to be arrested.:259–261 On February 4, civil rights leader Malcolm X gave a militant speech in Selma in which he said that many African Americans did not support King's nonviolent approach;:262 he later privately said that he wanted to frighten whites into supporting King.:69 The next day, King was released and a letter he wrote addressing voting rights, "Letter From A Selma Jail", appeared in The New York Times.:262
With the nation paying increasing attention to Selma and voting rights, President Johnson reversed his decision to delay voting rights legislation, and on February 6, he announced he would send a proposal to Congress.:69 However, he did not reveal the proposal's content or when it would come before Congress.:264
On February 18 in Marion, Alabama, state troopers violently broke up a nighttime voting-rights march during which officer James Bonard Fowler shot and killed young African-American protester Jimmie Lee Jackson, who was unarmed and protecting his mother.:265 Spurred by this event, and at the initiation of Bevel,:267:81–86 on March 7 SCLC and SNCC began the first of the Selma to Montgomery marches, in which Selma residents intended to march to Alabama's capital, Montgomery, to highlight voting rights issues and present Governor George Wallace with their grievances. On the first march, demonstrators were stopped by state and county police on horseback at the Edmund Pettus Bridge near Selma. The police shot tear gas into the crowd and trampled protesters. Televised footage of the scene, which became known as "Bloody Sunday", generated outrage across the country.:515 A second march was held on March 9, which became known as "Turnaround Tuesday". That evening, three white Unitarian ministers who participated in the march were attacked on the street and beaten with clubs by four Ku Klux Klan members. The worst injured was Reverend James Reeb from Boston, who died on Thursday, March 11.
In the wake of the events in Selma, President Johnson, addressing a televised joint session of Congress on March 15, called on legislators to enact expansive voting rights legislation. He concluded his speech with the words "we shall overcome", a major anthem of the civil rights movement.:278 The Voting Rights Act of 1965 was introduced in Congress two days later while civil rights leaders, now under the protection of federal troops, led a march of 25,000 people from Selma to Montgomery.:516:279, 282
Case-by-case litigation by the United States Department of Justice to eliminate discriminatory election practices had been unsuccessful, and existing federal anti-discrimination laws were not sufficient to overcome the resistance of state officials to enforcement of the Fifteenth Amendment. Against this backdrop, Congress concluded that a new, comprehensive federal bill was necessary to break the grip of state disfranchisement. The United States Supreme Court explained this in South Carolina v. Katzenbach (1966) with the following words:
In recent years, Congress has repeatedly tried to cope with the problem by facilitating case-by-case litigation against voting discrimination. The Civil Rights Act of 1957 authorized the Attorney General to seek injunctions against public and private interference with the right to vote on racial grounds. Perfecting amendments in the Civil Rights Act of 1960 permitted the joinder of States as parties defendant, gave the Attorney General access to local voting records, and authorized courts to register voters in areas of systematic discrimination. Title I of the Civil Rights Act of 1964 expedited the hearing of voting cases before three-judge courts and outlawed some of the tactics used to disqualify Negroes from voting in federal elections. Despite the earnest efforts of the Justice Department and of many federal judges, these new laws have done little to cure the problem of voting discrimination. [...] The previous legislation has proved ineffective for a number of reasons. Voting suits are unusually onerous to prepare, sometimes requiring as many as 6,000 man-hours spent combing through registration records in preparation for trial. Litigation has been exceedingly slow, in part because of the ample opportunities for delay afforded voting officials and others involved in the proceedings. Even when favorable decisions have finally been obtained, some of the States affected have merely switched to discriminatory devices not covered by the federal decrees, or have enacted difficult new tests designed to prolong the existing disparity between white and Negro registration. Alternatively, certain local officials have defied and evaded court orders or have simply closed their registration offices to freeze the voting rolls. The provision of the 1960 law authorizing registration by federal officers has had little impact on local maladministration, because of its procedural complexities.
In South Carolina v. Katzenbach (1966), the Supreme Court also held that Congress had the power to pass the Voting Rights Act of 1965 under its enforcement powers stemming from the Fifteenth Amendment:
Congress exercised its authority under the Fifteenth Amendment in an inventive manner when it enacted the Voting Rights Act of 1965. First: the measure prescribes remedies for voting discrimination which go into effect without any need for prior adjudication. This was clearly a legitimate response to the problem, for which there is ample precedent under other constitutional provisions. See Katzenbach v. McClung, 379 U. S. 294, 379 U. S. 302-304; United States v. Darby, 312 U. S. 100, 312 U. S. 120-121. Congress had found that case-by-case litigation was inadequate to combat widespread and persistent discrimination in voting, because of the inordinate amount of time and energy required to overcome the obstructionist tactics invariably encountered in these lawsuits. After enduring nearly a century of systematic resistance to the Fifteenth Amendment, Congress might well decide to shift the advantage of time and inertia from the perpetrators of the evil to its victims. [...] Second: the Act intentionally confines these remedies to a small number of States and political subdivisions which, in most instances, were familiar to Congress by name. This, too, was a permissible method of dealing with the problem. Congress had learned that substantial voting discrimination presently occurs in certain sections of the country, and it knew no way of accurately forecasting whether the evil might spread elsewhere in the future. In acceptable legislative fashion, Congress chose to limit its attention to the geographic areas where immediate action seemed necessary. See McGowan v. Maryland, 366 U. S. 420, 366 U. S. 427; Salsburg v. Maryland, 346 U. S. 545, 346 U. S. 550-554. The doctrine of the equality of States, invoked by South Carolina, does not bar this approach, for that doctrine applies only to the terms upon which States are admitted to the Union, and not to the remedies for local evils which have subsequently appeared. See Coyle v. Smith, 221 U. S. 559, and cases cited therein.
The Voting Rights Act of 1965 was introduced in Congress on March 17, 1965, as S. 1564, and it was jointly sponsored by Senate majority leader Mike Mansfield (D-MT) and Senate minority leader Everett Dirksen (R-IL), both of whom had worked with Attorney General Katzenbach to draft the bill's language.:49 Although Democrats held two-thirds of the seats in both chambers of Congress after the 1964 elections, Johnson worried that Southern Democrats would filibuster the legislation because they had opposed other civil rights efforts. He enlisted Dirksen to help gain Republican support. Dirksen did not originally intend to support voting rights legislation so soon after supporting the Civil Rights Act of 1964, but he expressed willingness to accept "revolutionary" legislation after learning about the police violence against marchers in Selma on Bloody Sunday.:95–96 Given Dirksen's key role in helping Katzenbach draft the legislation, it became known informally as the "Dirksenbach" bill.:96 After Mansfield and Dirksen introduced the bill, 64 additional senators agreed to cosponsor it,:150 with a total of 46 Democratic and 20 Republican cosponsors.
The bill contained several special provisions that targeted certain state and local governments: a "coverage formula" that determined which jurisdictions were subject to the Act's other special provisions ("covered jurisdictions"); a "preclearance" requirement that prohibited covered jurisdictions from implementing changes to their voting procedures without first receiving approval from the U.S. attorney general or the U.S. District Court for D.C. that the changes were not discriminatory; and the suspension of "tests or devices", such as literacy tests, in covered jurisdictions. The bill also authorized the assignment of federal examiners to register voters, and of federal observers to monitor elections, to covered jurisdictions that were found to have engaged in egregious discrimination. The bill set these special provisions to expire after five years.:319–320:520, 524:5–6
The scope of the coverage formula was a matter of contentious congressional debate. The coverage formula reached a jurisdiction if (1) the jurisdiction maintained a "test or device" on November 1, 1964 and (2) less than 50 percent of the jurisdiction's voting-age residents either were registered to vote on November 1, 1964 or cast a ballot in the November 1964 presidential election.:317 This formula reached few jurisdictions outside the Deep South. To appease legislators who felt that the bill unfairly targeted Southern jurisdictions, the bill included a general prohibition on racial discrimination in voting that applied nationwide.:1352 The bill also included provisions allowing a covered jurisdiction to "bail out" of coverage by proving in federal court that it had not used a "test or device" for a discriminatory purpose or with a discriminatory effect during the 5 years preceding its bailout request.:6 Additionally, the bill included a "bail in" provision under which federal courts could subject discriminatory non-covered jurisdictions to remedies contained in the special provisions.:2006–2007
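Read as a decision rule, the original trigger has two prongs that must both hold. The sketch below is a minimal illustration of that reading in Python; the class, its field names, and the interpretation of the second prong as "registration below 50 percent or turnout below 50 percent of voting-age residents" are assumptions drawn from the prose above, not from the statutory text.

```python
from dataclasses import dataclass

@dataclass
class Jurisdiction:
    """Hypothetical record of the facts the original coverage formula examined."""
    maintained_test_or_device: bool   # a literacy test or similar device in force on Nov 1, 1964
    registered_voters_1964: int       # voting-age residents registered to vote on Nov 1, 1964
    ballots_cast_1964: int            # ballots cast in the November 1964 presidential election
    voting_age_population: int        # total voting-age residents

def covered_by_1965_formula(j: Jurisdiction) -> bool:
    # Prong 1: the jurisdiction maintained a "test or device" on November 1, 1964.
    if not j.maintained_test_or_device:
        return False
    # Prong 2: registration or turnout fell below 50% of the voting-age population.
    registration_rate = j.registered_voters_1964 / j.voting_age_population
    turnout_rate = j.ballots_cast_1964 / j.voting_age_population
    return registration_rate < 0.5 or turnout_rate < 0.5

# Hypothetical example: a county with a literacy test, 40% registration, 31% turnout.
print(covered_by_1965_formula(Jurisdiction(True, 40_000, 31_000, 100_000)))  # True
```

Under this reading, a jurisdiction with low participation but no "test or device", or with a test or device but healthy registration and turnout, falls outside the formula, which is why the formula reached few jurisdictions outside the Deep South.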
The bill was first considered by the Senate Judiciary Committee, whose chair, Senator James Eastland (D-MS), opposed the legislation with several other Southern senators on the committee. To prevent the bill from dying in committee, Mansfield proposed a motion to require the Judiciary Committee to report the bill out of committee by April 9, which the Senate overwhelmingly passed by a vote of 67 to 13.:150 During the committee's consideration of the bill, Senator Ted Kennedy (D-MA) led an effort to amend the bill to prohibit poll taxes. Although the Twenty-fourth Amendment—which banned the use of poll taxes in federal elections— was ratified a year earlier, Johnson's administration and the bill's sponsors did not include a provision in the voting rights bill banning poll taxes in state elections because they feared courts would strike down the legislation as unconstitutional.:521:285 Additionally, by excluding poll taxes from the definition of "tests or devices", the coverage formula did not reach Texas or Arkansas, mitigating opposition from those two states' influential congressional delegations.:521 Nonetheless, with the support of liberal committee members, Kennedy's amendment to prohibit poll taxes passed by a 9-4 vote. In response, Dirksen offered an amendment that exempted from the coverage formula any state that had at least 60 percent of its eligible residents registered to vote or that had a voter turnout that surpassed the national average in the preceding presidential election. This amendment, which effectively exempted all states from coverage except Mississippi, passed during a committee meeting in which three liberal members were absent. Dirksen offered to drop the amendment if the poll tax ban were removed. Ultimately, the bill was reported out of committee on April 9 by a 12-4 vote without a recommendation.:152–153
On April 22, the full Senate started debating the bill. Dirksen spoke first on the bill's behalf, saying that "legislation is needed if the unequivocal mandate of the Fifteenth Amendment ... is to be enforced and made effective, and if the Declaration of Independence is to be made truly meaningful.":154 Senator Strom Thurmond (R-SC) retorted that the bill would lead to "despotism and tyranny", and Senator Sam Ervin (D-NC) argued that the bill was unconstitutional because it deprived states of their right under Article I, Section 2 of the Constitution to establish voter qualifications and because the bill's special provisions targeted only certain jurisdictions. On May 6, Ervin offered an amendment to abolish the coverage formula's automatic trigger and instead allow federal judges to appoint federal examiners to administer voter registration. This amendment overwhelmingly failed, with 42 Democrats and 22 Republicans voting against it.:154–156 After lengthy debate, Ted Kennedy's amendment to prohibit poll taxes also failed 49-45 on May 11. However, the Senate agreed to include a provision authorizing the attorney general to sue any jurisdiction, covered or non-covered, to challenge its use of poll taxes.:156–157:2 An amendment offered by Senator Robert F. Kennedy (D-NY) to enfranchise English-illiterate citizens who had attained at least a sixth-grade education in a non-English-speaking school also passed by 48-19. Southern legislators offered a series of amendments to weaken the bill, all of which failed.:159
On May 25, the Senate voted for cloture by a 70-30 vote, thus overcoming the threat of filibuster and limiting further debate on the bill. On May 26, the Senate passed the bill by a 77-19 vote (Democrats 47-16, Republicans 30-2); only senators representing Southern states voted against it.:161
House of Representatives
Emanuel Celler (D-NY), Chair of the House Judiciary Committee, introduced the Voting Rights Act in the House of Representatives on March 19, 1965, as H.R. 6400. The House Judiciary Committee was the first committee to consider the bill. The committee's ranking Republican, William McCulloch (R-OH), generally supported expanding voting rights, but he opposed both the poll tax ban and the coverage formula, and he led opposition to the bill in committee. The committee eventually approved the bill on May 12, but it did not file its committee report until June 1.:162 The bill included two amendments from subcommittee: a penalty for private persons who interfered with the right to vote and a prohibition of all poll taxes. The poll tax prohibition gained Speaker of the House John McCormack's support. The bill was next considered by the Rules Committee, whose chair, Howard W. Smith (D-VA), opposed the bill and delayed its consideration until June 24, when Celler initiated proceedings to have the bill discharged from committee. Under pressure from the bill's proponents, Smith allowed the bill to be released a week later, and the full House started debating the bill on July 6.:163
To defeat the Voting Rights Act, McCulloch introduced an alternative bill, H.R. 7896. It would have allowed the attorney general to appoint federal registrars after receiving 25 serious complaints of discrimination against a jurisdiction, and it would have imposed a nationwide ban on literacy tests for persons who could prove they attained a sixth-grade education. McCulloch's bill was co-sponsored by House minority leader Gerald Ford (R-MI) and supported by Southern Democrats as an alternative to the Voting Rights Act.:162–164 The Johnson administration viewed H.R. 7896 as a serious threat to passing the Voting Rights Act. However, support for H.R. 7896 dissipated after William M. Tuck (D-VA) publicly said he preferred H.R. 7896 because the Voting Rights Act would legitimately ensure that African Americans could vote. His statement alienated most supporters of H.R. 7896, and the bill failed on the House floor by a 171-248 vote on July 9. Later that night, the House passed the Voting Rights Act by a 333-85 vote (Democrats 221-61, Republicans 112-24).:163–165
The chambers appointed a conference committee to resolve differences between the House and Senate versions of the bill. A major contention concerned the poll tax provisions; the Senate version allowed the attorney general to sue states that used poll taxes to discriminate, while the House version outright banned all poll taxes. Initially, the committee members were stalemated. To help broker a compromise, Attorney General Katzenbach drafted legislative language explicitly asserting that poll taxes were unconstitutional and instructed the Department of Justice to sue the states that maintained poll taxes. To assuage concerns of liberal committee members that this provision was not strong enough, Katzenbach enlisted the help of Martin Luther King Jr., who gave his support to the compromise. King's endorsement ended the stalemate, and on July 29, the conference committee reported its version out of committee.:166–167 The House approved this conference report version of the bill on August 3 by a 328-74 vote (Democrats 217-54, Republicans 111-20), and the Senate passed it on August 4 by a 79-18 vote (Democrats 49-17, Republicans 30-1).:167 On August 6, President Johnson signed the Act into law with King, Rosa Parks, John Lewis, and other civil rights leaders in attendance at the signing ceremony.:168
Congress enacted major amendments to the Act in 1970, 1975, 1982, 1992, and 2006. Each amendment coincided with an impending expiration of some or all of the Act's special provisions. The special provisions were originally set to expire by 1970, but Congress repeatedly reauthorized them in recognition of continuing voting discrimination.:209–210:6–8 Congress extended the coverage formula and special provisions tied to it, such as the Section 5 preclearance requirement, for five years in 1970, seven years in 1975, and 25 years in both 1982 and 2006. In 1970 and 1975, Congress also expanded the reach of the coverage formula by supplementing it with new 1968 and 1972 trigger dates. Coverage was further enlarged in 1975 when Congress expanded the meaning of "tests or devices" to encompass any jurisdiction that provided English-only election information, such as ballots, if the jurisdiction had a single language minority group that constituted more than five percent of the jurisdiction's voting-age citizens. These expansions brought numerous jurisdictions into coverage, including many outside of the South. To ease the burdens of the reauthorized special provisions, Congress liberalized the bailout procedure in 1982 by allowing jurisdictions to escape coverage by complying with the Act and affirmatively acting to expand minority political participation.:523
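The 1975 enlargement of "tests or devices" can be expressed the same way: a jurisdiction is treated as maintaining a test or device if it supplied English-only election information while any single language minority group exceeded five percent of its voting-age citizens. The sketch below is illustrative only; the function name, parameters, and example shares are invented for this purpose.

```python
def english_only_trigger_1975(english_only_materials: bool,
                              language_minority_shares: dict) -> bool:
    """Illustrative reading of the 1975 expansion of "tests or devices".

    language_minority_shares maps each language minority group to its share of
    the jurisdiction's voting-age citizens, e.g. {"Spanish heritage": 0.12}.
    """
    if not english_only_materials:
        return False
    # A single group above the five-percent threshold is enough to trigger coverage.
    return any(share > 0.05 for share in language_minority_shares.values())

# Hypothetical example: English-only ballots, 12% Spanish-heritage voting-age citizens.
print(english_only_trigger_1975(True, {"Spanish heritage": 0.12}))  # True
```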
In addition to reauthorizing the original special provisions and expanding coverage, Congress amended and added several other provisions to the Act. For instance, Congress expanded the original ban on "tests or devices" to apply nationwide in 1970, and in 1975, Congress made the ban permanent.:6–9 Separately, in 1975 Congress expanded the Act's scope to protect language minorities from voting discrimination. Congress defined "language minority" to mean "persons who are American Indian, Asian American, Alaskan Natives or of Spanish heritage." Congress amended various provisions, such as the preclearance requirement and Section 2's general prohibition of discriminatory voting laws, to prohibit discrimination against language minorities.:199 Congress also enacted a bilingual election requirement in Section 203, which requires election officials in certain jurisdictions with large numbers of English-illiterate language minorities to provide ballots and voting information in the language of the language minority group. Originally set to expire after 10 years, Congress reauthorized Section 203 in 1982 for seven years, expanded and reauthorized it in 1992 for 15 years, and reauthorized it in 2006 for 25 years.:19–21, 25, 49 The bilingual election requirements have remained controversial, with proponents arguing that bilingual assistance is necessary to enable recently naturalized citizens to vote and opponents arguing that the bilingual election requirements constitute costly unfunded mandates.:26
Several of the amendments responded to judicial rulings with which Congress disagreed. In 1982, Congress amended the Act to overturn the Supreme Court case Mobile v. Bolden (1980), which held that the general prohibition of voting discrimination prescribed in Section 2 prohibited only purposeful discrimination. Congress responded by expanding Section 2 to explicitly ban any voting practice that had a discriminatory effect, regardless of whether the practice was enacted or operated for a discriminatory purpose. The creation of this "results test" shifted the majority of vote dilution litigation brought under the Act from preclearance lawsuits to Section 2 lawsuits.:644–645 In 2006, Congress amended the Act to overturn two Supreme Court cases: Reno v. Bossier Parish School Board (2000), which interpreted the Section 5 preclearance requirement to prohibit only voting changes that were enacted or maintained for a "retrogressive" discriminatory purpose instead of any discriminatory purpose, and Georgia v. Ashcroft (2003), which established a broader test for determining whether a redistricting plan had an impermissible effect under Section 5 than assessing only whether a minority group could elect its preferred candidates.:207–208 Since the Supreme Court struck down the coverage formula as unconstitutional in Shelby County v. Holder (2013), several bills have been introduced in Congress to create a new coverage formula and amend various other provisions; none of these bills have passed.
The act contains two types of provisions: "general provisions", which apply nationwide, and "special provisions", which apply to only certain states and local governments.:1 Most provisions are designed to protect the voting rights of racial and language minorities. The term "language minority" means "persons who are American Indian, Asian American, Alaskan Natives or of Spanish heritage." The act's provisions have been colored by numerous judicial interpretations and congressional amendments.
General prohibition of discriminatory voting laws
Section 2 prohibits any jurisdiction from implementing a "voting qualification or prerequisite to voting, or standard, practice, or procedure ... in a manner which results in a denial or abridgement of the right ... to vote on account of race," color, or language minority status.:37 Section 2 of the Voting Rights Act (VRA) contains two separate protections against voter discrimination for laws that, in contrast to Section 5 of the VRA, are already in effect. The first protection is a prohibition of intentional discrimination based on race or color in voting. The second protection is a prohibition of election practices that result in the denial or abridgment of the right to vote based on race or color. If the violation of the second protection is intentional, then this violation is also a violation of the Fifteenth Amendment. The Supreme Court has allowed private plaintiffs to sue to enforce these prohibitions.:138 In Mobile v. Bolden (1980), the Supreme Court held that as originally enacted in 1965, Section 2 simply restated the Fifteenth Amendment and thus prohibited only those voting laws that were intentionally enacted or maintained for a discriminatory purpose.:60–61 In 1982, Congress amended Section 2 to create a "results" test, which prohibits any voting law that has a discriminatory effect irrespective of whether the law was intentionally enacted or maintained for a discriminatory purpose.:3 The 1982 amendments stipulated that the results test does not guarantee protected minorities a right to proportional representation. In Thornburg v. Gingles (1986), the United States Supreme Court explained with respect to the 1982 amendment to Section 2 that the "essence of a Section 2 claim is that a certain electoral law, practice, or structure interacts with social and historical conditions to cause an inequality in the opportunities enjoyed by black and white voters to elect their preferred representatives." The United States Department of Justice has declared that Section 2 is not only a permanent, nationwide prohibition against discrimination in voting that applies to any voting standard, practice, or procedure resulting in the denial or abridgement of the right of any citizen to vote on account of race, color, or membership in a language minority group, but also a prohibition on state and local officials adopting or maintaining voting laws or procedures that purposefully discriminate on the basis of race, color, or membership in a language minority group.
The United States Supreme Court set out its understanding of Section 2 and the 1982 amendment in Chisom v. Roemer (1991). Under the amended statute, proof of intent is no longer required to prove a § 2 violation. Now plaintiffs can prevail under § 2 by demonstrating that a challenged election practice has resulted in the denial or abridgement of the right to vote based on color or race. Congress not only incorporated the results test in the paragraph that formerly constituted the entire § 2, but also designated that paragraph as subsection (a) and added a new subsection (b) to make clear that an application of the results test requires an inquiry into "the totality of the circumstances." Section 2(a) adopts a results test, thus providing that proof of discriminatory intent is no longer necessary to establish a violation of the section. Section 2(b) provides guidance about how the results test is to be applied. The statute thus provides a framework for determining whether a jurisdiction's election law violates the general prohibition of Section 2 as amended:
Section 2 prohibits voting practices that “result in a denial or abridgment of the right * * * to vote on account of race or color [or language-minority status],” and it states that such a result “is established” if a jurisdiction’s “political processes * * * are not equally open” to members of such a group “in that [they] have less opportunity * * * to participate in the political process and to elect representatives of their choice.” 52 U.S.C. 10301. [...] Subsection (b) states in relevant part: A violation of subsection (a) is established if, based on the totality of circumstances, it is shown that the political processes leading to nomination or election in the State or political subdivision are not equally open to participation by members of a class of citizens protected by subsection (a) in that its members have less opportunity than other members of the electorate to participate in the political process and to elect representatives of their choice.
The Office of the Arizona Attorney General stated, with respect to the framework for determining whether a jurisdiction's election law violates the general prohibition of Section 2 as amended and the reason for the adoption of the amended Section 2:
To establish a violation of amended Section 2, the plaintiff must prove, “based on the totality of circumstances,” that the State’s “political processes” are “not equally open to participation by members” of a protected class, “in that its members have less opportunity than other members of the electorate to participate in the political process and to elect representatives of their choice.” § 10301(b). That is the “result” that amended Section 2 prohibits: “less opportunity than other members of the electorate,” viewing the State’s “political processes” as a whole. The new language was crafted as a compromise designed to eliminate the need for direct evidence of discriminatory intent, which is often difficult to obtain, but without embracing an unqualified “disparate impact” test that would invalidate many legitimate voting procedures. S. REP. NO. 97–417, at 28-29, 31-32, 99 (1982)
When determining whether a jurisdiction's election law violates the general prohibition of Section 2 of the VRA, courts have relied on factors enumerated in the Senate Judiciary Committee report associated with the 1982 amendments ("Senate Factors"), including:
- The history of official discrimination in the jurisdiction that affects the right to vote;
- The degree to which voting in the jurisdiction is racially polarized;
- The extent of the jurisdiction's use of majority vote requirements, unusually large electoral districts, prohibitions on bullet voting, and other devices that tend to enhance the opportunity for voting discrimination;
- Whether minority candidates are denied access to the jurisdiction's candidate slating processes, if any;
- The extent to which the jurisdiction's minorities are discriminated against in socioeconomic areas, such as education, employment, and health;
- Whether overt or subtle racial appeals in campaigns exist;
- The extent to which minority candidates have won elections;
- The degree that elected officials are unresponsive to the concerns of the minority group; and
- Whether the policy justification for the challenged law is tenuous.
The report indicates not all or a majority of these factors need to exist for an electoral device to result in discrimination, and it also indicates that this list is not exhaustive, allowing courts to consider additional evidence at their discretion.:344:28–29
Section 2 prohibits two types of discrimination: "vote denial", in which a person is denied the opportunity to cast a ballot or to have their vote properly counted, and "vote dilution",:2–6 in which the strength or effectiveness of a person's vote is diminished.:691–692 Most Section 2 litigation has concerned vote dilution, especially claims that a jurisdiction's redistricting plan or use of at-large/multimember elections prevents minority voters from casting sufficient votes to elect their preferred candidates.:708–709 An at-large election can dilute the votes cast by minority voters by allowing a cohesive majority group to win every legislative seat in the jurisdiction.:221 Redistricting plans can be gerrymandered to dilute votes cast by minorities by "packing" high numbers of minority voters into a small number of districts or "cracking" minority groups by placing small numbers of minority voters into a large number of districts.
In Thornburg v. Gingles (1986), the Supreme Court used the term "vote dilution through submergence" to describe claims that a jurisdiction's use of an at-large/multimember election system or gerrymandered redistricting plan diluted minority votes, and it established a legal framework for assessing such claims under Section 2. Under the Gingles test, plaintiffs must show the existence of three preconditions:
- The racial or language minority group "is sufficiently numerous and compact to form a majority in a single-member district";
- The minority group is "politically cohesive" (meaning its members tend to vote similarly); and
- The "majority votes sufficiently as a bloc to enable it ... usually to defeat the minority's preferred candidate.":50–51
The first precondition is known as the "compactness" requirement and concerns whether a majority-minority district can be created. The second and third preconditions are collectively known as the "racially polarized voting" or "racial bloc voting" requirement, and they concern whether the voting patterns of the different racial groups are different from each other. If a plaintiff proves these preconditions exist, then the plaintiff must additionally show, using the remaining Senate Factors and other evidence, that under the "totality of the circumstances", the jurisdiction's redistricting plan or use of at-large or multimember elections diminishes the ability of the minority group to elect candidates of its choice.:344–345
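The two-stage structure of a submergence claim, three threshold preconditions followed by a totality-of-the-circumstances inquiry, can be summarized as a checklist. The sketch below is purely illustrative: the field names are invented, and the booleans stand in for the demographic and election-return showings that courts actually weigh.

```python
from dataclasses import dataclass

@dataclass
class GinglesEvidence:
    # Precondition 1 ("compactness"): a majority-minority single-member district could be drawn.
    minority_large_and_compact_enough: bool
    # Precondition 2: the minority group is politically cohesive.
    minority_politically_cohesive: bool
    # Precondition 3: the majority usually votes as a bloc to defeat the minority's preferred candidate.
    majority_bloc_voting_defeats_choice: bool
    # Stage two: weighing the Senate Factors under the "totality of the circumstances".
    totality_shows_diminished_opportunity: bool

def submergence_claim_succeeds(evidence: GinglesEvidence) -> bool:
    preconditions = (
        evidence.minority_large_and_compact_enough
        and evidence.minority_politically_cohesive
        and evidence.majority_bloc_voting_defeats_choice
    )
    # The preconditions are necessary but not sufficient; liability still turns
    # on the totality-of-the-circumstances inquiry.
    return preconditions and evidence.totality_shows_diminished_opportunity
```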
Subsequent litigation further defined the contours of these "vote dilution through submergence" claims. In Bartlett v. Strickland (2009), the Supreme Court held that the first Gingles precondition can be satisfied only if a district can be drawn in which the minority group comprises a majority of voting-age citizens. This means that plaintiffs cannot succeed on a submergence claim in jurisdictions where the size of the minority group, despite not being large enough to comprise a majority in a district, is large enough for its members to elect their preferred candidates with the help of "crossover" votes from some members of the majority group.:A2 In contrast, the Supreme Court has not addressed whether different protected minority groups can be aggregated to satisfy the Gingles preconditions as a coalition, and lower courts have split on the issue.
The Supreme Court provided additional guidance on the "totality of the circumstances" test in Johnson v. De Grandy (1994). The court emphasized that the existence of the three Gingles preconditions may be insufficient to prove liability for vote dilution through submergence if other factors weigh against such a determination, especially in lawsuits challenging redistricting plans. In particular, the court held that even where the three Gingles preconditions are satisfied, a jurisdiction is unlikely to be liable for vote dilution if its redistricting plan contains a number of majority-minority districts that is proportional to the minority group's population size. The decision thus clarified that Section 2 does not require jurisdictions to maximize the number of majority-minority districts. The opinion also distinguished the proportionality of majority-minority districts, which allows minorities to have a proportional opportunity to elect their candidates of choice, from the proportionality of election results, which Section 2 explicitly does not guarantee to minorities.:1013–1014
An issue regarding the third Gingles precondition remains unresolved. In Gingles, the Supreme Court split as to whether plaintiffs must prove that the majority racial group votes as a bloc specifically because its members are motivated to vote based on racial considerations and not other considerations that may overlap with race, such as party affiliation. A plurality of justices said that requiring such proof would violate Congress's intent to make Section 2 a "results" test, but Justice White maintained that the proof was necessary to show that an electoral scheme results in racial discrimination.:555–557 Since Gingles, lower courts have split on the issue.
Although most Section 2 litigation has involved claims of vote dilution through submergence,:708–709 courts also have addressed other types of vote dilution under this provision. In Holder v. Hall (1994), the Supreme Court held that claims that minority votes are diluted by the small size of a governing body, such as a one-person county commission, may not be brought under Section 2. A plurality of the court reasoned that no uniform, non-dilutive "benchmark" size for a governing body exists, making relief under Section 2 impossible. Another type of vote dilution may result from a jurisdiction's requirement that a candidate be elected by a majority vote. A majority-vote requirement may cause a minority group's candidate of choice, who would have won the election with a simple plurality of votes, to lose after a majority of voters unite behind another candidate in a runoff election. The Supreme Court has not addressed whether such claims may be brought under Section 2, and lower courts have reached different conclusions on the issue.
In addition to claims of vote dilution, courts have considered vote denial claims brought under Section 2. The Supreme Court, in Richardson v. Ramirez (1974), held that felony disenfranchisement laws cannot violate Section 2 because, among other reasons, Section 2 of the Fourteenth Amendment permits such laws.:756–757 A federal district court in Mississippi held that a "dual registration" system that requires a person to register to vote separately for state elections and local elections may violate Section 2 if the system has a racially disparate impact in light of the Senate Factors.:754 Starting in 2013, lower federal courts began to consider various challenges to voter ID laws brought under Section 2.
The act contains several specific prohibitions on conduct that may interfere with a person's ability to cast an effective vote. One of these prohibitions is prescribed in Section 201, which prohibits any jurisdiction from requiring a person to comply with any "test or device" to register to vote or cast a ballot. The term "test or device" is defined as literacy tests, educational or knowledge requirements, proof of good moral character, and requirements that a person be vouched for when voting. Before the Act's enactment, these devices were the primary tools used by jurisdictions to prevent racial minorities from voting. Originally, the Act suspended tests or devices temporarily in jurisdictions covered by the Section 4(b) coverage formula, but Congress subsequently expanded the prohibition to the entire country and made it permanent.:6–9 Relatedly, Section 202 prohibits jurisdictions from imposing any "durational residency requirement" that requires persons to have lived in the jurisdiction for more than 30 days before being eligible to vote in a presidential election.:353
Several further protections for voters are contained in Section 11. Section 11(a) prohibits any person acting under color of law from refusing or failing to allow a qualified person to vote or to count a qualified voter's ballot. Similarly, Section 11(b) prohibits any person from intimidating, harassing, or coercing another person for voting or attempting to vote. Two provisions in Section 11 address voter fraud: Section 11(c) prohibits people from knowingly submitting a false voter registration application to vote in a federal election, and Section 11(e) prohibits voting twice in a federal election.:360
Finally, under Section 208, a jurisdiction may not prevent anyone who is English-illiterate or has a disability from being accompanied into the ballot box by an assistant of the person's choice. The only exceptions are that the assistant may not be an agent of the person's employer or union.:221
Section 3(c) contains a "bail-in" or "pocket trigger" process by which jurisdictions that fall outside the coverage formula of Section 4(b) may become subject to preclearance. Under this provision, if a jurisdiction has racially discriminated against voters in violation of the Fourteenth or Fifteenth Amendments, a court may order the jurisdiction to have future changes to its election laws preapproved by the federal government.:2006–2007 Because courts have interpreted the Fourteenth and Fifteenth Amendments to prohibit only intentional discrimination, a court may bail in a jurisdiction only if the plaintiff proves that the jurisdiction enacted or operated a voting practice to purposely discriminate.:2009
Section 3(c) contains its own preclearance language and differs from Section 5 preclearance in several ways. Unlike Section 5 preclearance, which applies to a covered jurisdiction until such time as the jurisdiction may bail out of coverage under Section 4(a), bailed-in jurisdictions remain subject to preclearance for as long as the court orders. Moreover, the court may require the jurisdiction to preclear only particular types of voting changes. For example, the bail-in of New Mexico in 1984 applied for 10 years and required preclearance of only redistricting plans. This differs from Section 5 preclearance, which requires a covered jurisdiction to preclear all of its voting changes.:2009–2010
During the Act's early history, Section 3(c) was little used; no jurisdictions were bailed in until 1975. Between 1975 and 2013, 18 jurisdictions were bailed in, including 16 local governments and the states of Arkansas and New Mexico.:1a-2a Although the Supreme Court held the Section 4(b) coverage formula unconstitutional in Shelby County v. Holder (2013), it did not hold Section 3(c) unconstitutional. Therefore, jurisdictions may continue to be bailed-in and subjected to Section 3(c) preclearance. In the months following Shelby County, courts began to consider requests by the attorney general and other plaintiffs to bail in the states of Texas and North Carolina, and in January 2014 a federal court bailed in Evergreen, Alabama.
A more narrow bail-in process pertaining to federal observer certification is prescribed in Section 3(a). Under this provision, a federal court may certify a non-covered jurisdiction to receive federal observers if the court determines that the jurisdiction violated the voting rights guaranteed by the Fourteenth or Fifteenth Amendments. Jurisdictions certified to receive federal observers under Section 3(a) are not subject to preclearance.:236–237
Section 4(b) contains a "coverage formula" that determines which states and local governments may be subjected to the Act's other special provisions (except for the Section 203(c) bilingual election requirements, which fall under a different formula). Congress intended for the coverage formula to encompass the most pervasively discriminatory jurisdictions. A jurisdiction is covered by the formula if:
- As of November 1, 1964, 1968, or 1972, the jurisdiction used a "test or device" to restrict the opportunity to register and vote; and
- Less than half of the jurisdiction's eligible citizens were registered to vote on November 1, 1964, 1968, or 1972; or less than half of eligible citizens voted in the presidential election of November 1964, 1968, or 1972.
As originally enacted, the coverage formula contained only November 1964 triggering dates; subsequent revisions to the law supplemented it with the additional triggering dates of November 1968 and November 1972, which brought more jurisdictions into coverage. For purposes of the coverage formula, the term "test or device" includes the same four devices prohibited nationally by Section 201—literacy tests, educational or knowledge requirements, proof of good moral character, and requirements that a person be vouched for when voting—and one further device defined in Section 4(f)(3): in jurisdictions where more than five percent of the citizen voting age population are members of a single language minority group, any practice or requirement by which registration or election materials are provided only in English. The types of jurisdictions that the coverage formula applies to include states and "political subdivisions" of states.:207–208 Section 14(c)(2) defines "political subdivision" to mean any county, parish, or "other subdivision of a State which conducts registration for voting."
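Read as a rule, the Section 4(b) formula pairs a "test or device" prong with a participation prong for each triggering date. The sketch below only illustrates that logic under assumed, hypothetical per-year registration and turnout figures; the data structure and field names are invented and it is not an official implementation.

```python
from dataclasses import dataclass

TRIGGERING_YEARS = (1964, 1968, 1972)

@dataclass
class JurisdictionHistory:
    used_test_or_device: dict  # year -> bool, as of November 1 of that year
    registration_rate: dict    # year -> share of eligible citizens registered on November 1
    turnout_rate: dict         # year -> share of eligible citizens voting in that presidential election

def covered_by_section_4b(j: JurisdictionHistory) -> bool:
    """Return True if any triggering year satisfies both prongs of the formula."""
    for year in TRIGGERING_YEARS:
        device_prong = j.used_test_or_device.get(year, False)
        participation_prong = (
            j.registration_rate.get(year, 1.0) < 0.5
            or j.turnout_rate.get(year, 1.0) < 0.5
        )
        if device_prong and participation_prong:
            return True
    return False

# Example: a jurisdiction using a literacy test in 1964 with 40% registration would be covered.
example = JurisdictionHistory(
    used_test_or_device={1964: True},
    registration_rate={1964: 0.40},
    turnout_rate={1964: 0.55},
)
assert covered_by_section_4b(example)
```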
As Congress added new triggering dates to the coverage formula, new jurisdictions were brought into coverage. The 1965 coverage formula included the whole of Alabama, Alaska, Georgia, Louisiana, Mississippi, South Carolina, and Virginia, as well as some subdivisions (mostly counties) in Arizona, Hawaii, Idaho, and North Carolina. The 1968 triggering date resulted in the partial coverage of Alaska, Arizona, California, Connecticut, Idaho, Maine, Massachusetts, New Hampshire, New York, and Wyoming; Connecticut, Idaho, Maine, Massachusetts, and Wyoming filed successful "bailout" lawsuits, as also provided for by Section 4. The 1972 triggering date brought the whole of Alaska, Arizona, and Texas, and parts of California, Florida, Michigan, New York, North Carolina, and South Dakota, into coverage.
The special provisions of the Act were initially due to expire in 1970, but Congress renewed them for another five years. In 1975, they were extended for another seven years. In 1982, the special provisions were extended for 25 years without any change to the coverage formula, and in 2006 they were extended for another 25 years.
Throughout its history, the coverage formula remained controversial because it singled out certain jurisdictions for scrutiny, most of which were in the Deep South. In Shelby County v. Holder (2013), the Supreme Court declared the coverage formula unconstitutional because the criteria used were outdated and thus violated principles of equal state sovereignty and federalism. The other special provisions that are dependent on the coverage formula, such as the Section 5 preclearance requirement, remain valid law. However, without a valid coverage formula, these provisions are unenforceable.
Section 5 requires that covered jurisdictions receive federal approval, known as "preclearance", before implementing changes to their election laws. A covered jurisdiction has the burden of proving that the change does not have the purpose or effect of discriminating on the basis of race or language minority status; if the jurisdiction fails to meet this burden, the federal government will deny preclearance and the jurisdiction's change will not go into effect. The Supreme Court broadly interpreted Section 5's scope in Allen v. State Board of Elections (1969), holding that any change in a jurisdiction's voting practices, even if minor, must be submitted for preclearance. The court also held that if a jurisdiction fails to have its voting change precleared, private plaintiffs may sue the jurisdiction in the plaintiff's local district court before a three-judge panel. In these Section 5 "enforcement actions", a court considers whether the jurisdiction made a covered voting change, and if so, whether the change had been precleared. If the jurisdiction improperly failed to obtain preclearance, the court will order the jurisdiction to obtain preclearance before implementing the change. However, the court may not consider the merits of whether the change should be approved.:128–129:556:23
Jurisdictions may seek preclearance through either an "administrative preclearance" process or a "judicial preclearance" process. If a jurisdiction seeks administrative preclearance, the attorney general will consider whether the proposed change has a discriminatory purpose or effect. After the jurisdiction submits the proposed change, the attorney general has 60 days to interpose an objection to it. The 60-day period may be extended an additional 60 days if the jurisdiction later submits additional information. If the attorney general interposes an objection, then the change is not precleared and may not be implemented.:90–92 The attorney general's decision is not subject to judicial review, but if the attorney general interposes an objection, the jurisdiction may independently seek judicial preclearance, and the court may disregard the attorney general's objection at its discretion.:559 If a jurisdiction seeks judicial preclearance, it must file a declaratory judgment action against the attorney general in the U.S. District Court for D.C. A three-judge panel will consider whether the voting change has a discriminatory purpose or effect, and the losing party may appeal directly to the Supreme Court. Private parties may intervene in judicial preclearance lawsuits.:476–477:90
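As described above, administrative preclearance runs on a 60-day clock that can be extended by a further 60 days when the jurisdiction supplements its submission. A minimal sketch of that timing, assuming simple calendar-day counting (actual practice is governed by the Department of Justice's own regulations, so this is illustrative only):

```python
from datetime import date, timedelta
from typing import Optional

def objection_deadline(submitted: date, supplemented: Optional[date] = None) -> date:
    """Last day for the attorney general to interpose an objection (illustrative only)."""
    deadline = submitted + timedelta(days=60)
    if supplemented is not None:
        # Additional information from the jurisdiction extends the review period by 60 days.
        deadline += timedelta(days=60)
    return deadline

# Example: a change submitted on March 1 with no supplement must draw an objection by April 30.
print(objection_deadline(date(2005, 3, 1)))  # 2005-04-30
```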
In several cases, the Supreme Court has addressed the meaning of "discriminatory effect" and "discriminatory purpose" for Section 5 purposes. In Beer v. United States (1976), the court held that for a voting change to have a prohibited discriminatory effect, it must result in "retrogression" (backsliding). Under this standard, a voting change that causes discrimination, but does not result in more discrimination than before the change was made, cannot be denied preclearance for having a discriminatory effect.:283–284 For example, replacing a poll tax with an equally expensive voter registration fee is not a "retrogressive" change because it causes equal discrimination, not more.:695 Relying on the Senate report for the Act, the court reasoned that the retrogression standard was the correct interpretation of the term "discriminatory effect" because Section 5's purpose is " 'to insure that [the gains thus far achieved in minority political participation] shall not be destroyed through new [discriminatory] procedures' ".:140–141 The retrogression standard applies irrespective of whether the voting change allegedly causes vote denial or vote dilution.:311
In 2003, the Supreme Court held in Georgia v. Ashcroft that courts should not determine that a new redistricting plan has a retrogressive effect solely because the plan decreases the number of majority-minority districts. The court emphasized that judges should analyze various other factors under the "totality of the circumstances", such as whether the redistricting plan increases the number of "influence districts" in which a minority group is large enough to influence (but not decide) election outcomes. In 2006, Congress overturned this decision by amending Section 5 to explicitly state that "diminishing the ability [of a protected minority] to elect their preferred candidates of choice denies or abridges the right to vote within the meaning of" Section 5. Uncertainty remains as to what this language precisely means and how courts may interpret it.:551–552, 916
Before 2000, the "discriminatory purpose" prong of Section 5 was understood to mean any discriminatory purpose, which is the same standard used to determine whether discrimination is unconstitutional. In Reno v. Bossier Parish (Bossier Parish II) (2000), the Supreme Court extended the retrogression standard, holding that for a voting change to have a "discriminatory purpose" under Section 5, the change must have been implemented for a retrogressive purpose. Therefore, a voting change intended to discriminate against a protected minority was permissible under Section 5 so long as the change was not intended to increase existing discrimination.:277–278 This change significantly reduced the number of instances in which preclearance was denied based on discriminatory purpose. In 2006, Congress overturned Bossier Parish II by amending Section 5 to explicitly define "purpose" to mean "any discriminatory purpose.":199–200, 207
Federal examiners and observers
Until the 2006 amendments to the Act,:50 Section 6 allowed the appointment of "federal examiners" to oversee certain jurisdictions' voter registration functions. Federal examiners could be assigned to a covered jurisdiction if the attorney general certified that
- The Department of Justice received 20 or more meritorious complaints that the covered jurisdiction denied its residents the right to vote based on race or language minority status; or
- The assignment of federal examiners was otherwise necessary to enforce the voting rights guaranteed by the Fourteenth or Fifteenth Amendments.:235–236
Federal examiners had the authority to register voters, examine voter registration applications, and maintain voter rolls.:237 The goal of the federal examiner provision was to prevent jurisdictions from denying protected minorities the right to vote by engaging in discriminatory behavior in the voter registration process, such as refusing to register qualified applicants, purging qualified voters from the voter rolls, and limiting the hours during which persons could register. Federal examiners were used extensively in the years following the Act's enactment, but their importance waned over time; 1983 was the last year that a federal examiner registered a person to vote. In 2006, Congress repealed the provision.:238–239
Under the Act's original framework, in any jurisdiction certified for federal examiners, the attorney general could additionally require the appointment of "federal observers". By 2006, the federal examiner provision was used solely as a means to appoint federal observers.:239 When Congress repealed the federal examiner provision in 2006, Congress amended Section 8 to allow for the assignment of federal observers to jurisdictions that satisfied the same certification criteria that had been used to appoint federal examiners.:50
Federal observers are tasked with observing poll worker and voter conduct at polling places during an election and observing election officials tabulate the ballots.:248 The goal of the federal observer provision is to facilitate minority voter participation by deterring and documenting instances of discriminatory conduct in the election process, such as election officials denying qualified minority persons the right to cast a ballot, intimidation or harassment of voters on election day, or improper vote counting.:231–235 Discriminatory conduct that federal observers document may also serve as evidence in subsequent enforcement lawsuits.:233 Between 1965 and the Supreme Court's 2013 decision in Shelby County v. Holder to strike down the coverage formula, the attorney general certified 153 local governments across 11 states. Because of time and resource constraints, federal observers are not assigned to every certified jurisdiction for every election.:230 Separate provisions allow for a certified jurisdiction to "bail out" of its certification.
Under Section 4(a), a covered jurisdiction may seek exemption from coverage through a process called "bailout." To achieve an exemption, a covered jurisdiction must obtain a declaratory judgment from a three-judge panel of the District Court for D.C. that the jurisdiction is eligible to bail out. As originally enacted, a covered jurisdiction was eligible to bail out if it had not used a test or device with a discriminatory purpose or effect during the 5 years preceding its bailout request.:22, 33–34 Therefore, a jurisdiction that requested to bail out in 1967 would have needed to prove that it had not misused a test or device since at least 1962. Until 1970, this effectively required a covered jurisdiction to prove that it had not misused a test or device since before the Act was enacted five years earlier in 1965,:6 making it impossible for many covered jurisdictions to bail out.:27 However, Section 4(a) also prohibited covered jurisdictions from using tests or devices in any manner, discriminatory or otherwise; hence, under the original act, a covered jurisdiction would become eligible for bailout in 1970 by simply complying with this requirement. But in the course of amending the Act in 1970 and 1975 to extend the special provisions, Congress also extended the period of time that a covered jurisdiction must not have misused a test or device to 10 years and then to 17 years, respectively.:7, 9 These extensions continued the effect of requiring jurisdictions to prove that they had not misused a test or device since before the Act's enactment in 1965.
In 1982, Congress amended Section 4(a) to make bailout easier to achieve in two ways. First, Congress provided that if a state is covered, local governments in that state may bail out even if the state is ineligible to bail out. Second, Congress liberalized the eligibility criteria by replacing the 17-year requirement with a new standard, allowing a covered jurisdiction to bail out by proving that in the 10 years preceding its bailout request:
- The jurisdiction did not use a test or device with a discriminatory purpose or effect;
- No court determined that the jurisdiction denied or abridged the right to vote based on racial or language minority status;
- The jurisdiction complied with the preclearance requirement;
- The federal government did not assign federal examiners to the jurisdiction;
- The jurisdiction abolished discriminatory election practices; and
- The jurisdiction took affirmative steps to eliminate voter intimidation and expand voting opportunities for protected minorities.
Additionally, Congress required jurisdictions seeking bailout to produce evidence of minority registration and voting rates, including how these rates have changed over time and in comparison to the registration and voting rates of the majority. If the court determines that the covered jurisdiction is eligible for bailout, it will enter a declaratory judgment in the jurisdiction's favor. The court will retain jurisdiction for the following 10 years and may order the jurisdiction back into coverage if the jurisdiction subsequently engages in voting discrimination.:22–23
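The post-1982 eligibility conditions amount to a conjunctive checklist covering the preceding ten years. The sketch below simply restates that checklist in code form; the attribute names are invented for readability and carry no legal significance.

```python
from dataclasses import dataclass, fields

@dataclass
class BailoutRecord:
    # Each field corresponds to one of the conditions listed above,
    # evaluated over the 10 years preceding the bailout request.
    no_discriminatory_test_or_device: bool
    no_adverse_voting_rights_judgment: bool
    complied_with_preclearance: bool
    no_federal_examiners_assigned: bool
    abolished_discriminatory_practices: bool
    took_affirmative_steps_for_minority_voters: bool

def eligible_for_bailout(record: BailoutRecord) -> bool:
    # Every condition must hold; failing any one defeats eligibility.
    return all(getattr(record, f.name) for f in fields(record))
```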
The 1982 amendment to the bailout eligibility standard went into effect on August 5, 1984. Between that date and 2013, 196 jurisdictions bailed out of coverage through 38 bailout actions; in each instance, the attorney general consented to the bailout request.:54 Between that date and 2009, all jurisdictions that bailed out were located in Virginia. In 2009, a municipal utility jurisdiction in Texas bailed out after the Supreme Court's opinion in Northwest Austin Municipal Utility District No. 1 v. Holder (2009), which held that local governments that do not register voters have the ability to bail out. After this ruling, jurisdictions succeeded in at least 20 bailout actions before the Supreme Court held in Shelby County v. Holder (2013) that the coverage formula was unconstitutional.:54
Separate provisions allow a covered jurisdiction that has been certified to receive federal observers to bail out of its certification alone. Under Section 13, the attorney general may terminate the certification of a jurisdiction if 1) more than 50 percent of the jurisdiction's minority voting age population is registered to vote, and 2) there is no longer reasonable cause to believe that residents may experience voting discrimination. Alternatively, the District Court for D.C. may order the certification terminated.:237, 239
Bilingual election requirements
Two provisions require certain jurisdictions to provide election materials to voters in multiple languages: Section 4(f)(4) and Section 203(c). A jurisdiction covered by either provision must provide all materials related to an election—such as voter registration materials, ballots, notices, and instructions—in the language of any applicable language minority group residing in the jurisdiction.:209 Language minority groups protected by these provisions include Asian Americans, Hispanics, Native Americans, and Native Alaskans. Congress enacted the provisions to break down language barriers and combat pervasive language discrimination against the protected groups.:200, 209
Section 4(f)(4) applies to any jurisdiction encompassed by the Section 4(b) coverage formula where more than five percent of the citizen voting age population are members of a single language minority group. Section 203(c) contains a formula that is separate from the Section 4(b) coverage formula, and therefore jurisdictions covered solely by 203(c) are not subject to the Act's other special provisions, such as preclearance. The Section 203(c) formula encompasses jurisdictions where the following conditions exist:
- A single language minority is present that has an English-illiteracy rate higher than the national average; and
- The number of "limited-English proficient" members of the language minority group is at least 10,000 voting-age citizens or large enough to comprise at least five percent of the jurisdiction's voting-age citizen population; or
- The jurisdiction is a political subdivision that contains an Indian reservation, and more than five percent of the jurisdiction's American Indian or Alaska Native voting-age citizens are members of a single language minority and are limited-English proficient.:223–224
Section 203(b) defines "limited-English proficient" as being "unable to speak or understand English adequately enough to participate in the electoral process".:223 Determinations as to which jurisdictions satisfy the Section 203(c) criteria occur once a decade following completion of the decennial census; at these times, new jurisdictions may come into coverage while others may have their coverage terminated. Additionally, under Section 203(d), a jurisdiction may "bail out" of Section 203(c) coverage by proving in federal court that no language minority group within the jurisdiction has an English illiteracy rate that is higher than the national illiteracy rate.:226 After the 2010 census, 150 jurisdictions across 25 states were covered under Section 203(c), including statewide coverage of California, Texas, and Florida.
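Because the Section 203(c) determination turns on a handful of numeric thresholds applied to decennial census data, its logic can be restated compactly. The following is a rough sketch under the assumptions described above; the parameter names are illustrative and do not correspond to official terminology.

```python
def covered_by_section_203c(
    group_illiteracy_rate: float,          # English-illiteracy rate of the language minority group
    national_illiteracy_rate: float,
    lep_voting_age_citizens: int,          # limited-English proficient members of the group
    total_voting_age_citizens: int,
    contains_indian_reservation: bool = False,
    reservation_lep_share: float = 0.0,    # share of American Indian/Alaska Native voting-age
                                           # citizens who are LEP members of a single language minority
) -> bool:
    illiteracy_prong = group_illiteracy_rate > national_illiteracy_rate
    size_prong = (
        lep_voting_age_citizens >= 10_000
        or lep_voting_age_citizens / total_voting_age_citizens >= 0.05
    )
    reservation_prong = contains_indian_reservation and reservation_lep_share > 0.05
    return illiteracy_prong and (size_prong or reservation_prong)
```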
After its enactment in 1965, the law immediately decreased racial discrimination in voting. The suspension of literacy tests and the assignment of federal examiners and observers allowed high numbers of racial minorities to register to vote.:702 Nearly 250,000 African Americans registered in 1965, one-third of whom were registered by federal examiners. In covered jurisdictions, less than one-third (29.3 percent) of the African American population was registered in 1965; by 1967, this number increased to more than half (52.1 percent),:702 and a majority of African American residents became registered to vote in 9 of the 13 Southern states. Similar increases were seen in the number of African Americans elected to office: between 1965 and 1985, the number of African Americans elected as state legislators in the 11 former Confederate states increased from 3 to 176.:112 Nationwide, the number of African American elected officials increased from 1,469 in 1970 to 4,912 in 1980.:919 By 2011, the number was approximately 10,500. Similarly, registration rates for language minority groups increased after Congress enacted the bilingual election requirements in 1975 and amended them in 1992. In 1973, the percentage of Hispanics registered to vote was 34.9 percent; by 2006, that figure had nearly doubled. The number of Asian Americans registered to vote increased by 58 percent between 1996 and 2006.:233–235
After the Act's initial success in combating tactics designed to deny minorities access to the polls, the Act became predominately used as a tool to challenge racial vote dilution.:691 Starting in the 1970s, the attorney general commonly raised Section 5 objections to voting changes that decreased the effectiveness of racial minorities' votes, including discriminatory annexations, redistricting plans, and election methods such as at-large election systems, runoff election requirements, and prohibitions on bullet voting.:105–106 In total, 81 percent (2,541) of preclearance objections made between 1965 and 2006 were based on vote dilution.:102 Claims brought under Section 2 have also predominately concerned vote dilution.:708–709 Between the 1982 creation of the Section 2 results test and 2006, at least 331 Section 2 lawsuits resulted in published judicial opinions. In the 1980s, 60 percent of Section 2 lawsuits challenged at-large election systems; in the 1990s, 37.2 percent challenged at-large election systems and 38.5 percent challenged redistricting plans. Overall, plaintiffs succeeded in 37.2 percent of the 331 lawsuits, and they were more likely to succeed in lawsuits brought against covered jurisdictions.:654–656
By enfranchising racial minorities, the Act facilitated a political realignment of the Democratic and Republican parties. Between 1890 and 1965, minority disenfranchisement allowed conservative Southern Democrats to dominate Southern politics. After Johnson signed the Act into law, newly enfranchised racial minorities began to vote for liberal Democratic candidates throughout the South, and Southern white conservatives began to switch their party registration from Democrat to Republican en masse.:290 These dual trends caused the two parties to ideologically polarize, with the Democratic Party becoming more liberal and the Republican Party becoming more conservative.:290 The trends also created competition between the two parties,:290 which Republicans capitalized on by implementing the Southern strategy. Over the subsequent decades, the creation of majority-minority districts to remedy racial vote dilution claims also contributed to these developments. By packing liberal-leaning racial minorities into small numbers of majority-minority districts, large numbers of surrounding districts became more solidly white, conservative, and Republican. While this increased the elected representation of racial minorities as intended, it also decreased white Democratic representation and increased the representation of Republicans overall.:292 By the mid-1990s, these trends culminated in a political realignment: the Democratic Party and the Republican Party became more ideologically polarized and defined as liberal and conservative parties, respectively; and both parties came to compete for electoral success in the South,:294 with the Republican Party controlling most of Southern politics.:203
Research shows that the Act substantially increased voter turnout and voter registration, in particular among black voters. The Act has also been linked to concrete outcomes, such as greater public goods provision (such as public education) for areas with higher black population shares and more members of Congress who vote for civil rights-related legislation. A 2016 study in the American Journal of Political Science found "that members of Congress who represented jurisdictions subject to the preclearance requirement were substantially more supportive of civil rights-related legislation than legislators who did not represent covered jurisdictions." A 2013 Quarterly Journal of Economics study found that the Act boosted voter turnout and increased public goods transfers from state governments to localities with higher black population shares. A 2018 study in The Journal of Politics found that Section 5 of the 1965 Voting Rights Act "increased black voter registration by 14–19 percentage points, white registration by 10–13 percentage points, and overall voter turnout by 10–19 percentage points. Additional results for Democratic vote share suggest that some of this overall increase in turnout may have come from reactionary whites." A 2019 study in the American Economic Journal found that preclearance substantially increased turnout among minorities, even as late as 2012 (the year before the Supreme Court ruling ending preclearance). The study estimates that preclearance led to an increase in minority turnout of 17 percentage points. A 2020 study found that jurisdictions previously covered by preclearance sharply increased the rate of voter registration purges after the Supreme Court's 2013 decision in Shelby County v. Holder, which struck down the "coverage formula" in Section 4(b) of the VRA that determined which jurisdictions had to submit changes in their election policies for federal approval in advance.
Voter eligibility provisions
Early in the Act's enforcement history, the Supreme Court addressed the constitutionality of several provisions relating to voter qualifications and prerequisites to voting. In Katzenbach v. Morgan (1966), the court upheld the constitutionality of Section 4(e). This section prohibits jurisdictions from administering literacy tests to citizens who attained a sixth-grade education in an American school in which the predominant language was Spanish, such as schools in Puerto Rico. Although the court had earlier held in Lassiter v. Northampton County Board of Elections (1959) that literacy tests did not violate the Fourteenth Amendment, in Morgan the court held that Congress could enforce Fourteenth Amendment rights—such as the right to vote—by prohibiting conduct it deemed to interfere with such rights, even if that conduct may not be independently unconstitutional.:405–406:652–656 After Congress created a nationwide ban on all literacy tests and similar devices in 1970 by enacting Section 201, the court upheld the ban as constitutional in Oregon v. Mitchell (1970).
Also in Oregon v. Mitchell, the Supreme Court addressed the constitutionality of various other provisions relating to voter qualifications and prerequisites to voting. The court upheld Section 202, which prohibits every state and local government from requiring people to live in their borders for longer than 30 days before allowing them to vote in a presidential election. Additionally, the court upheld the provision lowering the minimum voting age to 18 in federal elections, but it held that Congress exceeded its power by lowering the voting age to 18 in state elections; this precipitated the ratification of the Twenty-sixth Amendment the following year, which lowered the voting age in all elections to 18. The court was deeply divided in Oregon v. Mitchell, and a majority of justices did not agree on a rationale for the holding.:353:118–121
Section 2 results test
The constitutionality of Section 2, which contains a general prohibition on discriminatory voting laws, has not been definitively resolved by the Supreme Court. As amended in 1982, Section 2 prohibits any voting practice that has a discriminatory effect, irrespective of whether the practice was enacted or is administered for the purpose of discriminating. This "results test" contrasts with the Fourteenth and Fifteenth Amendments, both of which directly prohibit only purposeful discrimination. Given this disparity, whether the Supreme Court would uphold the constitutionality of Section 2 as appropriate legislation passed to enforce the Fourteenth and Fifteenth Amendments, and under what rationale, remains unclear.:758–759
In Mississippi Republican Executive Committee v. Brooks (1984), the Supreme Court summarily affirmed, without a written opinion, a lower court's decision that the 1982 amendment to Section 2 is constitutional. Justice Rehnquist, joined by Chief Justice Burger, dissented. They reasoned that the case presented complex constitutional issues that warranted a full hearing. When making later decisions, the Supreme Court is more likely to disregard one of its previous judgments if that judgment lacks a written opinion, but for lower courts the Supreme Court's unwritten summary affirmances are as binding as its judgments accompanied by written opinions. Partially due to Brooks, the constitutionality of the Section 2 results test has since been unanimously upheld by lower courts.:759–760
The pending case Brnovich v. Democratic National Committee (2021) is expected to evaluate the applicability of Section 2 in the wake of the Shelby County decision. The case involves a challenge to a set of Arizona election laws and policies that the Democratic National Committee asserted were discriminatory towards Hispanics and Native Americans under Section 2 of the VRA. While lower courts upheld the election laws, an en banc Ninth Circuit reversed the decision and found the laws to be in violation of Section 2. The question of Section 2's applicability is the crux of the case before the Supreme Court.
During oral arguments on March 2, 2021, Michael Carvin, an attorney representing the Arizona Republican Party, was asked by Justice Amy Coney Barrett what interest the party had in invalidating the Arizona voting restrictions, to which Carvin replied, "Because it puts us at a competitive disadvantage relative to Democrats."
Coverage formula and preclearance
The Supreme Court has upheld the constitutionality of the Section 5 preclearance requirement in three cases. The first case was South Carolina v. Katzenbach (1966), which was decided about seven months after the Act's enactment. The court held that Section 5 constituted a valid use of Congress's power to enforce the Fifteenth Amendment, reasoning that "exceptional circumstances" of pervasive racial discrimination, combined with the inadequacy of case-by-case litigation in ending that discrimination, justified the preclearance requirement.:334–335:76 The court also upheld the constitutionality of the 1965 coverage formula, saying that it was "rational in both practice and theory" and that the bailout provision provided adequate relief for jurisdictions that may not deserve coverage.:330:76–77
The Supreme Court again upheld the preclearance requirement in City of Rome v. United States (1980). The court held that because Congress had explicit constitutional power to enforce the Reconstruction Amendments "by appropriate legislation", the Act did not violate principles of federalism. The court also explicitly upheld the "discriminatory effect" prong of Section 5, stating that even though the Fifteenth Amendment directly prohibited only intentional discrimination, Congress could constitutionally prohibit unintentional discrimination to mitigate the risk that jurisdictions may engage in intentional discrimination. Finally, the court upheld the 1975 extension of Section 5 because of the record of discrimination that continued to persist in the covered jurisdictions. The court further suggested that the temporary nature of the special provisions was relevant to Section 5's constitutionality.:77–78
The final case in which the Supreme Court upheld Section 5 was Lopez v. Monterey County (Lopez II) (1999). In Lopez II, the court reiterated its reasoning in Katzenbach and Rome, and it upheld as constitutional the requirement that covered local governments obtain preclearance before implementing voting changes that their parent state required them to implement, even if the parent state was not itself a covered jurisdiction.:78:447
The 2006 extension of Section 5 was challenged before the Supreme Court in Northwest Austin Municipal Utility District No. 1 v. Holder (2009). The lawsuit was brought by a municipal water district in Texas that elected members to a water board. The District wished to move a voting location from a private home to a public school, but that change was subject to preclearance because Texas was a covered jurisdiction. The District did not register voters, and thus it did not appear to qualify as a "political subdivision" eligible to bail out of coverage. Although the court indicated in dicta (a non-binding part of the court's opinion) that Section 5 presented difficult constitutional questions, it did not declare Section 5 unconstitutional; instead, it interpreted the law to allow any covered local government, including one that does not register voters, to obtain an exemption from preclearance if it meets the bailout requirements.
In a 5–4 decision in Shelby County v. Holder (2013), the Supreme Court struck down Section 4(b) as unconstitutional. The court reasoned that the coverage formula violates the constitutional principles of "equal sovereignty of the states" and federalism because its disparate treatment of the states is "based on 40 year-old facts having no logical relationship to the present day", which makes the formula unresponsive to current needs. The court did not strike down Section 5, but without Section 4(b), no jurisdiction may be subject to Section 5 preclearance unless Congress enacts a new coverage formula. After the decision, several states that were fully or partially covered (including Texas, Mississippi, North Carolina, and South Carolina) implemented laws that were previously denied preclearance. This prompted new legal challenges to these laws under other provisions unaffected by the court's decision, such as Section 2.:189–200 Research has shown that the coverage formula and the requirement of preclearance substantially increased turnout among racial minorities, even into the year before Shelby County. Some jurisdictions that had previously been covered by the coverage formula increased the rate of voter registration purges after Shelby County.
While Section 2 and Section 5 prohibit jurisdictions from drawing electoral districts that dilute the votes of protected minorities, the Supreme Court has held that in some instances, the Equal Protection Clause of the Fourteenth Amendment prevents jurisdictions from drawing district lines to favor protected minorities. The court first recognized the justiciability of affirmative "racial gerrymandering" claims in Shaw v. Reno (1993). In Miller v. Johnson (1995), the court explained that a redistricting plan is constitutionally suspect if the jurisdiction used race as the "predominant factor" in determining how to draw district lines. For race to "predominate", the jurisdiction must prioritize racial considerations over traditional redistricting principles, which include "compactness, contiguity, [and] respect for political subdivisions or communities defined by actual shared interests.":916:621 If a court concludes that racial considerations predominated, then the redistricting plan is considered "racially gerrymandered" and must be subjected to strict scrutiny, meaning that the redistricting plan will be upheld as constitutional only if it is narrowly tailored to advance a compelling state interest. In Bush v. Vera (1996),:983 a plurality of the Supreme Court assumed that complying with Section 2 or Section 5 constituted compelling interests, and lower courts have allowed only these two interests to justify racial gerrymandering.:877
- Help America Vote Act (HAVA)
- National Voter Registration Act of 1993 (NVRA)
- Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA)
- Voter suppression in the United States
- Women's suffrage in the United States
- In Gingles, the Supreme Court held that the Gingles test applies to claims that an at-large election scheme results in vote dilution. The court later held, in Growe v. Emison, 507 U.S. 25 (1993), that the Gingles test also applies to claims that a redistricting plan results in vote dilution through the arrangement of single-member districts.:1006
- The Courts of Appeals in the Fifth Circuit, Eleventh Circuit, and Ninth Circuit have either explicitly held that coalition suits are allowed under Section 2 or assumed that such suits are permissible, while those in the Sixth Circuit and Seventh Circuit have rejected such suits.:703
- Courts of Appeals in the Second Circuit and Fourth Circuit have held that such proof is not an absolute requirement for liability but is a relevant additional factor under the "totality of the circumstances" test. In contrast, the Fifth Circuit has held that such proof is a required component of the third precondition.:711–712
- The Court of Appeals for the Second Circuit held that challenges to majority-vote requirements under Section 2 are not cognizable, while the Eastern District of Arkansas held the opposite.:752–753
- The Supreme Court subsequently held that plaintiffs may alternatively bring Section 5 enforcement actions in state courts.:534
- "Public Law 91-285". June 22, 1970. Retrieved April 19, 2014.
- "Public Law 94-73". August 6, 1975. Retrieved April 19, 2014.
- "Public Law 97-205". June 29, 1982. Retrieved April 19, 2014.
- "Public Law 102-344". August 26, 1992. Retrieved April 19, 2014.
- "Public Law 109-246". July 27, 2006. Retrieved April 19, 2014.
- "Public Law 110-258". July 1, 2008. Retrieved April 19, 2014. (amending short title of P.L. 109-246)
- "History of Federal Voting Rights Laws: The Voting Rights Act of 1965". United States Department of Justice. July 28, 2017. Archived from the original on January 6, 2021. Retrieved January 6, 2021.
- "Voting Rights Act". National Voting Rights Museum and Institute. Retrieved May 23, 2014.
- One or more of the preceding sentences incorporates text from a work in the public domain: "Introduction to Federal Voting Rights Laws: The Effect of the Voting Rights Act". U.S. Department of Justice. June 19, 2009. Retrieved January 8, 2014.
- "Voting Rights Act of 1965". History.com. November 9, 2009. Archived from the original on January 24, 2021. Retrieved January 24, 2021.
- "About Section 5 of the Voting Rights Act". U.S. Department of Justice. Retrieved April 21, 2014.
- Shelby County v. Holder, No. 12-96, 570 U.S. ___ (2013)
- Howe, Amy (June 25, 2013). "Details on Shelby County v. Holder: In Plain English". SCOTUSBlog. Retrieved July 1, 2013.
- Feder, Catalina; Miller, Michael G. (June 1, 2020). "Voter Purges After Shelby: Part of Special Symposium on Election Sciences". American Politics Research. doi:10.1177/1532673x20916426. ISSN 1532-673X. S2CID 221131969. Archived from the original on January 5, 2021.
- Fresh, Adriane (February 23, 2018). "The Effect of the Voting Rights Act on Enfranchisement: Evidence from North Carolina". The Journal of Politics. 80 (2): 713–718. doi:10.1086/697592. S2CID 158668168.
- Ang, Desmond (2019). "Do 40-Year-Old Facts Still Matter? Long-Run Effects of Federal Oversight under the Voting Rights Act". American Economic Journal: Applied Economics. 11 (3): 1–53. doi:10.1257/app.20170572. ISSN 1945-7782.
- Schuit, Sophie; Rogowski, Jon C. (December 1, 2016). "Race, Representation, and the Voting Rights Act". American Journal of Political Science. 61 (3): 513–526. doi:10.1111/ajps.12284. ISSN 1540-5907.
- Cascio, Elizabeth U.; Washington, Ebonya (February 1, 2014). "Valuing the Vote: The Redistribution of Voting Rights and State Funds following the Voting Rights Act of 1965". The Quarterly Journal of Economics. 129 (1): 379–433. doi:10.1093/qje/qjt028. ISSN 0033-5533. S2CID 617854.
- United States Constitution art. I, sec. 2, cl. 1
- May, Gary (April 9, 2013). Bending Toward Justice: The Voting Rights Act and the Transformation of American Democracy (Kindle ed.). New York, NY: Basic Books. ISBN 978-0-465-01846-8.
- "Landmark Legislation: Thirteenth, Fourteenth, & Fifteenth Amendments". United States Senate. Retrieved June 25, 2015.
- One or more of the preceding sentences incorporates text from a work in the public domain: South Carolina v. Katzenbach, 383 U.S. 301 (1966)
- Issacharoff, Samuel; Karlan, Pamela S.; Pildes, Richard H. (2012). The Law of Democracy: Legal Structure of the Political Process (4th ed.). New York, NY: Foundation Press. ISBN 978-1-59941-935-0.
- Anderson, Elizabeth; Jones, Jeffery (September 2002). "Race, Voting Rights, and Segregation: Direct Disenfranchisement". The Geography of Race in the United States. University of Michigan. Retrieved August 3, 2013.
- "Voting Rights Act (1965)". Our Documents. Archived from the original on September 25, 2020. Retrieved March 13, 2021.
- "Public Law 88-352" (PDF). Title I. Retrieved October 19, 2013.
- "Major Features of the Civil Rights Act of 1964". CongressLink. Dirksen Congressional Center. Archived from the original on December 6, 2014. Retrieved March 26, 2015.
- Williams, Juan (2002). Eyes on the Prize: America's Civil Rights Years, 1954–1965. New York, NY: Penguin Books. ISBN 978-0-14-009653-8.
- Kryn, Randy (1989). "James L. Bevel: The Strategist of the 1960s Civil Rights Movement". In Garrow, David J. (ed.). We Shall Overcome: The Civil Rights Movement in the United States in the 1950s and 1960s. Brooklyn, NY: Carlson Publishing. ISBN 978-0-926019-02-7.
- Kryn, Randy. "Movement Revision Research Summary Regarding James Bevel". Chicago Freedom Movement. Middlebury College. Retrieved April 7, 2014.
- Fleming, John (March 6, 2005). "The Death of Jimmie Lee Jackson". The Anniston Star. Archived from the original on January 13, 2014. Retrieved March 16, 2015.
- Fager, Charles (July 1985). Selma, 1965: The March That Changed the South (2nd ed.). Boston, MA: Beacon Press. ISBN 978-0-8070-0405-0.
- The March to Montgomery ~ Civil Rights Movement Archive.
- Baumgartner, Neil (December 2012). "James Reeb". Jim Crow Museum of Racist Memorabilia. Ferris State University. Retrieved November 16, 2020.
- Wicker, Tom (March 15, 1965). "Johnson Urges Congress at Joint Session to Pass Law Insuring Negro Vote". The New York Times. Retrieved August 3, 2013.
- "South Carolina v. Katzenbach, 383 U.S. 301 (1966), at 313 and 314. Footnotes omitted". Justia US Supreme Court Center. March 7, 1966. Retrieved January 6, 2021.
- "South Carolina v. Katzenbach, 383 U.S. 301 (1966), at 327-329. Footnotes omitted". Justia US Supreme Court Center. March 7, 1966. Retrieved January 6, 2021.
- "Voting Rights Act". The Association of Centers for the Study of Congress. Retrieved May 29, 2016.
- Williamson, Richard A. (1984). "The 1982 Amendments to the Voting Rights Act: A Statutory Analysis of the Revised Bailout Provisions". Washington University Law Review. 62 (1). Retrieved August 29, 2013.
- Boyd, Thomas M.; Markman, Stephen J. (1983). "The 1982 Amendments to the Voting Rights Act: A Legislative History". Washington and Lee Law Review. 40 (4). Retrieved August 31, 2013.
- Voting Rights Act of 1965 § 3(c); 52 U.S.C. § 10302(c) (formerly 42 U.S.C. § 1973a(c))
- Crum, Travis (2010). "The Voting Rights Act's Secret Weapon: Pocket Trigger Litigation and Dynamic Preclearance". The Yale Law Journal. 119. Archived from the original on August 30, 2013. Retrieved August 27, 2013.
- "Senate Vote #67 in 1965: To Invoke Cloture and End Debate on S. 1564, the Voting Rights Act of 1965". govtrack.us. Civic Impulse, LLC. Retrieved October 14, 2013.
- "Senate Vote #78 in 1965: To Pass S. 1564, the Voting Rights Act of 1965". govtrack.us. Civic Impulse, LLC. Retrieved October 14, 2013.
- "House Vote #86 in 1965: To Recommit H.R. 6400, the 1965 Voting Rights Act, with Instructions to Substitute the Text of H.R. 7896 Prohibiting the Denial to Any Person of the Right to Register or to Vote Because of his Failure to Pay a Poll Tax or Any Other Such Tax, for the Language of the Committee Amendment". govtrack.us. Civic Impulse, LLC. Retrieved October 14, 2013.
- "House Vote #87 in 1965: To Pass H.R. 6400, the Voting Rights Act of 1965". govtrack.us. Civic Impulse, LLC. Retrieved October 14, 2013.
- "House Vote #107 in 1965: To Agree to Conference Report on S. 1564, the Voting Rights Act". govtrack.us. Civic Impulse, LLC. Retrieved October 14, 2013.
- "Senate Vote #178 in 1965: To Agree to Conference Report on S. 1564, the Voting Rights Act of 1965". govtrack.us. Civic Impulse, LLC. Retrieved October 14, 2013.
- Moholtra, Ajay (June 1, 2008). "Rosa Parks Early Life & Childhood". Rosa Park Facts.com. Retrieved April 1, 2015.
- One or more of the preceding sentences incorporates text from a work in the public domain: "Section 4 of the Voting Rights Act". U.S. Department of Justice. Retrieved June 25, 2013.
- Voting Rights Act of 1965 § 14(c)(3); 52 U.S.C. § 10310(c)(3) (formerly 42 U.S.C. § 1973l(c)(3))
- Tucker, James Thomas (2006). "Enfranchising Language Minority Citizens: The Bilingual Election Provisions of the Voting Rights Act" (PDF). New York University Journal of Legislation and Public Policy. 10. Retrieved January 3, 2014.
- This article incorporates public domain material from the Congressional Research Service document: Laney, Garrine P. "The Voting Rights Act of 1965, As Amended: Its History and Current Issues" (PDF). Retrieved September 15, 2017.
- Reno v. Bossier Parish School Board, 528 U.S. 320 (2000)
- Georgia v. Ashcroft, 539 U.S. 461 (2003)
- Persily, Nathaniel (2007). "The Promise and Pitfalls of the New Voting Rights Act". Yale Law Journal. 117 (2): 174–254. doi:10.2307/20455790. JSTOR 20455790. Archived from the original on September 26, 2013. Retrieved September 21, 2013.
- "Moving Forward on the VRAA". NAACP Legal Defense and Educational Fund, Inc. Retrieved April 19, 2014.
- "H.R. 885: Voting Rights Amendment Act of 2015". govtrack.us. Retrieved December 27, 2015.
- "Reps. Sensenbrenner and Conyers Reintroduce Bipartisan Voting Rights Amendment Act of 2017". Congressman Jim Sensenbrenner. Retrieved November 15, 2019.
- Staats, Elmer B. (February 6, 1978). "Voting Rights Act: Enforcement Needs Strengthening". Report of the Comptroller General of the United States (GGD-78-19). Retrieved October 27, 2013.
- Voting Rights Act of 1965 § 2; 52 U.S.C. § 10301 (formerly 42 U.S.C. § 1973)
- Millhiser, Ian (September 18, 2020). "Chief Justice Roberts's lifelong crusade against voting rights, explained". Vox.com. Vox.com. Archived from the original on December 19, 2020. Retrieved January 3, 2021.
- Millhiser, Ian (October 2, 2020). "The Supreme Court will hear a case that could destroy what remains of the Voting Rights Act". Vox.com. Vox.com. Archived from the original on December 16, 2020. Retrieved January 3, 2021.
- Soronen, Lisa (October 8, 2020). "Supreme Court to Decide Significant Voting Case". ncsl.org. The National Conference of State Legislatures. Archived from the original on January 3, 2021. Retrieved January 3, 2021.
- Tokaji, Daniel P. (2010). "Public Rights and Private Rights of Action: The Enforcement of Federal Election Laws" (PDF). Indiana Law Review. 44. Archived from the original (PDF) on March 12, 2020. Retrieved February 25, 2014.
- Allen v. State Bd. of Elections, 393 U.S. 544 (1969)
- Mobile v. Bolden, 446 U.S. 55 (1980)
- One or more of the preceding sentences incorporates text from a work in the public domain: "Section 2 of the Voting Rights Act". U.S. Department of Justice. Retrieved November 17, 2013.
- Berman, Ari (March 1, 2021). "Voting Rights: Republicans Are Trying to Kill What's Left of the Voting Rights Act". Mother Jones. Archived from the original on March 3, 2021. Retrieved March 6, 2021.
- Denniston, Lyle (August 13, 2015). "Constitution Check: Is another key part of the Voting Rights Act in trouble?". National Constitution Center. Archived from the original on January 11, 2021. Retrieved January 11, 2021.
- Mcdonald, Laughlin (1985). "The Attack on Voting Rights". Southern Changes. 7 (5). Archived from the original on October 14, 2016. Retrieved February 26, 2017.
- "Voting Rights Enforcement and Reauthorization: The Department of Justice's Record of Enforcing the Temporary Voting Rights Act Provisions" (PDF). U.S. Commission on Civil Rights. May 2006. Archived from the original (PDF) on July 9, 2017. Retrieved August 26, 2018.
- Mulroy, Steven J. (1998). "The Way Out: A Legal Standard for Imposing Alternative Electoral Systems as Voting Rights Remedies". Harvard Civil Rights-Civil Liberties Law Review. 33. SSRN 1907880.
- "Section 2 Of The Voting Rights Act". The United States Department of Justice. September 11, 2020. Archived from the original on December 9, 2021. Retrieved January 3, 2021.
- United States Department of Justice (December 7, 2020). "2020-12-07 Brief amicus curiae of United States in Brnovich v. Democratic National Committee (United States Supreme Court case Number 19-1257) and Arizona Republican Party v. Democratic National Committee (United States Supreme Court case number 19-1258), at pages 2-3" (PDF). United States Supreme Court. Archived from the original (PDF) on December 8, 2020. Retrieved January 11, 2021.
- "Chisom v. Roemer, 501 U.S. 380 (1991), at 394-395". Justia US Supreme Court Center. Retrieved January 11, 2021.
- United States Department of Justice (December 7, 2020). "2020-12-07 Brief amicus curiae of United States in Brnovich v. Democratic National Committee (United States Supreme Court case Number 19-1257) and Arizona Republican Party v. Democratic National Committee (United States Supreme Court case number 19-1258), at pages 2-3 and 11" (PDF). United States Supreme Court. Archived from the original (PDF) on December 8, 2020. Retrieved January 11, 2021.
- United States Department of Justice (December 7, 2020). "2020-12-07 Brief amicus curiae of United States in Brnovich v. Democratic National Committee (United States Supreme Court case Number 19-1257) and Arizona Republican Party v. Democratic National Committee (United States Supreme Court case number 19-1258), at pages 11 and 3" (PDF). United States Supreme Court. Archived from the original (PDF) on December 8, 2020. Retrieved January 11, 2021.
- "Brnovich v. Democratic National Committee". SCOTUSblog. Archived from the original on January 10, 2021. Retrieved January 10, 2021.
- Mark Brnovich; Oramel H. Skinner; Rusty D. Crandell; et al. (April 27, 2020). "2020-04-27 Petition for a Writ of Certiorari to the United States Court of Appeals for the Ninth Circuit in Brnovich v. Democratic National Committee, at page 5" (PDF). Office of the Arizona Attorney General. United States Supreme Court. Archived from the original (PDF) on November 29, 2020. Retrieved January 10, 2021.
- One or more of the preceding sentences incorporates text from a work in the public domain: Senate Report No. 97-417 (1982), reprinted in 1982 U.S.C.C.A.N. 177
- Stern, Mark Joseph (June 25, 2018). "Jurisprudence: Neil Gorsuch Declares War on the Voting Rights Act". Slate. Archived from the original on March 2, 2021. Retrieved March 6, 2021.
- Paige A. Epstein. "Addressing Minority Vote Dilution Through StateVoting Rights Acts. In: University of Chicago Public Law & LegalTheory Working Paper No. 47 (February 2014)". University of Chicago Law School Chicago Unbound. Archived from the original on March 6, 2021. Retrieved March 6, 2021.
- Tokaji, Daniel P. (2006). "The New Vote Denial: Where Election Reform Meets the Voting Rights Act". South Carolina Law Review. 57. SSRN 896786.
- Adams, Ross J. (1989). "Whose Vote Counts? Minority Vote Dilution and Election Rights". Journal of Urban and Contemporary Law. 35. Retrieved March 26, 2015.
- "The Role of Section 2 - Redistricting & Vote Dilution". Redrawing the Lines. NAACP Legal Defense Fund. Archived from the original on April 2, 2015. Retrieved August 4, 2015.
- Johnson v. De Grandy, 512 U.S. 997 (1994)
- Thornburg v. Gingles, 478 U.S. 30 (1986)
- Bartlett v. Strickland, 556 U.S. 1 (2009)
- Roseman, Brandon (2009). "Equal Opportunities Do Not Always Equate to Equal Representation: How Bartlett v. Strickland is a Regression in the Face of the Ongoing Civil Rights Movement". North Carolina Central Law Review. 32. Retrieved April 13, 2019.
- Barnes, Robert (March 10, 2009). "Supreme Court Restricts Voting Rights Act's Scope". The Washington Post. Retrieved April 21, 2014.
- Campos v. City of Baytown, 840 F.2d 1240 (5th Cir.), cert denied, 492 U.S. 905 (1989)
- Concerned Citizens v. Hardee County, 906 F.2d 524 (11th Cir. 1990)
- Badillo v. City of Stockton, 956 F.2d 884 (9th Cir. 1992)
- Nixon v. Kent County, 76 F.3d 1381 (6th Cir. 1996) (en banc)
- Frank v. Forest County, 336 F.3d 570 (7th Cir. 2003)
- Gerken, Heather K. (2001). "Understanding the Right to an Undiluted Vote". Harvard Law Review. 114 (6): 1663–1743. doi:10.2307/1342651. JSTOR 1342651. Retrieved November 20, 2013.
- Kosterlitz, Mary J. (1987). "Thornburg v. Gingles: The Supreme Court's New Test for Analyzing Minority Vote Dilution". Catholic University Law Review. 36. Retrieved April 13, 2019.
- Goosby v. Town of Hempstead, 180 F.3d 476 (2d Cir. 1999)
- Lewis v. Alamance County, 99 F.3d 600 (4th Cir. 1996)
- League of United Latin American Citizens v. Clements, 999 F.3d 831 (5th Cir.) (en banc), cert. denied, 510 U.S. 1071 (1994)
- "Reynolds v. Sims, 377 U.S. 533 (1964), at 555 and 561-562". Justia US Supreme Court Center. June 15, 1964. Retrieved January 5, 2021.
- Holder v. Hall, 512 U.S. 874 (1994)
- Guinier, Lani (1994). "(e)Racing Democracy: The Voting Rights Cases" (PDF). Harvard Law Review. 108. Retrieved November 24, 2013.
- Butts v. City of New York, 779 F.2d 141 (2d Cir. 1985)
- Jeffers v. Clinton, 740 F.Supp. 585 (E.D. Ark. 1990) (three-judge court)
- Richardson v. Ramirez, 418 U.S. 24 (1974)
- Mississippi State Chapter, Operation Push v. Allain, 674 F.Supp. 1245 (N.D. Miss. 1987)
- Sherman, Jon (November 11, 2013). "Three Strategies (So Far) to Strike Down Strict Voter ID Laws Under Section 2 of the Voting Rights Act". Fair Elections Legal Network. Archived from the original on November 8, 2014. Retrieved June 25, 2015.
- Voting Rights Act of 1965 § 201; 52 U.S.C. § 10501 (formerly 42 U.S.C. § 1973aa)
- Pitts, Michael J. (2008). "The Voting Rights Act and the Era of Maintenance". Alabama Law Review. 59. SSRN 1105115.
- Tokaji, Daniel P. (2006). "Intent and Its Alternatives: Defending the New Voting Rights Act" (PDF). Alabama Law Review. 58. Retrieved January 7, 2014.
- Brewster, Henry; Dubler, Grant; Klym, Peter (2013). "Election Law Violations". American Criminal Law Review. 50. Retrieved April 13, 2019. (Subscription required.)
- De Oliveira, Pedro (2009). "Same Day Voter Registration: Post-Crawford Reform to Address the Growing Burdens on Lower-Income Voters". Georgetown Journal on Poverty Law and Policy. 16. Retrieved April 13, 2019. (Subscription required.)
- "Section 3 of the Voting Rights Act". U.S. Department of Justice. Retrieved March 4, 2013.
- "Brief for the Federal Respondent, Shelby County v. Holder, 2013 United States Supreme Court Briefs No. 12-96" (PDF). U.S. Department of Justice. Retrieved December 8, 2013.
- "GOP Has Tough Choices on Voting Rights Act". Yahoo! News. Associated Press. July 4, 2013. Retrieved January 8, 2014.
- Schwinn, Steven D. (September 30, 2013). "Justice Department to Sue North Carolina over Vote Restrictions". Law Professor Blogs Network. Retrieved January 1, 2014.
- Liptak, Adam (January 14, 2014). "Judge Reinstates Some Federal Oversight of Voting Practices for an Alabama City". The New York Times. Archived from the original on February 24, 2021. Retrieved March 2, 2014.
- Tucker, James Thomas (2007). "The Power of Observation: The Role of Federal Observers Under the Voting Rights Act". Michigan Journal of Race and Law. 13. Retrieved April 13, 2019.
- Voting Rights Act § 14(c)(2); 52 U.S.C. § 10310(c)(2) (formerly 42 U.S.C. § 1973l(c)(2))
- Totenberg, Nina. "Supreme Court Weighs Future Of Voting Rights Act". National Public Radio. National Public Radio. Archived from the original on October 8, 2020. Retrieved March 13, 2021.
- Liptak, A. (June 25, 2013). "Supreme Court Invalidates Key Part of Voting Rights Act". The New York Times. Retrieved June 26, 2013.
- Von Drehle, David (June 25, 2013). "High Court Rolls Back the Voting Rights Act of 1965". Time. Retrieved June 25, 2013.
- Voting Rights Act of 1965 § 5; 52 U.S.C. § 10304 (formerly 42 U.S.C. § 1973c)
- Allen v. State Board of Elections, 393 U.S. 544 (1969)
- "What Must Be Submitted Under Section 5". U.S. Department of Justice. Retrieved November 30, 2013.
- Hathorn v. Lovorn, 457 U.S. 255 (1982)
- Lopez v. Monterey County (Lopez I), 519 U.S. 9 (1996)
- Posner, Mark A. (2006). "The Real Story Behind the Justice Department's Implementation of Section 5 of the VRA: Vigorous Enforcement, As Intended by Congress". Duke Journal of Constitutional Law & Public Policy. 1 (1). Retrieved November 30, 2013.
- Morris v. Gressette, 432 U.S. 491 (1977)
- Porto L., Brian (1998). "What Changes in Voting Practices or Procedures Must be Precleared Under § 5 of Voting Rights Act of 1965 (42 U.S.C.A. § 1973c)". American Law Reports Federal. 146.
- Beer v. United States, 425 U.S. 130 (1976)
- McCrary, Peyton; Seaman, Christopher; Valelly, Richard (2006). "The End of Preclearance As We Knew It: How the Supreme Court Transformed Section 5 of the Voting Rights Act". Michigan Journal of Race and Law. 11. SSRN 1913565.
- Kousser, J. Morgan (2008). "The Strange, Ironic Career of Section 5 of the Voting Rights Act, 1965–2007". Texas Law Review. 86. Archived from the original on December 19, 2013. Retrieved November 16, 2013. – via EBSCOhost (Subscription may be required or content may be available in libraries.)
- Voting Rights Act of 1965 § 5(b); 52 U.S.C. § 10304(b) (formerly 42 U.S.C. § 1973c(b))
- Voting Rights Act of 1965 § 5(c); 52 U.S.C. § 10304(c) (formerly 42 U.S.C. 1973c(c))
- "Wesberry v. Sanders, 376 U.S. 1 (1964), at 17-18". Justia US Supreme Court Center. February 17, 1964. Retrieved January 5, 2021.
- "About Federal Observers and Election Monitoring". U.S. Department of Justice. Retrieved January 3, 2014.
- Voting Rights Act of 1965 § 4(a); 52 U.S.C. § 10303(a)(1)(F) (formerly 42 U.S.C. § 1973b(a)(1)(F))
- Northwest Austin Municipal Utility District No. 1 v. Holder, 557 U.S. 193 (2009)
- Liptak, Adam (June 23, 2009). "Justices Let Stand a Central Provision of Voting Rights Act". The New York Times. Retrieved June 22, 2009.
- Voting Rights Act of 1965 § 4(f)(4); 52 U.S.C. § 10303(f)(4) (formerly 42 U.S.C. § 1973b(f)(4))
- Groves, Robert M. (October 13, 2011). "Voting Rights Act Amendments of 2006, Determinations Under Section 203" (PDF). Federal Register. 76 (198). Archived from the original (PDF) on January 23, 2014. Retrieved February 23, 2017.
- One or more of the preceding sentences incorporates text from a work in the public domain: "Voting Rights Act (1965): Document Info". Our Documents. Retrieved September 8, 2013.
- Grofman, Bernard; Handley, Lisa (February 1991). "The Impact of the Voting Rights Act on Black Representation in Southern State Legislatures" (PDF). Legislative Studies Quarterly. 16 (1): 111. doi:10.2307/439970. JSTOR 439970. Retrieved January 5, 2014.
- Eilperin, Juliet (August 22, 2013). "What's Changed for African Americans Since 1963, By the Numbers". The Washington Post. Retrieved January 5, 2014.
- Katz, Ellen; Aisenbrey, Margaret; Baldwin, Anna; Cheuse, Emma; Weisbrodt, Anna (2006). "Documenting Discrimination in Voting: Judicial Findings Under Section 2 of the Voting Rights Act". University of Michigan Journal of Law Reform. 39. SSRN 1029386.
- Pildes, Richard H. (April 2011). "Why the Center Does Not Hold: The Causes of Hyperpolarized Democracy in America". California Law Review. 99. SSRN 1646989.
- Boyd, James (May 17, 1970). "Nixon's Southern Strategy: 'It's All in the Charts'" (PDF). The New York Times. Retrieved August 2, 2008.
- Voting Rights Act of 1965 § 4(e); 52 U.S.C. § 10303(e) (formerly 42 U.S.C. § 1973b(e))
- Lassiter v. Northampton County Board of Elections, 360 U.S. 45 (1959)
- Buss, William G. (January 1998). "Federalism, Separation of Powers, and the Demise of the Religious Freedom Restoration Act". Iowa Law Review. 83. Retrieved January 7, 2014. (Subscription required.)
- Katzenbach v. Morgan, 384 U.S. 641 (1966)
- Oregon v. Mitchell, 400 U.S. 112 (1970)
- Mississippi Republican Executive Opinion v. Brooks, 469 U.S. 1002 (1984)
- Kamen, Al (November 14, 1984). "Court Backs Voting Plan". The Washington Post. Retrieved June 30, 2017.
- Chung, Andrew (February 24, 2021). "U.S. Supreme Court set to weigh Republican-backed voting restrictions". Reuters. Archived from the original on February 27, 2021. Retrieved February 28, 2021.
- "In Supreme Court, GOP attorney defends voting restrictions by saying they help Republicans win". NBC News.
- "Lawyer says eliminating voting restrictions would put Republicans at a 'competitive disadvantage'". www.washingtonpost.com.
- Weinberg, Abigail. "A GOP lawyer says the quiet part loud in SCOTUS voting rights case".
- South Carolina v. Katzenbach, 383 U.S. 301 (1966)
- Posner, Mark A. (2006). "Time is Still on Its Side: Why Congressional Reauthorization of Section 5 of the Voting Rights Act Represents a Congruent and Proportional Response to Our Nation's History of Discrimination in Voting" (PDF). New York University Journal of Legislation and Public Policy. 10. Retrieved December 14, 2013.
- City of Rome v. United States, 446 U.S. 156 (1980)
- Lopez v. Monterey County (Lopez II), 525 U.S. 266 (1999)
- Harper, Charlotte Marx (2000). "Lopez v. Monterey County: A Remedy Gone Too Far?". Baylor Law Review. 52. Retrieved May 24, 2014.
- Liptak, Adam (June 22, 2009). "Justices Retain Oversight by U.S. on Voting". The New York Times. Retrieved January 21, 2014.
- Bravin, Jess (June 23, 2009). "Supreme Court Avoids Voting-Rights Act Fight". The Wall Street Journal. Archived from the original on March 7, 2014. Retrieved May 19, 2017.
- Sean Sullivan (February 27, 2013). "Everything You Need to Know about the Supreme Court Voting Rights Act Case". The Washington Post. Archived from the original on February 28, 2012. Retrieved February 27, 2013.
- Wilson, McKenzie (2015). "Piercing the Umbrella: The Dangerous Paradox of Shelby County v. Holder". Seton Hall Legislative Journal. 39. Retrieved April 13, 2019.
- Feder, Catalina; Miller, Michael G. (2020). "Voter Purges After Shelby". American Politics Research. 48 (6): 687–692. doi:10.1177/1532673x20916426. ISSN 1532-673X. S2CID 221131969.
- Shaw v. Reno (Shaw I), 509 U.S. 630 (1993)
- Miller v. Johnson, 515 U.S. 900 (1995)
- Ebaugh, Nelson (1997). "Refining the Racial Gerrymandering Claim: Bush v. Vera". Tulsa Law Journal. 33 (2). Retrieved December 30, 2013.
- Bush v. Vera, 517 U.S. 952 (1996)
- Ansolabehere, Stephen; Persily, Nathaniel; Stewart, Charles III (2010). "Race, Region, and Vote Choice in the 2008 Election: Implications for the Future of the Voting Rights Act". Harvard Law Review. 123 (6): 1385–1436.
- Berman, Ari (2015). Give Us the Ballot: The Modern Struggle for Voting Rights in America. New York, NY: Farrar, Straus and Giroux. ISBN 978-0-3741-5827-9.
- Bullock, Charles S. III, Ronald Keith Gaddie, and Justin J. Wert, eds. (2016). The Rise and Fall of the Voting Rights Act by (University of Oklahoma Press; 240 pages) focus on period between the 2006 revision of the 1965 act and the invalidation of one of its key provisions in Shelby County v. Holder (2013).
- Davidson, Chandler (1984). Minority Vote Dilution. Washington, D.C.: Howard University Press. ISBN 978-0-88258-156-9.
- Davidson, Chandler (1994). Quiet Revolution in the South: The Impact of the Voting Rights Act, 1965–1990. Princeton, NJ: Princeton University Press. ISBN 978-0-691-02108-9.
- Finley, Keith M. (2008). Delaying the Dream: Southern Senators and the Fight Against Civil Rights, 1938–1965. Baton Rouge, LA: Louisiana State University Press. ISBN 978-0-8071-3345-3.
- Garrow, David J. (1978). Protest at Selma: Martin Luther King, Jr., and the Voting Rights Act of 1965. New Haven, CT: Yale University Press. ISBN 978-0-300-02498-2.
- Lawson, Steven F. (1976). Black Ballots: Voting Rights in the South, 1944–1969. New York, NY: Columbia University Press. ISBN 978-0-7391-0087-5.
- Smooth, Wendy (September 2006). "Intersectionality in electoral politics: a mess worth making". Politics & Gender. 2 (3): 400–414. doi:10.1017/S1743923X06261087. S2CID 145812097.
|Wikisource has original text related to this article:|
- Text of original Act and 1970, 1975, and 1982 amendments (PDF)
- Voting Rights Enforcement and Reauthorization: An Examination of the Act's Section 5 Preclearance Provision, U.S. Commission on Civil Rights
- Voting Rights Act: Past, Present, and Future, Justice Talking
- The Voting Rights Act of 1965: Background and Overview (PDF), Congressional Research Service
- "The Selma to Montgomery Voting Rights March: Shaking the Conscience of the Nation", a National Park Service Teaching with Historic Places lesson plan
- Voting Rights Act: Evidence of Continued Need: Hearing before the Subcommittee on the Constitution of the Committee on the Judiciary, House of Representatives, One Hundred Ninth Congress, Second Session, March 8, 2006, Vol. 1 Vol. 2 Vol. 3 Vol. 4
- The Great Society Congress | https://library.kiwix.org/wikipedia_en_top_maxi/A/Voting_Rights_Act_of_1965 | 21 |
Cholera is an infection of the small intestine by some strains of the bacterium Vibrio cholerae. Symptoms may range from none, to mild, to severe. The classic symptom is large amounts of watery diarrhea that lasts a few days. Vomiting and muscle cramps may also occur. Diarrhea can be so severe that it leads within hours to severe dehydration and electrolyte imbalance. This may result in sunken eyes, cold skin, decreased skin elasticity, and wrinkling of the hands and feet. Dehydration can cause the skin to turn bluish. Symptoms start two hours to five days after exposure.
Image: a person with severe dehydration due to cholera, causing sunken eyes and wrinkled hands and skin.
- Symptoms: large amounts of watery diarrhea, vomiting, muscle cramps
- Complications: dehydration, electrolyte imbalance
- Usual onset: 2 hours to 5 days after exposure
- Causes: Vibrio cholerae spread by the fecal-oral route
- Risk factors: poor sanitation, not enough clean drinking water, poverty
- Diagnostic method: stool test
- Prevention: improved sanitation, clean water, cholera vaccines
- Treatment: oral rehydration therapy, zinc supplementation, intravenous fluids, antibiotics
- Frequency: 3–5 million people a year
Cholera is caused by a number of types of Vibrio cholerae, with some types producing more severe disease than others. It is spread mostly by unsafe water and unsafe food that has been contaminated with human feces containing the bacteria. Undercooked seafood is a common source. Humans are the only animal affected. Risk factors for the disease include poor sanitation, not enough clean drinking water, and poverty. There are concerns that rising sea levels will increase rates of disease. Cholera can be diagnosed by a stool test. A rapid dipstick test is available but is not as accurate.
Prevention methods against cholera include improved sanitation and access to clean water. Cholera vaccines that are given by mouth provide reasonable protection for about six months. They have the added benefit of protecting against another type of diarrhea caused by E. coli. The primary treatment is oral rehydration therapy—the replacement of fluids with slightly sweet and salty solutions. Rice-based solutions are preferred. Zinc supplementation is useful in children. In severe cases, intravenous fluids, such as Ringer's lactate, may be required, and antibiotics may be beneficial. Testing to see which antibiotic the cholera is susceptible to can help guide the choice.
Cholera affects an estimated 3–5 million people worldwide and causes 28,800–130,000 deaths a year. Although it is classified as a pandemic as of 2010, it is rare in the developed world. Children are mostly affected. Cholera occurs as both outbreaks and chronically in certain areas. Areas with an ongoing risk of disease include Africa and Southeast Asia. The risk of death among those affected is usually less than 5% but may be as high as 50%. No access to treatment results in a higher death rate. Descriptions of cholera are found as early as the 5th century BC in Sanskrit. The study of cholera in England by John Snow between 1849 and 1854 led to significant advances in the field of epidemiology. Seven large outbreaks have occurred over the last 200 years with millions of deaths.
Signs and symptoms
The primary symptoms of cholera are profuse diarrhea and vomiting of clear fluid. These symptoms usually start suddenly, half a day to five days after ingestion of the bacteria. The diarrhea is frequently described as "rice water" in nature and may have a fishy odor. An untreated person with cholera may produce 10 to 20 litres (3 to 5 US gal) of diarrhea a day. Severe cholera, without treatment, kills about half of affected individuals. If the severe diarrhea is not treated, it can result in life-threatening dehydration and electrolyte imbalances. Estimates of the ratio of asymptomatic to symptomatic infections have ranged from 3:1 to 100:1. Cholera has been nicknamed the "blue death" because a person's skin may turn bluish-gray from extreme loss of fluids.
Fever is rare and should raise suspicion for secondary infection. Patients can be lethargic, and might have sunken eyes, dry mouth, cold clammy skin, or wrinkled hands and feet. Kussmaul breathing, a deep and labored breathing pattern, can occur because of acidosis from stool bicarbonate losses and lactic acidosis associated with poor perfusion. Blood pressure drops due to dehydration, peripheral pulse is rapid and thready, and urine output decreases with time. Muscle cramping and weakness, altered consciousness, seizures, or even coma due to electrolyte imbalances are common, especially in children.
Cause

Transmission is usually through the fecal-oral route of contaminated food or water caused by poor sanitation. Most cholera cases in developed countries are a result of transmission by food, while in the developing world it is more often water. Food transmission can occur when people harvest seafood such as oysters in waters infected with sewage, as Vibrio cholerae accumulates in planktonic crustaceans and the oysters eat the zooplankton.
People infected with cholera often have diarrhea, and disease transmission may occur if this highly liquid stool, colloquially referred to as "rice-water", contaminates water used by others. A single diarrheal event can cause a one-million-fold increase in numbers of V. cholerae in the environment. The source of the contamination is typically other cholera sufferers when their untreated diarrheal discharge is allowed to get into waterways, groundwater or drinking water supplies. Drinking any contaminated water and eating any foods washed in the water, as well as shellfish living in the affected waterway, can cause a person to contract an infection. Cholera is rarely spread directly from person to person.
V. cholerae also exists outside the human body in natural water sources, either by itself or through interacting with phytoplankton, zooplankton, or biotic and abiotic detritus. Drinking such water can also result in the disease, even without prior contamination through fecal matter. However, selective pressures exist in the aquatic environment that may reduce the virulence of V. cholerae. Specifically, animal models indicate that the transcriptional profile of the pathogen changes as it prepares to enter an aquatic environment. This transcriptional change results in a loss of ability of V. cholerae to be cultured on standard media, a phenotype referred to as 'viable but non-culturable' (VBNC) or, more conservatively, 'active but non-culturable' (ABNC). One study indicates that the culturability of V. cholerae drops 90% within 24 hours of entering the water, and furthermore that this loss of culturability is associated with a loss of virulence.
About 100 million bacteria must typically be ingested to cause cholera in a normal healthy adult. This dose, however, is less in those with lowered gastric acidity (for instance those using proton pump inhibitors). Children are also more susceptible, with two- to four-year-olds having the highest rates of infection. Individuals' susceptibility to cholera is also affected by their blood type, with those with type O blood being the most susceptible. Persons with lowered immunity, such as persons with AIDS or malnourished children, are more likely to experience a severe case if they become infected. Any individual, even a healthy adult in middle age, can experience a severe case, and each person's case should be measured by the loss of fluids, preferably in consultation with a professional health care provider.
The cystic fibrosis genetic mutation known as delta-F508 in humans has been said to maintain a selective heterozygous advantage: heterozygous carriers of the mutation (who are thus not affected by cystic fibrosis) are more resistant to V. cholerae infections. In this model, the genetic deficiency in the cystic fibrosis transmembrane conductance regulator channel proteins interferes with bacteria binding to the intestinal epithelium, thus reducing the effects of an infection.
Mechanism

When consumed, most bacteria do not survive the acidic conditions of the human stomach. The few surviving bacteria conserve their energy and stored nutrients during the passage through the stomach by shutting down protein production. When the surviving bacteria exit the stomach and reach the small intestine, they must propel themselves through the thick mucus that lines the small intestine to reach the intestinal walls where they can attach and thrive.
Once the cholera bacteria reach the intestinal wall, they no longer need the flagella to move. The bacteria stop producing the protein flagellin to conserve energy and nutrients by changing the mix of proteins which they express in response to the changed chemical surroundings. On reaching the intestinal wall, V. cholerae start producing the toxic proteins that give the infected person a watery diarrhea. This carries the multiplying new generations of V. cholerae bacteria out into the drinking water of the next host if proper sanitation measures are not in place.
The cholera toxin (CTX or CT) is an oligomeric complex made up of six protein subunits: a single copy of the A subunit (part A), and five copies of the B subunit (part B), connected by a disulfide bond. The five B subunits form a five-membered ring that binds to GM1 gangliosides on the surface of the intestinal epithelium cells. The A1 portion of the A subunit is an enzyme that ADP-ribosylates G proteins, while the A2 chain fits into the central pore of the B subunit ring. Upon binding, the complex is taken into the cell via receptor-mediated endocytosis. Once inside the cell, the disulfide bond is reduced, and the A1 subunit is freed to bind with a human partner protein called ADP-ribosylation factor 6 (Arf6). Binding exposes its active site, allowing it to permanently ribosylate the Gs alpha subunit of the heterotrimeric G protein. This results in constitutive cAMP production, which in turn leads to the secretion of water, sodium, potassium, and bicarbonate into the lumen of the small intestine and rapid dehydration. The gene encoding the cholera toxin was introduced into V. cholerae by horizontal gene transfer. Virulent strains of V. cholerae carry a variant of a temperate bacteriophage called CTXφ.
Microbiologists have studied the genetic mechanisms by which the V. cholerae bacteria turn off the production of some proteins and turn on the production of other proteins as they respond to the series of chemical environments they encounter, passing through the stomach, through the mucous layer of the small intestine, and on to the intestinal wall. Of particular interest have been the genetic mechanisms by which cholera bacteria turn on the protein production of the toxins that interact with host cell mechanisms to pump chloride ions into the small intestine, creating an ionic pressure which prevents sodium ions from entering the cell. The chloride and sodium ions create a salt-water environment in the small intestines, which through osmosis can pull up to six liters of water per day through the intestinal cells, creating the massive amounts of diarrhea. The host can become rapidly dehydrated unless an appropriate mixture of dilute salt water and sugar is taken to replace the blood's water and salts lost in the diarrhea.
By inserting separate, successive sections of V. cholerae DNA into the DNA of other bacteria, such as E. coli that would not naturally produce the protein toxins, researchers have investigated the mechanisms by which V. cholerae responds to the changing chemical environments of the stomach, mucous layers, and intestinal wall. Researchers have discovered a complex cascade of regulatory proteins controls expression of V. cholerae virulence determinants. In responding to the chemical environment at the intestinal wall, the V. cholerae bacteria produce the TcpP/TcpH proteins, which, together with the ToxR/ToxS proteins, activate the expression of the ToxT regulatory protein. ToxT then directly activates expression of virulence genes that produce the toxins, causing diarrhea in the infected person and allowing the bacteria to colonize the intestine. Current research aims at discovering "the signal that makes the cholera bacteria stop swimming and start to colonize (that is, adhere to the cells of) the small intestine."
Amplified fragment length polymorphism fingerprinting of the pandemic isolates of V. cholerae has revealed variation in the genetic structure. Two clusters have been identified: Cluster I and Cluster II. For the most part, Cluster I consists of strains from the 1960s and 1970s, while Cluster II largely contains strains from the 1980s and 1990s, based on the change in the clone structure. This grouping of strains is best seen in the strains from the African continent.
In many areas of the world, antibiotic resistance is increasing within cholera bacteria. In Bangladesh, for example, most cases are resistant to tetracycline, trimethoprim-sulfamethoxazole, and erythromycin. Rapid diagnostic assay methods are available for the identification of multi-drug resistant cases. New generation antimicrobials have been discovered which are effective against cholera bacteria in in vitro studies.
Diagnosis

A rapid dipstick test is available to determine the presence of V. cholerae. In those samples that test positive, further testing should be done to determine antibiotic resistance. In epidemic situations, a clinical diagnosis may be made by taking a patient history and doing a brief examination. Treatment is usually started without or before confirmation by laboratory analysis.
Stool and swab samples collected in the acute stage of the disease, before antibiotics have been administered, are the most useful specimens for laboratory diagnosis. If an epidemic of cholera is suspected, the most common causative agent is V. cholerae O1. If V. cholerae serogroup O1 is not isolated, the laboratory should test for V. cholerae O139. However, if neither of these organisms is isolated, it is necessary to send stool specimens to a reference laboratory.
Infection with V. cholerae O139 should be reported and handled in the same manner as that caused by V. cholerae O1. The associated diarrheal illness should be referred to as cholera and must be reported in the United States.
Prevention

The World Health Organization (WHO) recommends focusing on prevention, preparedness, and response to combat the spread of cholera. They also stress the importance of an effective surveillance system. Governments can play a role in all of these areas.
Although cholera may be life-threatening, prevention of the disease is normally straightforward if proper sanitation practices are followed. In developed countries, where advanced water treatment and sanitation practices are nearly universal, cholera is rare. For example, the last major outbreak of cholera in the United States occurred in 1910–1911. Cholera is mainly a risk in developing countries.
Effective sanitation practices, if instituted and adhered to in time, are usually sufficient to stop an epidemic. There are several points along the cholera transmission path at which its spread may be halted:
- Sterilization: Proper disposal and treatment of all materials that may have come into contact with cholera victims' feces (e.g., clothing, bedding, etc.) are essential. These should be sanitized by washing in hot water, using chlorine bleach if possible. Hands that touch cholera patients or their clothing, bedding, etc., should be thoroughly cleaned and disinfected with chlorinated water or other effective antimicrobial agents.
- Sewage and fecal sludge management: In cholera-affected areas, sewage and fecal sludge need to be treated and managed carefully in order to stop the spread of this disease via human excreta. Provision of sanitation and hygiene is an important preventative measure. Open defecation, release of untreated sewage, or dumping of fecal sludge from pit latrines or septic tanks into the environment need to be prevented. In many cholera affected zones, there is a low degree of sewage treatment. Therefore, the implementation of dry toilets that do not contribute to water pollution, as they do not flush with water, may be an interesting alternative to flush toilets.
- Sources: Warnings about possible cholera contamination should be posted around contaminated water sources with directions on how to decontaminate the water (boiling, chlorination etc.) for possible use.
- Water purification: All water used for drinking, washing, or cooking should be sterilized by either boiling, chlorination, ozone water treatment, ultraviolet light sterilization (e.g., by solar water disinfection), or antimicrobial filtration in any area where cholera may be present. Chlorination and boiling are often the least expensive and most effective means of halting transmission. Cloth filters or sari filtration, though very basic, have significantly reduced the occurrence of cholera when used in poor villages in Bangladesh that rely on untreated surface water. Better antimicrobial filters, like those present in advanced individual water treatment hiking kits, are most effective. Public health education and adherence to appropriate sanitation practices are of primary importance to help prevent and control transmission of cholera and other diseases.
Surveillance and prompt reporting allow for containing cholera epidemics rapidly. Cholera exists as a seasonal disease in many endemic countries, occurring annually mostly during rainy seasons. Surveillance systems can provide early alerts to outbreaks, therefore leading to coordinated response and assist in preparation of preparedness plans. Efficient surveillance systems can also improve the risk assessment for potential cholera outbreaks. Understanding the seasonality and location of outbreaks provides guidance for improving cholera control activities for the most vulnerable. For prevention to be effective, it is important that cases be reported to national health authorities.
A number of safe and effective oral vaccines for cholera are available. The World Health Organization (WHO) has three prequalified oral cholera vaccines (OCVs): Dukoral, Shanchol, and Euvichol. Dukoral, an orally administered, inactivated whole-cell vaccine, has an overall efficacy of about 52% during the first year after being given and 62% in the second year, with minimal side effects. It is available in over 60 countries. However, it is not currently recommended by the Centers for Disease Control and Prevention (CDC) for most people traveling from the United States to endemic countries. The vaccine that the US Food and Drug Administration (FDA) recommends, Vaxchora, is an oral attenuated live vaccine that is effective as a single dose.
One injectable vaccine was found to be effective for two to three years. The protective efficacy was 28% lower in children less than five years old. However, as of 2010, it has limited availability. Work is under way to investigate the role of mass vaccination. The WHO recommends immunization of high-risk groups, such as children and people with HIV, in countries where this disease is endemic. If people are immunized broadly, herd immunity results, with a decrease in the amount of contamination in the environment.
An effective and relatively cheap method to prevent the transmission of cholera is the use of a folded sari (a long cloth garment) to filter drinking water. In Bangladesh this practice was found to decrease rates of cholera by nearly half. It involves folding a sari four to eight times. Between uses the cloth should be rinsed in clean water and dried in the sun to kill any bacteria on it. A nylon cloth appears to work as well but is not as affordable.
Treatment

Continued eating speeds the recovery of normal intestinal function. The WHO recommends this generally for cases of diarrhea no matter what the underlying cause. A CDC training manual specifically for cholera states: "Continue to breastfeed your baby if the baby has watery diarrhea, even when traveling to get treatment. Adults and older children should continue to eat frequently."
The most common error in caring for patients with cholera is to underestimate the speed and volume of fluids required. In most cases, cholera can be successfully treated with oral rehydration therapy (ORT), which is highly effective, safe, and simple to administer. Rice-based solutions are preferred to glucose-based ones due to greater efficiency. In severe cases with significant dehydration, intravenous rehydration may be necessary. Ringer's lactate is the preferred solution, often with added potassium. Large volumes and continued replacement until diarrhea has subsided may be needed. Ten percent of a person's body weight in fluid may need to be given in the first two to four hours. This method was first tried on a mass scale during the Bangladesh Liberation War and was found to have much success. Despite widespread beliefs, fruit juices and commercial fizzy drinks such as cola are not ideal for rehydration of people with serious infections of the intestines, and their excessive sugar content may even harm water uptake.
If commercially produced oral rehydration solutions are too expensive or difficult to obtain, solutions can be made. One such recipe calls for 1 liter of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar, and added mashed banana for potassium and to improve taste.
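The fluid arithmetic above is simple enough to express directly. The following is a minimal Python sketch, not taken from the source: the function names and the treatment of one kilogram of body weight as one litre of fluid are illustrative assumptions. It only restates the two figures quoted above, roughly ten percent of body weight replaced in the first two to four hours, and the homemade recipe of 1 litre of boiled water, half a teaspoon of salt, and six teaspoons of sugar.

```python
def initial_rehydration_volume_liters(body_weight_kg: float) -> float:
    """Estimate fluid to replace in the first two to four hours.

    Uses the figure quoted above (roughly ten percent of body weight),
    treating 1 kg of body weight as 1 litre of fluid.
    """
    return 0.10 * body_weight_kg


def homemade_ors(water_liters: float = 1.0) -> dict:
    """Scale the homemade oral rehydration recipe quoted above:
    1 litre of boiled water, 1/2 teaspoon of salt, 6 teaspoons of sugar.
    Mashed banana may be added for potassium and taste."""
    return {
        "boiled_water_liters": water_liters,
        "salt_teaspoons": 0.5 * water_liters,
        "sugar_teaspoons": 6.0 * water_liters,
    }


if __name__ == "__main__":
    # A 60 kg adult with severe dehydration: about 6 litres in the first 2-4 hours.
    print(initial_rehydration_volume_liters(60))
    # Quantities for two litres of the homemade solution.
    print(homemade_ors(2.0))
```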
Because acidosis is often present initially, the potassium level may be normal even though large losses have occurred. As the dehydration is corrected, potassium levels may decrease rapidly, and thus need to be replaced. This may be done by consuming foods high in potassium, like bananas or coconut water.
Antibiotic treatments for one to three days shorten the course of the disease and reduce the severity of the symptoms. Use of antibiotics also reduces fluid requirements. People will recover without them, however, if sufficient hydration is maintained. The WHO only recommends antibiotics in those with severe dehydration.
Doxycycline is typically used first line, although some strains of V. cholerae have shown resistance. Testing for resistance during an outbreak can help determine appropriate future choices. Other antibiotics proven to be effective include cotrimoxazole, erythromycin, tetracycline, chloramphenicol, and furazolidone. Fluoroquinolones, such as ciprofloxacin, also may be used, but resistance has been reported.
In Bangladesh, zinc supplementation reduced the duration and severity of diarrhea in children with cholera when given with antibiotics and rehydration therapy as needed. It reduced the length of disease by eight hours and the amount of diarrheal stool by 10%. Supplementation also appears to be effective in both treating and preventing infectious diarrhea due to other causes among children in the developing world.
Prognosis

For certain genetic strains of cholera, such as the one present during the 2010 epidemic in Haiti and the 2004 outbreak in India, death can occur within two hours of becoming ill.
Epidemiology

Cholera affects an estimated 3–5 million people worldwide, and causes 58,000–130,000 deaths a year as of 2010. This occurs mainly in the developing world. In the early 1980s, the death toll is believed to have been greater than three million a year. It is difficult to calculate exact numbers of cases, as many go unreported due to concerns that an outbreak may have a negative impact on the tourism of a country. Cholera remains both epidemic and endemic in many areas of the world. In October 2016, an outbreak of cholera began in war-ravaged Yemen. WHO called it "the worst cholera outbreak in the world".
Although much is known about the mechanisms behind the spread of cholera, this has not led to a full understanding of what makes cholera outbreaks happen in some places and not others. Lack of treatment of human feces and lack of treatment of drinking water greatly facilitate its spread, but bodies of water can serve as a reservoir, and seafood shipped long distances can spread the disease. Cholera was not known in the Americas for most of the 20th century, but it reappeared towards the end of that century.
History

The disease appears in the European literature as early as 1642, in the Dutch physician Jakob de Bondt's description of it in his De Medicina Indorum. (The "Indorum" of the title refers to the East Indies. He also gave the first European descriptions of other diseases.)
Early outbreaks in the Indian subcontinent are believed to have been the result of poor living conditions as well as the presence of pools of still water, both of which provide ideal conditions for cholera to thrive. The disease first spread by trade routes (land and sea) to Russia in 1817, later to the rest of Europe, and from Europe to North America and the rest of the world. Seven cholera pandemics have occurred in the past 200 years, with the seventh pandemic originating in Indonesia in 1961.
The first cholera pandemic occurred in the Bengal region of India, near Calcutta, from 1817 through 1824. The disease dispersed from India to Southeast Asia, the Middle East, Europe, and Eastern Africa. The movement of British Army and Navy ships and personnel is believed to have contributed to the range of the pandemic, since the ships carried people with the disease to the shores of the Indian Ocean, from Africa to Indonesia, and north to China and Japan. The second pandemic lasted from 1826 to 1837 and particularly affected North America and Europe as a result of advances in transportation and global trade, and increased human migration, including soldiers. The third pandemic erupted in 1846, persisted until 1860, extended to North Africa, and reached South America, for the first time specifically affecting Brazil. The fourth pandemic lasted from 1863 to 1875 and spread from India to Naples and Spain. The fifth pandemic, from 1881 to 1896, started in India and spread to Europe, Asia, and South America. The sixth pandemic lasted from 1899 to 1923. These epidemics were less fatal due to a greater understanding of the cholera bacteria. Egypt, the Arabian peninsula, Persia, India, and the Philippines were hit hardest during these epidemics, while other areas, like Germany in 1892 and Naples from 1910 to 1911, also experienced severe outbreaks. The seventh pandemic originated in 1961 in Indonesia and is marked by the emergence of a new strain, nicknamed El Tor, which still persists (as of 2018) in developing countries.
Since it became widespread in the 19th century, cholera has killed tens of millions of people. In Russia alone, between 1847 and 1851, more than one million people perished of the disease. It killed 150,000 Americans during the second pandemic. Between 1900 and 1920, perhaps eight million people died of cholera in India. Cholera became the first reportable disease in the United States due to the significant effects it had on health. John Snow, in England, was the first to identify the importance of contaminated water as its cause in 1854. Cholera is now no longer considered a pressing health threat in Europe and North America due to filtering and chlorination of water supplies, but still heavily affects populations in developing countries.
In the past, vessels flew a yellow quarantine flag if any crew members or passengers were suffering from cholera. No one aboard a vessel flying a yellow flag would be allowed ashore for an extended period, typically 30 to 40 days. In modern sets of international maritime signal flags, the quarantine flag is yellow and black.
Historically, many different claimed remedies have existed in folklore. Many of the older remedies were based on the miasma theory. Some believed that abdominal chilling made one more susceptible, and flannel and cholera belts were routine in army kits. In the 1854–1855 outbreak in Naples, homeopathic camphor was used, according to Hahnemann. T. J. Ritter's "Mother's Remedies" book lists tomato syrup as a home remedy from northern America. Elecampane was recommended in the United Kingdom, according to William Thomas Fernie.
Cholera cases are much less frequent in developed countries where governments have helped to establish water sanitation practices and effective medical treatments. The United States, for example, used to have a severe cholera problem similar to those in some developing countries. There were three large cholera outbreaks in the 1800s, which can be attributed to Vibrio cholerae's spread through interior waterways like the Erie Canal and routes along the Eastern Seaboard. The island of Manhattan in New York City touches the Atlantic Ocean, where cholera collected just off the coast. At this time, New York City did not have as effective a sanitation system as it does today, so cholera was able to spread.
Cholera morbus is a historical term that was used to refer to gastroenteritis rather than specifically cholera.
The bacterium was isolated in 1854 by Italian anatomist Filippo Pacini, but its exact nature and his results were not widely known.
Spanish physician Jaume Ferran i Clua developed a cholera inoculation in 1885, the first to immunize humans against a bacterial disease.
Russian-Jewish bacteriologist Waldemar Haffkine developed the first cholera vaccine in July 1892.
One of the major contributions to fighting cholera was made by the physician and pioneer medical scientist John Snow (1813–1858), who in 1854 found a link between cholera and contaminated drinking water. Dr. Snow proposed a microbial origin for epidemic cholera in 1849. In his major "state of the art" review of 1855, he proposed a substantially complete and correct model for the cause of the disease. In two pioneering epidemiological field studies, he was able to demonstrate human sewage contamination was the most probable disease vector in two major epidemics in London in 1854. His model was not immediately accepted, but it was seen to be the more plausible, as medical microbiology developed over the next 30 years or so.
Cities in developed nations made massive investment in clean water supply and well-separated sewage treatment infrastructures between the mid-1850s and the 1900s. This eliminated the threat of cholera epidemics from the major developed cities in the world. In 1883, Robert Koch identified V. cholerae with a microscope as the bacillus causing the disease.
Hemendra Nath Chatterjee, a Bengali scientist, was the first to formulate and demonstrate the effectiveness of oral rehydration salt (ORS) for diarrhea. In his 1953 paper, published in The Lancet, he states that promethazine can stop vomiting during cholera and that oral rehydration is then possible. The formulation of the fluid replacement solution was 4 g of sodium chloride, 25 g of glucose and 1000 ml of water.
Robert Allan Phillips, working at the US Naval Medical Research Unit Two in Southeast Asia, evaluated the pathophysiology of the disease using modern laboratory chemistry techniques and developed a protocol for rehydration. His research led the Lasker Foundation to award him its prize in 1967.
More recently, in 2002, Alam, et al., studied stool samples from patients at the International Centre for Diarrhoeal Disease in Dhaka, Bangladesh. From the various experiments they conducted, the researchers found a correlation between the passage of V. cholerae through the human digestive system and an increased infectivity state. Furthermore, the researchers found the bacterium creates a hyperinfected state where genes that control biosynthesis of amino acids, iron uptake systems, and formation of periplasmic nitrate reductase complexes were induced just before defecation. These induced characteristics allow the cholera vibrios to survive in the "rice water" stools, an environment of limited oxygen and iron, of patients with a cholera infection.
Society and culture
In many developing countries, cholera still reaches its victims through contaminated water sources, and countries without proper sanitation techniques have greater incidence of the disease. Governments can play a role in this. In 2008, for example, the Zimbabwean cholera outbreak was due partly to the government's role, according to a report from the James Baker Institute. The Haitian government's inability to provide safe drinking water after the 2010 earthquake led to an increase in cholera cases as well.
Similarly, South Africa's cholera outbreak was exacerbated by the government's policy of privatizing water programs. The wealthy elite of the country were able to afford safe water while others had to use water from cholera-infected rivers.
According to Rita R. Colwell of the James Baker Institute, if cholera does begin to spread, government preparedness is crucial. A government's ability to contain the disease before it extends to other areas can prevent a high death toll and the development of an epidemic or even pandemic. Effective disease surveillance can ensure that cholera outbreaks are recognized as soon as possible and dealt with appropriately. Often, this allows public health programs to determine and control the cause of the cases, whether it is unsanitary water or seafood that has accumulated high concentrations of Vibrio cholerae. An effective surveillance program therefore contributes to a government's ability to prevent cholera from spreading. In 2000, in the state of Kerala in India, the Kottayam district was declared "cholera-affected"; this pronouncement led to task forces that concentrated on educating citizens, holding 13,670 information sessions about human health. These task forces promoted boiling water to make it safe, and provided chlorine and oral rehydration salts. Ultimately, this helped to control the spread of the disease to other areas and to minimize deaths. By contrast, researchers have shown that most of the citizens infected during the 1991 cholera outbreak in Bangladesh lived in rural areas and were not recognized by the government's surveillance program, which hindered physicians' ability to detect cholera cases early.
According to Colwell, the quality and inclusiveness of a country's health care system also affect the control of cholera, as they did in the Zimbabwean cholera outbreak. While sanitation practices are important, a country in which the government responds quickly and has vaccines readily available will have a lower cholera death toll. Affordability of vaccines can be a problem; if governments do not provide vaccinations, only the wealthy may be able to afford them and there will be a greater toll on the country's poor.
Beyond direct investment in public health care and water sanitation, governments can also affect cholera control and the effectiveness of a response indirectly. A country's governance shapes its ability to prevent disease and to control its spread. A speedy government response backed by a fully functioning health care system and adequate financial resources can prevent cholera's spread, limiting deaths and also the disruption to education that occurs when children are kept out of school to minimize the risk of infection.
- Tchaikovsky's death has traditionally been attributed to cholera, most probably contracted through drinking contaminated water several days earlier. Tchaikovsky's mother died of cholera, and his father became sick with cholera at this time but made a full recovery. Some scholars, however, including English musicologist and Tchaikovsky authority David Brown and biographer Anthony Holden, have theorized that his death was a suicide.
- 2010 Haiti cholera outbreak. Ten months after the 2010 earthquake, an outbreak swept over Haiti, traced to a United Nations base of peacekeepers from Nepal. This marks the worst cholera outbreak in recent history, as well as the best documented cholera outbreak in modern public health.
- Sadi Carnot, Physicist, a founder of thermodynamics (d. 1832)
- Charles X, King of France (d. 1836)
- James K. Polk, eleventh president of the United States (d. 1849)
- Carl von Clausewitz, Prussian soldier and German military theorist (d. 1831)
- Elliot Bovill, Chief Justice of the Straits Settlements (1893)
- "Cholera vaccines: WHO position paper" (PDF). Wkly. Epidemiol. Rec. 85 (13): 117–128. March 26, 2010. PMID 20349546. Archived (PDF) from the original on April 13, 2015.
- "Cholera – Vibrio cholerae infection Information for Public Health & Medical Professionals". Centers for Disease Control and Prevention. January 6, 2015. Archived from the original on 20 March 2015. Retrieved 17 March 2015.
- Finkelstein, Richard. "Medical microbiology". Archived from the original on 1 September 2017. Retrieved 14 August 2016.
- Harris, JB; LaRocque, RC; Qadri, F; Ryan, ET; Calderwood, SB (30 June 2012). "Cholera". Lancet. 379 (9835): 2466–76. doi:10.1016/s0140-6736(12)60436-x. PMC 3761070. PMID 22748592.
- "Cholera – Vibrio cholerae infection Treatment". Centers for Disease Control and Prevention. November 7, 2014. Archived from the original on 11 March 2015. Retrieved 17 March 2015.
- GBD 2015 Mortality and Causes of Death, Collaborators. (8 October 2016). "Global, regional, and national life expectancy, all-cause mortality, and cause-specific mortality for 249 causes of death, 1980–2015: a systematic analysis for the Global Burden of Disease Study 2015". Lancet. 388 (10053): 1459–1544. doi:10.1016/s0140-6736(16)31012-1. PMC 5388903. PMID 27733281.
- Bailey, Diane (2011). Cholera (1st ed.). New York: Rosen Pub. p. 7. ISBN 978-1-4358-9437-2. Archived from the original on 2016-12-03.
- "Sources of Infection & Risk Factors". Centers for Disease Control and Prevention. November 7, 2014. Archived from the original on 12 March 2015. Retrieved 17 March 2015.
- "Diagnosis and Detection". Centers for Disease Control and Prevention. February 10, 2015. Archived from the original on 15 March 2015. Retrieved 17 March 2015.
- "Cholera – Vibrio cholerae infection". Centers for Disease Control and Prevention. October 27, 2014. Archived from the original on 17 March 2015. Retrieved 17 March 2015.
- Timmreck, Thomas C. (2002). An introduction to epidemiology (3. ed.). Sudbury, MA: Jones and Bartlett Publishers. p. 77. ISBN 978-0-7637-0060-7. Archived from the original on 2016-12-03.
- "Cholera's seven pandemics". CBC. 9 May 2008. Retrieved 15 July 2018.
- Sack DA, Sack RB, Nair GB, Siddique AK (January 2004). "Cholera". Lancet. 363 (9404): 223–33. doi:10.1016/S0140-6736(03)15328-7. PMID 14738797.
- Azman AS, Rudolph KE, Cummings DA, Lessler J (November 2012). "The incubation period of cholera: A systematic review". J. Infect. 66 (5): 432–438. doi:10.1016/j.jinf.2012.11.013. PMC 3677557. PMID 23201968.
- King AA, Ionides EL, Pascual M, Bouma MJ (August 2008). "Inapparent infections and cholera dynamics" (PDF). Nature. 454 (7206): 877–80. Bibcode:2008Natur.454..877K. doi:10.1038/nature07084. hdl:2027.42/62519. PMID 18704085.
- Greenough, William B. (2 January 2008). "The blue death Disease, disaster, and the water we drink". The Journal of Clinical Investigation. 118 (1): 4. doi:10.1172/JCI34394. PMC 2171164.
- McElroy, Ann; Townsend, Patricia K. (2009). Medical Anthropology in Ecological Perspective. Boulder, CO: Westview. p. 375. ISBN 978-0-8133-4384-6.
- Rita Colwell. Oceans, Climate, and Health: Cholera as a Model of Infectious Diseases in a Changing Environment. Rice University: James A Baker III Institute for Public Policy. Archived from the original on 2013-10-26. Retrieved 2013-10-23.
- Ryan KJ; Ray CG, eds. (2004). Sherris Medical Microbiology (4th ed.). McGraw Hill. pp. 376–7. ISBN 978-0-8385-8529-0.
- "Cholera Biology and Genetics | NIH: National Institute of Allergy and Infectious Diseases". www.niaid.nih.gov. Retrieved 2017-12-05.
- Nelson, EJ; Harris, JB; Morris, JG; Calderwood, SB; Camilli, A (7 October 2009). "Cholera transmission: the host, pathogen and bacteriophage dynamic". Nat Rev Microbiol. 7 (10): 693–702. doi:10.1038/nrmicro2204. PMC 3842031. PMID 19756008.
- Nelson, EJ; Chowdhury, A; Flynn, J; Schild, Stefan; Bourassa, L; Shao, Yue; LaRocque, RC; Calderwood, SB; Qadri, F; Camilli, A (24 October 2008). "Transmission of Vibrio cholerae Is Antagonized by Lytic Phage and Entry into the Aquatic Environment". PLoS Pathog. 4 (10): e1000187. doi:10.1371/journal.ppat.1000187. PMC 2563029. PMID 18949027.
- Archivist (1997). "Cholera phage discovery". Arch. Dis. Child. 76 (3): 274. doi:10.1136/adc.76.3.274. PMC 1717096. Archived from the original on 2008-12-16.
- Prevention and control of cholera outbreaks: WHO policy and recommendations Archived 2011-11-22 at the Wayback Machine, World Health Organization, Regional Office for the Eastern Mediterranean, undated but citing sources from '07, '04, '03, '04, and '05.
- Bertranpetit J, Calafell F (1996). "Genetic and geographical variability in cystic fibrosis: evolutionary considerations". Ciba Found. Symp. Novartis Foundation Symposia. 197: 97–114, discussion 114–8. doi:10.1002/9780470514887.ch6. ISBN 9780470514887. PMID 8827370.
- Almagro-Moreno, S; Pruss, K; Taylor, RK (May 2015). "Intestinal Colonization Dynamics of Vibrio cholerae". PLOS Pathog. 11 (5): e1004787. doi:10.1371/journal.ppat.1004787. PMC 4440752. PMID 25996593.
- O'Neal CJ, Jobling MG, Holmes RK, Hol WG (2005). "Structural basis for the activation of cholera toxin by human ARF6-GTP". Science. 309 (5737): 1093–6. Bibcode:2005Sci...309.1093O. doi:10.1126/science.1113398. PMID 16099990.
- DiRita VJ, Parsot C, Jander G, Mekalanos JJ (June 1991). "Regulatory cascade controls virulence in Vibrio cholerae". Proc. Natl. Acad. Sci. U.S.A. 88 (12): 5403–7. Bibcode:1991PNAS...88.5403D. doi:10.1073/pnas.88.12.5403. PMC 51881. PMID 2052618.
- Lan R, Reeves PR (January 2002). "Pandemic Spread of Cholera: Genetic Diversity and Relationships within the Seventh Pandemic Clone of Vibrio cholerae Determined by Amplified Fragment Length Polymorphism". J. Clin. Microbiol. 40 (1): 172–181. doi:10.1128/JCM.40.1.172-181.2002. ISSN 0095-1137. PMC 120103. PMID 11773113.
- Sack DA, Sack RB, Chaignat CL (August 2006). "Getting serious about cholera". N. Engl. J. Med. 355 (7): 649–51. doi:10.1056/NEJMp068144. PMID 16914700.
- Mackay IM, ed. (2007). Real-Time PCR in microbiology: From diagnosis to characterization. Caister Academic Press. ISBN 978-1-904455-18-9.
- Ramamurthy T (2008). "Antibiotic resistance in Vibrio cholerae". Vibrio cholerae: Genomics and molecular biology. Caister Academic Press. ISBN 978-1-904455-33-2.
- "Laboratory Methods for the Diagnosis of Epidemic Dysentery and Cholera" (PDF). Centers for Disease Control and Prevention. 1999. Archived (PDF) from the original on 2017-06-23. Retrieved 2017-06-30.
- "Cholera Fact Sheet", World Health Organization. who.int Archived 2012-05-05 at the Wayback Machine. Retrieved November 5, 2013.
- "Cholera Kills Boy. All Other Suspected Cases Now in Quarantine and Show No Alarming Symptoms" (PDF). The New York Times. July 18, 1911. Retrieved 2008-07-28.
The sixth death from cholera since the arrival in this port from Naples of the steamship Moltke, thirteen days ago, occurred yesterday at Swineburne Island. The victim was Francesco Farando, 14 years old.
- "More Cholera in Port". Washington Post. October 10, 1910. Archived from the original on December 16, 2008. Retrieved 2008-12-11.
A case of cholera developed today in the steerage of the Hamburg-American liner Moltke, which has been detained at quarantine as a possible cholera carrier since Monday last. Dr. A.H. Doty, health officer of the port, reported the case tonight with the additional information that another cholera patient from the Moltke is under treatment at Swinburne Island.
- Cisneros, Blanca Jimenez; Rose, Joan B. (2009-03-24). Urban Water Security: Managing Risks: UNESCO-IHP. CRC Press. ISBN 978-0-203-88162-0.
- Drasar, B. S.; Forrest, B. D. (2012-12-06). Cholera and the Ecology of Vibrio cholerae. Springer Science & Business Media. p. 24. ISBN 978-94-009-1515-2.
- Singer, Merrill (2016-05-31). A Companion to the Anthropology of Environmental Health. John Wiley & Sons. p. 219. ISBN 978-1-118-78699-4.
- (www.dw.com), Deutsche Welle. "Starting a poop to compost movement | Global Ideas | DW | 09.06.2015". DW.COM. Retrieved 2017-10-01.
- "Cholera and food safety" (PDF). World Health Organization. Archived (PDF) from the original on 2017-08-21. Retrieved 2017-08-20.
- "Cholera: prevention and control". Health topics. WHO. 2008. Archived from the original on 2008-12-14. Retrieved 2008-12-08.
- Sinclair D, Abba K, Zaman K, Qadri F, Graves PM (2011). Sinclair D (ed.). "Oral vaccines for preventing cholera". Cochrane Database Syst. Rev. (3): CD008603. doi:10.1002/14651858.CD008603.pub2. PMC 6532691. PMID 21412922.
- "Is a vaccine available to prevent cholera?". CDC disease info: Cholera. 2010-10-22. Archived from the original on 2010-10-26. Retrieved 2010-10-24.
- FDA Product Approval – Immunization Action Coalition Archived 2017-04-15 at the Wayback Machine
- Graves PM, Deeks JJ, Demicheli V, Jefferson T (2010). Graves PM (ed.). "Vaccines for preventing cholera: killed whole cell or other subunit vaccines (injected)". Cochrane Database Syst. Rev. (8): CD000974. doi:10.1002/14651858.CD000974.pub2. PMC 6532721. PMID 20687062.
- "Cholera vaccines". Health topics. WHO. 2008. Archived from the original on 2010-02-16. Retrieved 2010-02-01.
- Ramamurthy T (2010). Epidemiological and Molecular Aspects on Cholera. Springer. p. 330. ISBN 978-1-60327-265-0. Archived from the original on 2015-11-07.
- Merrill RM (2010). Introduction to epidemiology (5th ed.). Sudbury, MA: Jones and Bartlett Publishers. p. 43. ISBN 978-0-7637-6622-1. Archived from the original on 2015-11-06.
- Starr C (2007). Biology: Today and Tomorrow with Physiology (2 ed.). Cengage Learning. p. 563. ISBN 978-1-111-79701-0. Archived from the original on 2015-11-07.
- THE TREATMENT OF DIARRHOEA, A manual for physicians and other senior health workers Archived 2011-10-19 at the Wayback Machine, World Health Organization, 2005. See page 10 (14 in PDF) and esp chapter 5; "MANAGEMENT OF SUSPECTED CHOLERA", pages 16–17 (20–21 in PDF).
- "Community Health Worker Training Materials for Cholera Prevention and Control" (PDF). Centers for Disease Control and Prevention. Archived (PDF) from the original on 2017-07-02.
- globalhealthcenter.umn.edu Archived 2013-12-03 at the Wayback Machine
- The Civil War That Killed Cholera Archived 2013-12-20 at the Wayback Machine, foreignpolicy.com.
- "Sugary drinks worsen stomach 'bug'". BBC News, Health, 22 April 2009, citing the National Institute for Health and Clinical Excellence. Archived from the original on 2015-12-22. Retrieved 2015-12-18.
- "Oral Rehydration Solutions: Made at Home". The Mother and Child Health and Education Trust. 2010. Archived from the original on 2010-11-24. Retrieved 2010-10-29.
- "First steps for managing an outbreak of acute diarrhea" (PDF). World Health Organization Global Task Force on Cholera Control. Archived (PDF) from the original on August 5, 2014. Retrieved November 23, 2013.
- Cholera Treatment (Report). Centers for Disease Control and Prevention (CDC). November 28, 2011. Archived from the original on March 11, 2015.
- "Cholera treatment". Molson Medical Informatics. 2007. Archived from the original on 6 November 2012. Retrieved 2008-01-03.
- Krishna BV, Patil AB, Chandrasekhar MR (March 2006). "Fluoroquinolone-resistant Vibrio cholerae isolated during a cholera outbreak in India". Trans. R. Soc. Trop. Med. Hyg. 100 (3): 224–6. doi:10.1016/j.trstmh.2005.07.007. PMID 16246383.
- Leibovici-Weissman, Y; Neuberger, A; Bitterman, R; Sinclair, D; Salam, MA; Paul, M (19 June 2014). "Antimicrobial drugs for treating cholera". Cochrane Database Syst. Rev. (6): CD008625. doi:10.1002/14651858.CD008625.pub2. PMC 4468928. PMID 24944120.
- Cholera-Zinc Treatment (Report). Centers for Disease Control and Prevention (CDC). November 28, 2011. Archived from the original on December 3, 2013.
- Telmesani AM (May 2010). "Oral rehydration salts, zinc supplement and rota virus vaccine in the management of childhood acute diarrhea". J. Fam. Community Med. 17 (2): 79–82. doi:10.4103/1319-1683.71988. PMC 3045093. PMID 21359029.
- Todar K. "Vibrio cholerae and Asiatic Cholera". Todar's Online Textbook of Bacteriology. Archived from the original on 2010-12-28. Retrieved 2010-12-20.
- Presenter: Richard Knox (10 December 2010). "NPR News". Morning Edition. NPR.
- Lozano R, Naghavi M, Foreman K, Lim S, Shibuya K, Aboyans V, Abraham J, Adair T, Aggarwal R, Ahn SY, et al. (December 15, 2012). "Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010". Lancet. 380 (9859): 2095–128. doi:10.1016/S0140-6736(12)61728-0. hdl:10536/DRO/DU:30050819. PMID 23245604.
- Reidl J, Klose KE (June 2002). "Vibrio cholerae and cholera: out of the water and into the host" (PDF). FEMS Microbiol. Rev. 26 (2): 125–39. doi:10.1111/j.1574-6976.2002.tb00605.x. PMID 12069878.
- Johannes Bruwer (25 June 2017). "The horrors of Yemen's spiralling cholera crisis". BBC.
- Dwyer, Colin. "Yemen Now Faces 'The Worst Cholera Outbreak In The World,' U.N. Says". National Public Radio. Retrieved 25 June 2017.
- Blake PA (1993). "Epidemiology of cholera in the Americas". Gastroenterol. Clin. N. Am. 22 (3): 639–60. PMID 7691740.
- "All Entries by BONDT, Jacob de, Jacobus Bontius: HistoryofMedicine.com". www.historyofmedicine.com. Retrieved 2019-07-23.
- Rosenberg, Charles E. (1987). The cholera years: the United States in 1832, 1849 and 1866. Chicago: University of Chicago Press. ISBN 978-0-226-72677-9.
- "Cholera's seven pandemics Archived 2016-03-02 at the Wayback Machine". CBC News. October 22, 2010.
- Hays, JN (2005). Epidemics and Pandemics: Their Impacts on Human History. ABC-CLIO. p. 193. ISBN 978-1-85109-658-9.
- McNeill, William H, Plagues and People, p. 268.
- McNeil J. Something New Under The Sun: An Environmental History of the Twentieth Century World (The Global Century Series).
- "Cholera – Vibrio cholerae infection | Cholera | CDC". www.cdc.gov. 2017-05-16. Retrieved 2018-04-04.
- Aberth, John (2011). Plagues in World History. Lanham, MD: Rowman & Littlefield. p. 102. ISBN 978-0-7425-5705-5.
- Kelley Lee (2003) "Health impacts of globalization: towards global governance". Palgrave Macmillan. p.131. ISBN 0-333-80254-3
- Geoffrey A. Hosking (2001). "Russia and the Russians: a history". Harvard University Press. p. 9. ISBN 0-674-00473-6
- Byrne JP (2008). Encyclopedia of Pestilence, Pandemics, and Plagues: A–M. ABC-CLIO. p. 99. ISBN 978-0-313-34102-1.
- J. N. Hays (2005). "Epidemics and pandemics: their impacts on human history". p.347. ISBN 1-85109-658-2
- Sehdev PS (November 2002). "The origin of quarantine". Clin. Infect. Dis. 35 (9): 1071–2. doi:10.1086/344062. PMID 12398064.
- Renbourn, E. T. (2012). "The History of the Flannel Binder and Cholera Belt". Med. Hist. 1 (3): 211–25. doi:10.1017/S0025727300021281. PMC 1034286. PMID 13440256.
- www.legatum.sk Archived 2013-05-14 at the Wayback Machine, The American Homoeopathic Review Vol. 06 No. 11-12, 1866, pages 401–403
- "Cholera Infantum, Tomatoes Will Relieve". October 13, 2008. Archived from the original on December 24, 2013.
- "Cholera", World Health Organization. who.int Archived 2013-10-25 at the Wayback Machine
- Pyle GF (2010). "The Diffusion of Cholera in the United States in the Nineteenth Century". Geogr. Anal. 1: 59–75. doi:10.1111/j.1538-4632.1969.tb00605.x. PMID 11614509.
- Lacey SW (1995). "Cholera: calamitous past, ominous future". Clin. Infect. Dis. 20 (5): 1409–19. doi:10.1093/clinids/20.5.1409. PMID 7620035.
- Charles E. Rosenberg (2009). The Cholera Years the United States in 1832, 1849, and 1866. Chicago: University of Chicago Press. p. 74. ISBN 978-0-226-72676-2. Archived from the original on 2015-11-09.
- Filippo Pacini (1854) "Osservazioni microscopiche e deduzioni patologiche sul cholera asiatico" Archived 2015-11-18 at the Wayback Machine (Microscopic observations and pathological deductions on Asiatic cholera), Gazzetta Medica Italiana: Toscana, 2nd series, 4 (50) : 397–401; 4 (51) : 405–412.
- Reprinted (more legibily) as a pamphlet. Archived 2015-11-10 at the Wayback Machine
- onlinelibrary.wiley.com Archived 2015-02-11 at the Wayback Machine
- haffkineinstitute.org Archived 2015-09-24 at the Wayback Machine
- Dr John Snow, The mode of communication of cholera Archived 2015-11-06 at the Wayback Machine, 2nd ed. (London, England: John Churchill, 1855).
- Aberth, John. Plagues in World History. Lanham, MD: Rowman & Littlefield, 2011, 101.
- Ruxin, JN (October 1994). "Magic bullet: the history of oral rehydration therapy". Medical History. 38 (4): 363–97. doi:10.1017/s0025727300036905. PMC 1036912. PMID 7808099.
- CHATTERJEE, HN (21 November 1953). "Control of vomiting in cholera and oral replacement of fluid". Lancet (London, England). 265 (6795): 1063. doi:10.1016/s0140-6736(53)90668-0. PMID 13110052.
- "Sambhu Nath De". Inmemory. Retrieved 2019-12-05.
- "Albert Lasker Clinical Medical Research Award". Lasker Foundation. Archived from the original on September 1, 2017. Retrieved June 30, 2017.
- Merrell DS, Butler SM, Qadri F, Dolganov NA, Alam A, Cohen MB, Calderwood SB, Schoolnik GK, Camilli A (June 2002). "Host-induced epidemic spread of the cholera bacterium". Nature. 417 (6889): 642–5. Bibcode:2002Natur.417..642M. doi:10.1038/nature00778. PMC 2776822. PMID 12050664.
- "Cholera vaccines. A brief summary of the March 2010 position paper" (PDF). World Health Organization. Retrieved September 19, 2013.
- Walton DA, Ivers LC (2011). "Responding to cholera in post-earthquake Haiti". N. Engl. J. Med. 364 (1): 3–5. doi:10.1056/NEJMp1012997. PMID 21142690.
- Pauw J (2003). "The politics of underdevelopment: metered to death-how a water experiment caused riots and a cholera epidemic". Int. J. Health Serv. 33 (4): 819–30. doi:10.2190/kf8j-5nqd-xcyu-u8q7. PMID 14758861.
- John TJ, Rajappan K, Arjunan KK (2004). "Communicable diseases monitored by disease surveillance in Kottayam district, Kerala state, India". Indian J. Med. Res. 120 (2): 86–93. PMID 15347857.
- Siddique AK, Zaman K, Baqui AH, Akram K, Mutsuddy P, Eusof A, Haider K, Islam S, Sack RB (June 1992). "Cholera epidemics in Bangladesh: 1985–1991". J. Diarrhoeal Dis. Res. 10 (2): 79–86. PMID 1500643. Archived from the original on 2015-11-17.
- DeRoeck D, Clemens JD, Nyamete A, Mahoney RT (2005). "Policymakers' views regarding the introduction of new-generation vaccines against typhoid fever, shigellosis and cholera in Asia". Vaccine. 23 (21): 2762–2774. doi:10.1016/j.vaccine.2004.11.044. PMID 15780724.
- Choe, Chongwoo; Raschky, Paul A. (January 2016). "Media, institutions, and government action: prevention vs. palliation in the time of cholera" (PDF). Eur. J. Political Econ. 41: 75–93. doi:10.1016/j.ejpoleco.2015.11.001.
- Pruyt, Eric (26 July 2009). "Cholera in Zimbabwe" (PDF). Delft University of Technology. Archived from the original (PDF) on 20 October 2013.
- Kapp C (February 2009). "Zimbabwe's humanitarian crisis worsens". Lancet. 373 (9662): 447. doi:10.1016/S0140-6736(09)60151-3. PMID 19205080. Archived from the original on 2009-08-27.
- Brown, Man and Music, 430–32; Holden, 371; Warrack, Tchaikovsky, 269–270.
- David Brown, Early Years, 46.
- Holden, 23.
- Brown, Man and Music, 431–35; Holden, 373–400.
- Orata, Fabini; Keim, Paul S.; Boucher, Yan (April 2014). "The 2010 Cholera Outbreak in Haiti: How Science solved a Controversy". PLOS Pathog. 10 (4).
- Asimov, Isaac (1982), Asimov's Biographical Encyclopedia of Science and Technology (2nd rev. ed.), Doubleday
- Susan Nagel, Marie Thérèse: Child of Terror, p. 349-350.
- Haynes SW (1997). James K. Polk and the Expansionist Impulse. New York: Longman. p. 191. ISBN 978-0-673-99001-3.
- Smith, Rupert, The Utility of Force, Penguin Books, 2006, page 57
- The Singapore Free Press and Mercantile Advertiser, 25 March 1893, Page 2 Archived 8 August 2014 at the Wayback Machine
| https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Cholera | 21
77 | Factors Of Production Worksheet
Posted in Worksheet, by Kimberly R. Foreman
After reviewing the factors of production notes on the back of this paper, complete the following activities. Read each item and decide which of the four factors of production it would be classified as: land, labor, capital, or entrepreneur (items include school, money, books, teachers and principal). Economics factors of production graphic organizer: this worksheet is a graphic organizer for the four factors of production: land, labor, capital, and entrepreneurship.
List of Factors Of Production Worksheet
In each box, students are to define each factor of production and give four examples of each. Identify the factor of production (land, labor, capital, or entrepreneurship) for each of the following: school property (land), money (capital), computers (capital), desks (capital), teachers (labor), custodians (labor), books (capital), smart board (capital), cafeteria workers (labor), chairs (capital), school bus (capital), gasoline (land), principal (entrepreneurship), dry erase boards (capital).
Fill out the applying economics worksheet on the factors of production: pick a type of business (e.g. gas station, restaurant, library, book store, pizza shop, ice cream store). Using the four factors of production, make a list of the things you would need for each of the factors for the business you picked.
1. 4 Factors Production Land Labour Capital
They brainstorm and discuss what went into making a candy bar, then create a timeline of the creation of the candy bar. The four factors of production in making pencils are land, labor, capital and entrepreneurship: the land I need is trees and metal ore, because they are natural resources.
My labor was myself and my brother; we did all the work of putting the pencils together, cutting the wood and shaping it into rods. Some of the worksheets for this concept are: characteristics of the entrepreneur four corners activity, market for factors of production, factors of production work, factors of production, section understanding business activity, economics principles and practices reteaching activities, name class date taken total possible marks managing, chapter.
2. Nutrient Cycles Worksheet Answers Cycles Worksheet
Word document file economics factors of production graphic organizer this worksheet is a graphic organizer for the four factors of production, , capital, and. in each box, students are to define each factor of production and give four examples of each.
The four factors of production for teachers tenth graders analyze and discuss the four factors of production land, labor, capital, and entrepreneurship. they brainstorm and discuss what went into making a candy bar, then create a timeline of the creation of the candy bar.
3. High School Economics Entrepreneur Application Worksheet
Four production factors displaying top worksheets found for four production factors. some of the worksheets for this concept are factors of production work, factors of production and economic decision making, factors of production, resources or factors of production are scarce, lesson plan file, four first steps for new entrepreneurs, finding factors, unit human population Use this combo to help you test your understanding of the four factors of production.
of the several topics be assessed on, two include the physical effort of people to. Showing top worksheets in the category factors of production. some of the worksheets displayed are factors of production, market for factors of production, factors of production work, lesson plan file, essential questions economics section scarcity and, introduction to economics, unit basic economic concepts, unit basic economic concepts.
4. Integer Worksheet Add Subtract Multiply Divide Order
Geometry. Mat module compound interest worksheet (accessible). Infinite interest. South Mountain Community College. Simple interest: I = P × r × t. Compound interest: A = P × (1 + r)^t, where P is the principal, r is the rate (remember to turn the percentage into a decimal) and t is the time, always expressed in years (for example, convert months into a fraction of a year).
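A short, self-contained example (added for illustration; the figures are hypothetical, not taken from the worksheet) showing how the two formulas behave in code:

```python
# Simple vs. compound interest, following the formulas above.
# r must be a decimal (6% -> 0.06); t is in years (9 months -> 0.75).
def simple_interest(principal: float, rate: float, years: float) -> float:
    """Interest earned: I = P * r * t."""
    return principal * rate * years

def compound_amount(principal: float, rate: float, years: float, periods_per_year: int = 1) -> float:
    """Balance after t years: A = P * (1 + r/n) ** (n * t)."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

print(simple_interest(1000.0, 0.06, 0.75))      # 45.0 (interest only)
print(compound_amount(1000.0, 0.06, 0.75, 12))  # ~1045.9 (balance, monthly compounding)
```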
5. Labour Wages Economics Lessons Economics Notes
Factors of production land is the environmental resource. it includes all natural resources land including anything that grows on or below the soil, water, air, and wildlife. some natural resources are plentiful, while others are scarce. resources are classified as renewable and nonrenewable.
Factor market basics. income from factors of production factors of production factor income is income earned from owning and selling factors of production wages earned from working in labor market. interest earned by renting capital. Factors of production online worksheet for grade.
6. Magic School Bus Worksheet Magic School
Students explore how insulation works to keep in body heat. grades. ,. quick links to lesson materials item. book. teach this lesson. field trip notes. when discovers that his cocoa is cold, he demands to know where the hot went. in response, ms. frizzle whisks the class to.
Displaying top worksheets found for magic school bus gets planted answer key. some of the worksheets for this concept are the magic school bus answers questions, the magic school bus answers questions, magic school bus gets programmed work, the magic school bus answers questions, the magic school bus answers questions, the magic school bus answers The magic school bus kicks up a storm science activity.
7. Makeup Age Images Economics Lessons
A commercial aircraft is a. Wizer. me free interactive worksheet factors of production by teacher. parents teachers schools about log in join now. go factors of production worksheet grades. grade. subjects. other. copy this worksheet to your account, edit and use it with your students.
Factors of production land labor capital physical capital human capital e la. use multiple strategies to develop vocabulary. ss. e. identify factors of production and their necessity to making goods and services. ss. e. understand the concepts relevant to the national economy.
8. Microeconomics Export Restrictions
Feb, worksheet that includes questions on the factors of production land, labor, capital, and enterprise. the worksheet is leveled with green and orange chilies, so that students can choose questions that are best suited to their ability. Jul, the factors of production are the resources used in creating and producing a good or service and are the building blocks of an economy.
the factors of production Nov, the factors of production are the set of three basic resources used to produce goods or services in order to generate profits. dietary fats total fat and fatty acids from factors of production worksheet answers you will need to comprehend how to project cash flow.
9. Multiplication Integers Worksheet Multiplication
For additional practice, or for another approach to understanding multiplication of integers, you may pass out and complete the multiplying integers worksheet with the bags of stones on it. the product of two integers with different signs is negative.
10. Periodic Table Puzzle Worksheet Answers Periodic
May, periodic table puzzle worksheet answers key may,. share this post twitter google. posts related to periodic table puzzle worksheet answers key. worksheet puzzle answer key worksheet periodic table puzzle answers. periodic table answers to puzzle worksheet.
Periodic table puzzle worksheet answer key brokeasshome. com periodic table puzzle worksheet answer key. , leave a comment views. periodic table puzzle worksheet for solved name periodic table puzzle printable crossword the elements chemistry elements word search puzzles.
Sep, in the mean time we talk related with periodic table puns worksheet answers, scroll the page to see some similar images to complete your ideas. alien periodic table answer key, periodic table worksheet answer key and the periodic table elements and atoms worksheet answers are three of main things we want to show you based on the gallery title.
11. Factors Production Worksheet Scarcity Review Worksheet
K w. j b r a g. z h in pa x ww. m worksheet by software software infinite algebra name factoring a date grade factoring worksheets including factoring numbers up to, factoring numbers up to and prime factor trees. worksheets with no login required. Worksheet by software analytic geometry unit mixed factoring review name id date period q glxlwcl. v l arbepsferrnvbesdl. factor each completely. . O worksheet by software ocm. nngaltl. q algebra mixed review of factoring factor the common factor out of each expression.
12. Photosynthesis Diagrams Worksheet Answers Luxury Synthesis
Student id bio fall lecture worksheet photosynthesis how do and reactions provide food for a plant read this plants are the original solar panels. through photosynthesis a plant is able to convert electromagnetic light energy into chemical energy.
13. Polarity Worksheet Answers Work Work
Once you find your worksheet, click on icon or print icon to worksheet to print or download. Ecological principles are basic assumptions or beliefs about ecosystems and how they function that are informed by the ecological concepts. ecological principles use ecological concepts which are understood to be true to draw key conclusions that can then guide human applications section Ecology review worksheet answers science worksheets biology worksheet worksheets source www.
pinterest. com ecology review worksheet answers worksheet resource principles of ecology biology pages. ecology vocabulary preview worksheet ecology. and virtual lab in canvas. ecology lecture guide. whats for dinner worksheet. food chain activity.
14. Printable Education Worksheet Templates
ComFeb, three worksheets with practice sketching and identifying trig graphs. transformations are amplitude, period, vertical translation and combinations of these. no phase shift may add later. answers included. View transformations worksheet answers.
15. Production Possibilities Curve Show Factors
The line on a production possibilities curve showing the relative amounts of two types of goods produced using all resources is called the. production possibilities frontier. Dec, omegas production possibilities curve is given by p l. k. where l is the size of the labor force people and k is the number of capital goods which is.
have the students answer the following questions what is the maximum number of fish that can be produced call this number f. Production possibilities curve as a model of a economy. lesson summary opportunity cost and the. practice opportunity cost and the.
this is the currently selected item. next lesson. comparative advantage and the gains from trade. This product includes a worksheet that can be used to reinforce or review the production possibilities curve. a total of questions are included. this product is also included in my production possibilities curve doodle notes packet.
16. Scarcity Factors Production Circular Flow
Absolute and comparative advantage worksheet over the next few slides, we are going to take a brief of the information that is presented to you on your quiz. please be sure to pay close attention, as you will be challenged on an actual quiz tomorrow over this information.
Globalization, comparative advantage, economic growth, exchange rates, and other international topics. lessons and focus on the and its role in the global economy. lesson ten basic questions about globalization focuses on the history, impact and future implications of living in a globalized economic system.
17. Secondary Industries Factors Industrial
School c. money c. books c. Factors of after reviewing the factors of production notes on the back of this paper, complete the following activities. read each item and decide which of the four factors of production it would be classified as land. labor.
c capital. Classifying factors of production. i teach economics to kids in. this is a worksheet i designed to help them understand the different items that can be classified as land, labor or capital. in particular, that land can actually be the natural resource water.
18. Uploads 1 1 8 9
Hope someone finds Production possibilities graph. g, a point of. c, d, e, f, a, b, s. efficiency efficiency means using resources in such a way as to maximize the production of goods and services. an economy producing output levels on the production possibilities frontier is operating efficiently.
Showing top worksheets in the category four factors of production. some of the worksheets displayed are characteristics of the entrepreneur four corners activity, market for factors of production, factors of production work, factors of production, section understanding business activity, economics principles and practices reteaching activities, name class date taken total possible.
19. Federal Reserve Worksheet Problem Sets Economics
Start studying crash course monetary policy and the federal reserve. This quiz and worksheet combo can help gauge your knowledge of monetary and fiscal policies and how they differ. you will be quizzed on roles of the federal reserve, as well as characteristics of.
May, worksheet may, any potential investor should consider a monetary policy worksheet to help guide his investment decisions. a financial expert can assist in making a decision but it is up to the investor to learn all he can about how financial markets work.
20. Factors Production Worksheet Business
These are helpful for students in doing homework or preparing for the hour factors of production packaging customer satisfaction capital factors money used to buy resources and technology to make the product once the products are produced they must be packaged for distribution.
21. Anchor Chart Intro Goods Services
Packaging can be made with plastic or of production are the resources the economy has available to produce goods and services. labor is the human effort that can be applied to the production of goods and services. labors contribution to an output of goods and services can be increased either by increasing the quantity of labor or by increasing human capital.
22. Economics Business Top 7 Differences Learn
Desert flower worksheet. economic assignments. sitemap. desert flower worksheet investigation identify the factors of production for medicinal flowers. land land allocated to growing the plants. labor workers that are trained to carefully remove the flowers.
23. Blank Vocabulary Worksheet Template Unique Work Sheet
Jun, the templates, much like the budget worksheets, have the skeletons that guide you on your way. they basically grant you the spaces wherein to enter the data and interpret the same thereafter. just download them and set your budget planning rolling well.
just to reiterate an earlier point, budget planning is an undertaking that. Apr, monthly budget templates and worksheets. download. smartsheet. com. monthly budget templates and worksheets. download. final thoughts. a monthly budget template is not only practical but also liberating.
24. Choosing College Worksheet Decision Making
Here are some things to consider carefully academic programs. if you know what you want to study, its important to choose a school that has a strong program this activity you will access the website listed on the choosing a college major worksheet, read the articles listed, and complete the choosing a college major worksheet.
25. Choosing College Worksheet Finding Passions
When considering whether a school fits you, think about size, location, distance from home, programs of study, extracurricular Choosing a career path. career exploration and decisions are important, but then we have to build a plan to get us where we want to go.
26. Classifying Matter Worksheet Answer Key Inspirational
In this classification of matter worksheet, learners answer questions about solids, liquids and gases, types of mixtures and compounds vs. elements. they also answer questions about measurement and question about a. This classification of matter worksheet is suitable for grade.
27. Classifying Matter Worksheet Answer Key Unique
Then what is the mass the mass is the amount of matter in an Displaying top worksheets found for classifying matter for first grade. some of the worksheets for this concept are work classification of matter name, instructional unit grade, first grade science in scope and sequence, unit taxonomy and classification, unit living things, name matter crossword, why does matter matter, topic elements compounds and.
28. Created Worksheets Teach Economics
It is the foundation for much of what is studied in the field and understanding how supply and demand affect the economy can help us to recognize economics everywhere in our daily lives. the classical economist j. r. is attributed. Understand the law of supply and demand.
29. Economic Systems Worksheet Simplifying
Looking closer to the constitution assignment worksheet download constitutionworksheet. doc file size file type doc download file. the structure of government branches chart worksheet. worksheet download thestructureofgovernmentbrancheschartworksheet.
30. Economics Factors Production Worksheet Notes
Factors worksheets printable factors and multiples worksheets. here is a graphic preview for all of the factors worksheets. you can select different variables to customize these factors worksheets for your needs. the factors worksheets are randomly created and will never repeat so you have an endless supply of quality factors worksheets to use in the classroom or at home.
31. Factors Production Practice Worksheet Intro
32. Economics Mini Unit Grades
There are factors of production that influence economic growth within a country. natural resources available. investment in human capital. investment in capital goods. entrepreneurship the presence or absence of these factors determine the In economics, factors of production or productive inputs are the resources employed to produce goods and services.
33. Economics Scarcity Opportunity Factors Production
These can be categorised as these can be categorised as land all natural resources provided by nature such as fields, forests, oil, gas, metals and other mineral resources. This factor of production includes machinery, tools, equipment, buildings, and technology.
34. Economics Understanding Demand Student Simulation
Businesses must constantly upgrade their capital to maintain a competitive edge and operate efficiently. in the last couple decades or so, businesses have faced unprecedented technological change and have had to meet the demands of consumers whose lives.
35. Economics Worksheet Economics Economics Lessons Life
Notes goods and services. goods are physical objects like cars and computers. services are duties performed for people such as car repairs, and computer repairs. factors of production for goods and services to be produced, we need resources. these resources are called factors of production.
36. Economics Worksheet Middle School History Geography
Factors of production has factors of production can sometimes be a bit tricky for students. activities are effective tools to help students visualize and understand the link between land, labor and capital and the role that each factor plays in the production of goods.
37. Economics Worksheets Middle School Economics Worksheets
Activities can help review the Types of economic systems worksheet. in citizens own the most of the factors of production and make most of the economic decisions. the economy is primarily, focusing on manufacturing and commodities. businesses are largely privately owned and independent.
38. Elastic Inelastic Supply Demand Instructional Videos
, name the factors of production describe briefly how you think they are likely to be used in the production of jelly bean sweets a sweet manufacturer plans a major change in its methods of production with the introduction of greater automation machinery than before.
39. Worksheet Polarity Bonds Answers Fresh Chem
Displaying top worksheets found for polarity. some of the worksheets for this concept are b, bond and molecule b, practice b, mapping b, b of bonds, and solubility b, b. Help students understand how to draw structures, understand molecular geometry, and the polar nature of ionic and covalent bonds.
in this practice students the correct name for given chemical the structure for each the. In this molecules worksheet, students answer post lab questions about types of bonds, factors that determine polarity and molecular geometry. they calculate differences in atoms and determine the types of bonds between.
Rows worksheet factors of production student name date directions using the information. Factors of production. factors of production displaying top worksheets found for this concept. some of the worksheets for this concept are factors of production, market for factors of production, factors of production work, lesson plan file, essential questions economics section scarcity and, introduction to economics, unit basic economic concepts, unit basic economic concepts.
In general terms, factors of are the used to make things. e. g. these wonderful notes that you are reading required some of all four factors of to be made. land they were typed up in my apartment. labour the countless hours that i spent typing them were, of course,Tenth graders analyze and discuss the four factors of production land, labor, capital, and entrepreneurship. | http://andrewcannon.net/factors-of-production-worksheet/ | 21 |
63 | Wetlands are natural or artificial ecosystems where water is the main component controlling the environment and its flora and fauna (Ramsar Convention Secretariat, 2013). Water found in wetlands may be static or flowing, and its quality may range from fresh to salty. Wetlands are prevalent in areas where the water table is near the surface and where the habitat is submerged in shallow waters. These include coastal areas, estuaries, and areas around lakes and along rivers and streams. Wetlands provide provisioning, hydrological, geochemical and cultural benefits, which are commonly referred to as ecosystem services (Peh et al., 2013; Costanza et al., 1997; Davidson, 2014). They provide a variety of products and services, which include food such as fish and other aquatic animals; fresh water; and fibre and wood fuel. They also provide buffer zones and regulate floods and flow regimes in landscapes, thereby preventing flooding incidents (Kadykalo & Findlay, 2016). They act as carbon sinks, and thus help combat climate change (Patton et al., 2015), and act as valuable filters and pathogen removers in drinking water, which helps in maintaining good health (Wu et al., 2016). These areas are biodiversity hotspots, with different species of flora and fauna. Wetlands are generally attractive areas, and are often used for recreational, educational and spiritual activities. In Africa, wetlands make up about 16% of the total land (Koohafkan et al., 1998) and virtually support the livelihoods of the local communities living around them (Taylor et al., 1995).
In spite of the immense benefits of wetlands, in the pre-1960s there was little focus on their conservation, partly because of the low levels of environmental degradation at the time. Globally, the start of the 1960s marked a period of awareness about the need for wetland conservation, which eventually culminated in the Ramsar Convention on Wetlands and its adoption in 1975 (Ramsar Convention Secretariat, 2013). The adoption of the Ramsar Convention saw the listing of several wetlands as protected areas for conservation purposes.
However, wetlands have continued to face threats due to increased human activities, which have led to degradation and, in some instances, total disappearance globally (Durigon et al., 2012; Turner, 1991; Hettiarachchi et al., 2015; Barbier, 1993). Demographic growth, increasing poverty, unsustainable farming and rising economic stresses are the major contributors to wetland losses in Africa. These stressors are further exacerbated by increased drought spells (Schuijt, 2002). The East African region, where the great Lake Victoria Basin lies, has a number of wetlands that have been degraded in recent years (Majamba, 2004; Mombo et al., 2013). These wetlands are surrounded by people who are living below the poverty line (Van Dam et al., 2014).
Wetlands in Tanzania make up about 10% of the total land mass, and out of this, about 5.5% is categorized as Ramsar sites (Guidelines for Sustainable Management of Wetlands, 2014). The Masirori and Mara Bay wetlands are located in the Mara province, Tanzania, and form part of the Great Lakes region. This region has been marked as a very important area for terrestrial and freshwater biodiversity conservation (Crisman, 2001). In recent times, however, these wetlands have come under focus due to anthropogenic pressures (Sakane et al., 2011; Beuel et al., 2016; Hoffman et al., 2011). There is an urgent need to assess the negative impacts of economic activities within these wetlands to inform management strategies and policy decision-making. At the time this study was undertaken, there was no published study on the negative impacts of economic activities on wetlands in East Africa, and in particular on the Mara Bay and Masirori wetlands.
This research was aimed at making a contribution towards the sustainable utilisation and management of wetlands, using the Mara Bay and Masirori Wetlands as case studies. In order to do this, it was important to engage with the people living off these wetlands and understand the pertinent issues they face. The specific objectives of the research were to: 1) identify the key economic activities undertaken; 2) investigate the main goods harnessed; 3) assess the level of dependency of the locals on the wetlands; 4) identify the negative impacts of the income-generating activities on the wetlands; and 5) assess the level of environmental knowledge of the locals. The findings from the research would then be used to devise appropriate strategies for wetland conservation.
2. Materials and Methods
2.1. Study Area
Masirori and Mara Bay wetlands are part of the greater Mara River Basin which is one of the transboundary basins in East Africa. Mara Basin extends over an area of roughly 13,750 km2 from Masai Mara National Reserve on the Kenyan side into Tanzania through the Serengeti National Park before ending along the shores of Lake Victoria. The Mara River originates from the Mau escarpment in Kenya, flows through Masai-Mara and Serengeti National Parks and drains its water to Lake Victoria in Tanzania. Approximately two thirds of Mara Basin is in Kenya, and the rest is in Tanzania.
The Mara Bay and Masirori Wetlands lie on the southern part of the Mara River Basin (Figure 1) and cover an area of about 500 km2. The annual rainfall in the lower reaches of the Mara Basin, where the two wetlands are located, ranges from 500 to 800 mm. The two wetlands currently have no formal protection status and are only listed as "Important Bird and Biodiversity Areas." They are, however, protected under Tanzanian law by the Environmental Management Act of 2004, the Wildlife Conservation Act of 2009, and the Water Resources Management Act of 2009. The wetlands are known to have more than 20 plant species, 226 bird species, 14 fish species, and 30 species of terrestrial and semi-aquatic mammals.
Figure 1. Location map showing the Mara Bay and Masirori Wetlands.
This portion of the Mara River Basin in Tanzania administratively falls under Mara Province. The land in the area is government-owned and controlled. The economy of the area is marked by high rural poverty and is hinged on small-scale farming and fishing, with a majority of its populace relying on wetland products for economic sustenance and wellbeing. The Mara Bay and Masirori Wetlands are surrounded by 16 villages, of which 8 lie on the northern side, namely: Bisarwi, Kwibuse, Nyamerambaro, Nkerege, Kembwi, Marasibora, Nyanchabakenye and Surubu; and the remaining 8, namely: Kongoto, Buswahili, Kirumi, Wegero, Ryamisanga, Kwisaro, Kitasakwa and Bukabwa, lie on the southern side.
In this study, the Buswahili, Marasibora, Kongoto and Kirumi villages were selected because of their close proximity to Lake Victoria and the Mara River, and the relatively higher volumes of economic activities undertaken there. The Kongoto, Kitaramanka, Kiagata, Otegi, Musoma town, Kobwasa, Kirumi, Tarime and Ochuna markets were selected because of their close proximity to the four villages, and because they trade in the wetlands' goods. The study focused on key stakeholders who had immediate contact with the two wetlands, such as farmers, fishermen, mat-makers, fishmongers, herdsmen (those who harness wetland grass to feed their cows) and charcoal burners. Past studies (e.g. Dixon & Wood, 2003; Scoones, 1991) have singled out these occupations as the ones most intensively engaged in African wetlands.
2.2. Data Collection and Analysis
In order to propose management strategies for the sustainable use of the Mara Bay and Masirori Wetlands, it was imperative to identify and engage with key wetland users who work on the wetlands on a day-to-day basis for their livelihoods. In order to understand the management and conservation issues in the two wetlands, various data sources were used and many respondents, including the leaders of the water users' association and conservation officers, were interviewed, alongside the use of focus groups, observations and market survey tools and techniques. The qualitative interview approach is a recognised method in nature conservation studies (Devetak et al., 2010), whereas the market survey was an appropriate quantitative tool that generated answers that could be coded and analysed with parametric statistical methods, allowing generalized conclusions about the whole population under study to be made. The tools and techniques used in this study are explained further in the following subsections.
The questionnaires were administered face to face and the responses later transcribed into text by the interviewer. They were used to elicit both quantitative and qualitative information on how the respondents used the provisioning services to meet their livelihoods, and their perception and knowledge on wetlands.
The questionnaires contained both closed and open-ended questions. Closed-ended questions were mainly used to obtain data such as the wetland goods the respondents harnessed, the markets in which they opted to sell their wetland resource-based goods, the distance travelled and how they delivered the goods to the markets, and the time they spent harnessing these products from the wetlands and how frequently they did so. The amount of income the respondents obtained from selling the wetland goods and other alternative goods (if any) was also sought. This was important in order to understand the economic importance of these wetlands to the locals and to determine whether they had other activities they relied on and what they had to forego. The frequency of harvesting showed how reliant they were on the services provided by the wetlands.
Open-ended questions were used to gauge the respondents' understanding of the meaning of wetlands, the importance of wetlands to them and their families, and the role played by wetlands in ensuring the long-term well-being of communities living around them. In total, 116 people were interviewed during the study, consisting of 98 key wetland users, 14 water users' association leaders and 4 conservation officials. To ensure good representation and quality of information, a stratified random sampling process reflecting the socio-demographic characteristics of the wetland region was used, and the respondents were encouraged to seek clarification and/or additional information whenever necessary. Table 1 shows the key wetland users and the timing of their interviews:
Table 1. Key wetland users and timing of interview.
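As a rough sketch of the proportional stratified sampling described above (the strata and counts below are hypothetical illustrations, not the study's actual sampling frame):

```python
import pandas as pd

# Hypothetical sampling frame: occupation strata and their approximate sizes.
frame = pd.DataFrame({
    "occupation": ["farmer", "fisherman", "mat-maker", "fishmonger", "herdsman", "charcoal burner"],
    "population": [400, 250, 120, 150, 60, 40],
})

total_sample = 98  # number of key wetland users interviewed in the study

# Proportional allocation: each stratum's sample share mirrors its population share.
frame["sample_size"] = (
    frame["population"] / frame["population"].sum() * total_sample
).round().astype(int)

print(frame)
# Respondents would then be drawn at random within each stratum.
```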
Semi-structured interviews in this study leveraged the information already collected using questionnaires to obtain further detail on some important aspects, particularly how the respondents understood the environmental services provided by the two wetlands, and the challenges hindering wetland conservation efforts. The leaders of the water users' association, from both the northern and southern sides of the catchment, were interviewed during a capacity building workshop. Officials working for a local conservation group, BirdLife International Tanzania, were also interviewed.
A total of 14 water users’ association leaders, from both the northern and southern sides of the Mara, were interviewed using semi-structured interviews. The issues covered during the interviews included the utilisation of the two wetlands by the key wetland users and the levels of environmental knowledge amongst the wetland users. Four conservation officials who work with the locals in the two wetlands were also interviewed, and the issues covered included the challenges encountered in conserving the two wetlands and possible measures for mitigation.
Markets around the study areas where the goods from the wetlands were sold were surveyed and market representatives interviewed. The questionnaires administered to the traders and sellers in the market surveys were different from the ones administered to the key wetland users in that they focused only on the wetland goods brought to the market and their prices. A total of twenty traders from the eight markets covered were interviewed.
2.3. Data Analysis
The quantitative data on income were analysed using Excel®. The total income from each village, the income from each occupation, and the percentages of key wetland users in each category (based on the actual numbers of people) were calculated. One-way ANOVA was further used to check whether there were statistically significant differences across the villages in terms of income and income per occupation per person, and across the villages and occupations in terms of time spent. The objective of this analysis was to test the homogeneity of the treatments. This technique was preferred because it is a relatively robust procedure with respect to violations of the normality assumption (Kirk, 1995). Using an Excel® spreadsheet, the total number of commodities mentioned by the respondents was calculated, and the grouping of the commodities and their percentage distribution were calculated and visualised. The time spent in the wetlands was also analysed in Excel® to obtain the percentages across the villages.
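As an illustration of the comparison described above, the following is a minimal sketch of a one-way ANOVA on weekly income across the four villages, written in Python rather than Excel®; the file name survey_income.csv and its column names are assumptions made for the example, not the study's actual data set.

```python
import pandas as pd
from scipy import stats

# Hypothetical survey table with one row per respondent and the columns
# "village" and "weekly_income_tzs" (file and column names are assumptions).
data = pd.read_csv("survey_income.csv")

# One array of income observations per village
# (Buswahili, Kongoto, Kirumi, Marasibora).
groups = [g["weekly_income_tzs"].to_numpy() for _, g in data.groupby("village")]

f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# The study's decision rule: p < 0.05 indicates a significant difference
# in income between at least two of the villages.
if p_value < 0.05:
    print("Income differs significantly across the villages.")
else:
    print("No significant income difference across the villages.")
```

The same pattern can be applied to the time-per-day comparisons by swapping the income column for a time column and grouping by village or occupation.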
3. Results and Discussion
In total 98 wetland users drawn from the four villages: Buswahili (29), Kongoto (37), Kirumi (11) and Marasibora (21) were interviewed for this study. Out of these, 45% were males and 55% were females. The various occupations of these respondents are shown in Table 1. The level of education of the respondents was varied, with the majority (77%) having only primary school level education, while 4% and 17% had secondary school level education and no education, respectively. The ages of the respondents varied, with the oldest being 70 years and the youngest 19 years old. The greatest number of people interviewed were in the age group 30 - 39 years.
The statistics from the survey in the two wetlands reflected the national trends, and closely matched with the 2012 Tanzanian National Census which indicated that the total population had 48.7% males and 51.3% females, and showed that 81.7% and 14.4% had primary and secondary-level education respectively (The United Republic of Tanzania, 2012).
3.1. Commodities Harnessed from the Wetlands
The goods harnessed from the wetlands which support the major economic activities in the study area are shown in Table 1. The proportion of wetland users who were found to be involved in papyrus harvesting, food crop cultivation and fishing was 30%, 25% and 24%, respectively (Figure 2). The main food crops grown in the study area included maize, sorghum, millet, rice and watermelon. Charcoal/firewood and grass for livestock accounted for 12 and 7%, respectively (Figure 2).
Similar findings were obtained by Ajwang et al. (2016) in a similar catchment in the Lake Victoria Basin (Ombeyi wetland in Kenya), where papyrus was found to be the major commodity harnessed by the wetland users interviewed at 93.4%, followed by farming (76.1%), fishing (41.8%) and firewood (8.5%).
3.2. Income Generated from the Wetlands across the Four Villages
Figure 2. Percentage of key wetland users harnessing wetland commodities.
Table 2. Weekly income generated from the four villages.
1 USD = 2121.28 TZS; 31/12/2016.
The income generated from the four villages in the survey (Table 2) demonstrated the role the two wetlands play in meeting the economic needs of the respondents, who mostly live below the poverty line and utilise the two wetlands for economic sustenance. Income was spread differently across the villages. There were significant differences in income from charcoal (p < 0.05) across the four villages, while farming, fishing, mat-making, herding and fish mongering showed no significant differences (p > 0.05).
Data on income were used to identify how much money the respondents earned from direct sales of the wetlands’ goods. The data collected gave a clear indication of which wetland goods were traded in the markets or ended up there in one way or another. The income of the respondents represented weekly earnings per person. These findings reflected the national trends in occupations, with farmers having the highest income. The 2012 Tanzania National Census indicated that farming was the main occupation, engaging 62.1% of the population; livestock keeping (herding) and fishing were practised by 2.4% and 1.0% of the population, respectively, while other elementary occupations constituted 6.3% (The United Republic of Tanzania, 2012).
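As a small worked example of the currency conversion underlying the income figures, the sketch below converts a hypothetical weekly income from Tanzanian shillings to US dollars at the rate quoted under Table 2; the income value itself is purely illustrative.

```python
# Exchange rate quoted under Table 2: 1 USD = 2121.28 TZS (31/12/2016).
TZS_PER_USD = 2121.28

def tzs_to_usd(amount_tzs: float) -> float:
    """Convert a Tanzanian shilling amount into US dollars at the survey-date rate."""
    return amount_tzs / TZS_PER_USD

# Purely illustrative weekly earnings figure for one respondent.
weekly_income_tzs = 100_000
print(f"{weekly_income_tzs} TZS is about {tzs_to_usd(weekly_income_tzs):.2f} USD")
# prints: 100000 TZS is about 47.14 USD
```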
3.3. Time Spent by Respondents in Harnessing Wetland Resources
The average time spent by the respondents in carrying out their economic activities in the two wetlands is shown in Table 3. Time was used as an indicator of the locals’ dependency on the wetlands, the rationale being that a respondent spending an average of 5 hours per day in the wetland carrying out strenuous work would not be sufficiently productive in any other economic activity.
Table 3. Average time spent in wetlands across the four villages.
In Buswahili village, the charcoal burners interviewed spent the most time (an average of 8 hours/day) and the mat-makers the least (4 hours/day on average). The charcoal burners, undeterred by the lack of enforcement officials, were free to spend many hours in the wetlands. There were significant differences in the time spent within the village (p < 0.05), a consequence of the charcoal burners spending more time in the wetlands than other users. In Kirumi village, mat-making, which is labour intensive, took the most time (6 hours/day on average) and fish mongering, which is less practised in the village, took the least (5 hours/day on average); there was no significant difference in the time spent across the village (p > 0.05), with all wetland users spending broadly similar hours in the wetlands. In Kongoto village, herdsmen spent the most time, averaging 8 hours/day because of the vast areas covered while herding, and the mat-makers the least, averaging 4 hours/day; there was a significant difference in the time spent among the respondents (p < 0.05), reflecting the gap between the herdsmen and the other users. In Marasibora village, fishmongers spent the most time at an average of 6 hours/day, while charcoal burning, which is less practised in the village, took the least at an average of 3 hours/day; there was no significant difference in the time spent across this village (p > 0.05), which was attributed to similar working hours in the wetlands. Having spent a minimum of about 5 hours per day in the wetlands, the respondents did not engage much in other economic activities, and any other economic activity was supplementary in nature. Overall, based on the amount of time the respondents spent on the wetlands, they were classified as being very economically dependent on the two wetlands.
3.4. Level of Knowledge on Wetland Ecosystem Services amongst the Respondents
It is expected that some knowledge of wetland ecosystem services, or general environmental education, helps communities in wetland conservation. In this study, only 6% of those interviewed regarded themselves as having considerable knowledge of wetland ecosystem services, while the rest (94%) thought that they lacked such knowledge. Hence the minimal knowledge of wetland ecosystem services exhibited by the respondents is likely to be a catalyst for activities that negatively affect the wetlands.
3.5. Wetland Management and Conflicts
Using a semi-structured interview, views were sought from officials of the water users’ association and a local conservation agency on the wetland management and conflicts. It emerged that village government councils are charged with the management and utilisation of wetlands in their jurisdictions. More often, these officials allotted sections of the wetlands to various villagers to undertake various economic activities without due regard to conservation. As explained earlier, the majority of the residents living around the wetlands rely on them for economic sustenance.
Other issues that were raised were conflicts and illegal activities within the wetlands. Clashes between herders and farmers who engaged in cropping were common. The latter often accused the former of letting loose their livestock on their crops. There were also illegal activities reported, for instance unsustainable fishing and destruction of biodiversity. These illegal activities were fuelled by market forces and poor enforcement by village government officials. Other activities that threatened the sustainability of the two wetlands include unsustainable harvesting of papyrus, use of synthetic fertilisers in farming and the use of biomass for charcoal production.
The locals over-rely on the papyrus for mat-making. Papyrus mats are cheap and cost-effective to make, and hence preferred by the locals. However, the money raised from sale of mats is low, and this drives the locals to use more papyrus which leads to unsustainable use. In a bid to improve their yields due to deteriorating soils, the farmers apply synthetic fertilisers which the conservation officials reported to be impacting the Mara River and Lake Victoria.
3.6. Negative Impacts on Wetlands and Proposed Mitigation Strategies
Mara Bay and Masirori wetlands are of great importance economically to the residents of the Mara region based on the dependency of the local communities on the two wetlands for economic sustenance. However, as shown by Davidson (2014), the anthropogenic activities that are carried out in wetlands in Africa due to reliance on these resources, oftentimes have negative ramifications on them. Studies undertaken on the wetlands within the Great Lakes region show that conversion of wetlands to farm lands, fuel wood and charcoal harnessing, river flow modification, poverty and weak translation of management policies are some of the major issues which afflict most wetlands (Brooks & Thompson, 2001).
From the study of the two wetlands, issues arising from income-generating activities for the markets were highlighted. These activities contributed to the destruction of the wetlands as the locals went about harnessing resources for the local markets. The results of this study support the findings of Mombo et al. (2011), which showed that the Mara wetlands are a source of economic sustenance for the locals who rely on them for income, food and water. This was indicated by the time the villagers spend in the wetlands harnessing the goods.
As in many African wetlands facing pressures from farming (Van Dam et al., 2014; Barbier, 1993), the findings from the field study and reports from the conservation officials in the area showed that parts of the Mara Bay and Masirori wetlands were used as farmland (for instance a maize farm, as shown in Figure 3). Conversion of the two wetlands into farmland is rampant because agriculture is the economic backbone of the region and a source of income for many communities in Tanzania (Mombo et al., 2011). In agreement with the findings of Ostrovskaya et al. (2013), poor management and enforcement of policies is the major cause of ecological destruction in the two wetlands. In this study, farming was found to be spread across the four villages, with Kongoto village generating the highest income at a total of US $ 516. The residents are allotted farmland inside the wetlands by government officials who have no training in environmental matters.
Figure 3. A maize plantation in the wetland.
As shown by similar studies undertaken by Musamba et al. (2011), the current study identified livestock keeping as a major economic activity in the Mara region. The residents in the region keep livestock mostly as a financial back-up and sell them to generate quick cash during financial emergencies. Herding was highest in Kongoto village, where the total income was US $ 345. There was no significant difference in the income generated from herding across the villages. Use of wetlands as grazing lands is an issue in most African wetlands (Musamba et al., 2011) because of the damage caused by hooves to the top soil. This was equally verified in the two wetlands by the present study, and by the concerns raised by the conservation officials in the area. The study concurred with Dessu and Melesse (2013) and Mango et al. (2011) that one of the main reasons the communities turn to the wetlands for grazing land is the increased population in the region. With the increase in population and reduced grazing sites, the locals turn to the wetlands for pasture and the watering of livestock, consequently increasing the pressure on the two wetlands.
Charcoal burning is a major threat to African wetlands because of the destruction of vegetation, which is vital for maintaining the wetlands’ regulatory functions in sequestering carbon (Beuel et al., 2016). Charcoal production in the two wetlands was driven by the need for cheaper fuel among local communities who could not afford or access gas or electricity. In Tanzania, the main sources of energy for cooking are firewood and charcoal, at 68.5% and 25.7% respectively. Charcoal burning was very attractive to the locals, as can be seen in Figure 4.
Figure 4. Charcoal being transported to the market using a bicycle.
Papyrus is a major source of income for many locals in African wetlands (Van Dam et al., 2014). It is quickly processed into mats, which are popular in the local markets due to their low prices and practicality in almost any household setting (Figure 5). In this study, it emerged from the different sources of information that papyrus harvesting from the two wetlands was happening rapidly and in an unsustainable manner due to low mat prices in the markets. This leads to the loss of papyrus and of habitat for various herbivorous species such as the sitatunga (Tragelaphus spekii) and the hippo (Hippopotamus amphibius). Among the respondents interviewed, mat-makers were the majority at 31%, yet they were the group with the lowest income generated, at only a total of US $ 53 per group/week in Marasibora village.
Fishing was also found to be popular amongst the respondents interviewed, with 25% of them being fishermen. Fishing is a major source of income and food for many African communities living around lakes and wetlands (Koohafkan et al., 1998). However, the challenge with fishing in the African context is ensuring that it is done in a sustainable manner. In many areas of Africa, smaller-meshed nets are used in order to capture more fish; this leads to the capture of immature fish and, in some instances, unhatched eggs, and is usually driven by the desire of fishermen and fishmongers for more sales in open markets (Figure 6). Fishing income showed no significant difference across the four villages, as the fishermen enjoyed similar prices across the two study areas.
Figure 5. A resident using papyrus to make mats for sale.
Figure 6. A fish trader serves customers at Kirumi market.
An economic activity which was only identified during the field work, and which had not been anticipated, was brick-making. In this activity the top soil is used to bake clay into bricks, and this poses a challenge for the management of wetlands in the region. Wetland soils are known to sequester carbon (Chmura et al., 2003), which is released during the baking process; the process therefore not only releases carbon back into the atmosphere but also destroys the chemical and biological composition of the wetland soils. Brick-making was seen scattered around the study area, and the conservation officials indicated that a stack of 20 bricks would retail for US $ 1. While mud bricks are much cheaper than conventional bricks chiselled out of stone, the local residents often sell them in the hundreds in order to make more money.
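As a rough worked example of the figures above, the sketch below estimates the income from selling mud bricks at the quoted rate of US $ 1 per stack of 20; only the per-stack price comes from the study, and the number of bricks sold is hypothetical.

```python
# Price reported by the conservation officials: a stack of 20 bricks retails
# for about US $1. The quantity sold below is a hypothetical illustration.
BRICKS_PER_STACK = 20
USD_PER_STACK = 1.0

def brick_revenue_usd(bricks_sold: int) -> float:
    """Approximate income in US dollars from selling a given number of bricks."""
    return (bricks_sold / BRICKS_PER_STACK) * USD_PER_STACK

print(brick_revenue_usd(400))   # selling 400 bricks (20 stacks) earns about 20.0 USD
```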
Consequences on the provisioning services
The factors discussed above have a number of impacts on the ecosystem services of the two wetlands. Firstly, they lead to a reduction in the provisioning capabilities of the two wetlands. Davidson (2014) argues that uncontrolled harvesting results in a slow reduction in the capacity of wetlands to provide ecosystem services. In this study, economic activities such as fishing and mat-making using papyrus (papyrus provides shelter for young fish) were found to be undertaken in an unsustainable manner. This will eventually reduce the provisioning services of the two wetlands, implying that in the near future the two wetlands will not be able to provide these resources fully without facing further pressure.
The vegetation in wetlands acts as a carbon sink (Beuel et al., 2016), enabling the wetlands to provide regulating services. The destruction of trees and other vegetation was found to be happening as a result of charcoal burning; this economic activity is therefore reducing the ability of the two wetlands to regulate climate. The wetlands’ regulatory ability is further reduced through diminished storm and flood control resulting from the loss of papyrus (MEA―Millennium Ecosystem Assessment, 2005).
Previous studies have shown that wetlands help in the accumulation of organic matter and sediment retention during soil formation (Davidson, 2014). Soil formation and nutrient cycling are the essential supporting services wetlands provide (MEA―Millennium Ecosystem Assessment, 2005). Wetlands have also been found to store more carbon in the soils than even rainforests. Soils found in wetlands help with the regulatory and supporting services, and this shows how cross-cutting the wetland ecosystem services can be. The activity of using the wetlands as grazing lands also has negative effects on the top soil in the two wetlands. The hooves of livestock loosen the top soil (Ajwang et al., 2016), making it vulnerable to erosion during rain events leading to not only loss of nutrients, but also turbidity in the wetland waters as well as Lake Victoria.
The two wetlands carry a lot of biodiversity and are sanctuaries and hotspots for endangered birds (Kassenga, 1997; Sritharan & Burgess, 2012). These wetlands have aesthetic and educational value and provide cultural services in the region. The destruction of biodiversity in these wetlands during various income-generating activities leads to a slow decline in the cultural services offered by the two wetlands. Perhaps more troubling is the possibility that some of this biodiversity could become endangered or extinct in the near future.
Conversion of wetlands into farmland has caused massive losses of wetlands (Ajwang et al., 2016), and thus a reduction in the ecosystem services they provide. In the current study, conversion of sections of the wetlands to farmland was found to be on the increase. This was driven by the region’s economic reliance on agriculture and by uninformed allotment by government officials. Conversion into farmland is therefore viewed in this study as the biggest threat, because farmers use synthetic fertilisers which are washed into the waters and soils, affecting biodiversity, and this may lead to the eventual loss of the two wetlands.
Studies by Davidson (2014) and Ajwang et al. (2016) attribute total wetland losses to unsustainable anthropogenic forces aided by improper management. The present study also argues, based on the information from the survey and the documented impacts, that the long-term existence of the two wetlands is endangered by the presence of markets around the region, and that they may soon cease to function properly unless quick interventions are put in place to help mitigate these impacts.
Proposed mitigation strategies
The study therefore proposed different strategies to help mitigate the destruction taking place in the two wetlands. These strategies were adapted from Turner (1991) and Davis and Froend (1999), and from the ideas of the conservation officials and water users’ association leaders. They included: improved environmental education; attachment of wetland ecologists or conservation professionals to the village governments; valuation of the Mara wetlands resources and the commissioning of a price and resource regulatory board in the Mara wetlands basin; and the creation of a wetlands monitoring programme and a fining regime.
The main objective of this study was to help highlight the negative impacts of economic activities on wetlands in East Africa with a specific focus on Mara Bay and Masirori wetlands. This was achieved by highlighting the findings obtained from the surveys. Most of the findings from this study were found to be similar to those obtained from other studies carried out in the Lake Victoria basin. The lack of understanding of ecosystem services by the respondents from the four villages was thought to be the cause of the propagation of harmful activities on the two wetlands by the locals. Therefore, urgent intervention is needed to help reverse the damage that is already happening, and to find alternative means of livelihoods for the locals. This is anticipated to help create a scenario in which the management process will be beneficial to both the locals and the threatened wetlands.
The findings, however, highlight the plight of most wetlands in East Africa and show how they are on a path to further destruction unless appropriate interventions are put in place. The findings also highlight the low economic conditions of most locals living around these wetlands, and the effect this can have on conservation efforts. Future research in the region could expand the study area and engage more respondents to obtain additional responses which may be used to improve the research. Studies should also be done to model an economic scenario in which the locals are kept away from the wetlands entirely, together with a break-even scenario that benefits the locals, in order to achieve conservation as well as improve local livelihoods.
The authors would like to thank Birdlife Tanzania for the support accorded during the field work, and the IHE Delft and CQU Australia supervisors for their guidance and support in the development of this paper.
Ajwang’ Ondiek, R., Kitaka, N., & Omondi Oduor, S. (2016). Assessment of Provisioning and Cultural Ecosystem Services in Natural Wetlands and Rice Fields in Kano Floodplain, Kenya. Ecosystem Services, 21, Part A, 166-173. https://doi.org/10.1016/j.ecoser.2016.08.008
Barbier, E. B. (1993). Sustainable Use of Wetlands Valuing Tropical Wetland Benefits: Economic Methodologies and Applications. The Geographical Journal, 159, 22-32.
Beuel, S., Alvarez, M., Amler, E., Behn, K., Kotze, D., Kreye, C., & Becker, M. (2016). A Rapid Assessment of Anthropogenic Disturbances in East African Wetlands. Ecological Indicators, 67, 684-692. https://doi.org/10.1016/j.ecolind.2016.03.034
Chmura, G. L., Anisfeld, S. C., Cahoon, D. R., & Lynch, J. C. (2003). Global Carbon Sequestration in Tidal, Saline Wetland Soils. Global Biogeochemical Cycles, 17, 1111.
Costanza, R., D’Arge, R., De Groot, R., Farber, S., Grasso, M., Hannon, B., Limburg, K., Naeem, S., O’neill, R. V., Paruelo, J., Raskin, R. G., Sutton, P., & Van den Belt, M. (1997). The Value of the World’s Ecosystem Services and Natural Capital. Nature, 387, 253-260.
Davis, J. A., & Froend, R. (1999). Loss and Degradation of Wetlands in Southwestern Australia: Underlying Causes, Consequences and Solutions. Wetlands Ecology and Management, 7, 13-23.
Devetak, I., Glažar, S. A., & Vogrinc, J. (2010). The Role of Qualitative Research in Science Education. Eurasia Journal of Mathematics, Science & Technology Education, 6, 77-84.
Dixon, A. B., & Wood, A. P. (2003). Wetland Cultivation and Hydrological Management in Eastern Africa: Matching Community and Hydrological Needs through Sustainable Wetland Use. Natural Resources Forum, 27, 117-129. https://doi.org/10.1111/1477-8947.00047
Durigon, D., Hickey, G. M., & Kosoy, N. (2012). Assessing National Wetland Policies’ Portrayal of Wetlands: Public Resources or Private Goods? Ocean & Coastal Management, 58, 36-46.
Kassenga, G. R. (1997). A Descriptive Assessment of the Wetlands of the Lake Victoria Basin in Tanzania. Resources, Conservation and Recycling, 20, 127-141.
Koohafkan, P., Nachtergaele, F., & Antoine, J. (1998). Use of Agro-Ecological Zones and Resource Management Domains for Sustainable Management of African Wetlands. In Wetland Characterization and Classification for Sustainable Agricultural Development. Rome: FAO/SAFR.
Mango, L. M., Melesse, A. M., McClain, M. E., Gann, D., & Setegn, S. (2011). Land Use and Climate Change Impacts on the Hydrology of the Upper Mara River Basin, Kenya: Results of a Modeling Study to Support Better Resource Management. Hydrology and Earth System Sciences, 15, 2245. https://doi.org/10.5194/hess-15-2245-2011
Mombo, F. M., Speelman, S., Van Huylenbroeck, G., Hella, J., Pantaleo, M., & Moe, S. (2011). Ratification of the Ramsar Convention and Sustainable Wetlands Management: Situation Analysis of the Kilombero Valley Wetlands in Tanzania. Journal of Agricultural Extension and Rural Development, 3, 153-164.
Mombo, F., Speelman, S., Hella, J., & Van Huylenbroeck, G. (2013). How Characteristics of Wetlands Resource Users and Associated Institutions Influence the Sustainable Management of Wetlands in Tanzania. Land Use Policy, 35, 8-15.
Musamba, E. B., Ngaga, Y. M., Boon, E. K., & Giliba, R. A. (2011). Impact of Socio-Economic Activities around Lake Victoria: Land Use and Land Use Changes in Musoma Municipality, Tanzania. Journal of Human Ecology, 35, 143-154.
Ostrovskaya, E., Douven, W., Schwartz, K., Pataki, B., Mukuyu, P., & Kaggwa, R. C. (2013). Capacity for Sustainable Management of Wetlands: Lessons from the WETwin Project. Environmental Science & Policy, 34, 128-137.
Patton, D., Bergstrom, J. C., Moore, R., & Covich, A. P. (2015). Economic Value of Carbon Storage in US National Wildlife Refuge Wetland Ecosystems. Ecosystem Services, 16, 94-104.
Peh, K. S. H., Balmford, A., Bradbury, R. B., Brown, C., Butchart, S. H. M., Hughes, F. M. R., Birch, J. C. et al. (2013). TESSA: A Toolkit for Rapid Assessment of Ecosystem Services at Sites of Biodiversity Conservation Importance. Ecosystem Services, 5, 51-57.
Sakane, N., Alvarez, M., Becker, M., Böhme, B., Handa, C., Kamiri, H. W., Mogha, N. G. et al. (2011). Classification, Characterisation, and Use of Small Wetlands in East Africa. Wetlands, 31, 1103-1116. https://doi.org/10.1007/s13157-011-0221-4
Taylor, A. R. D., Howard, G. W., & Begg, G. W. (1995). Developing Wetland Inventories in Southern Africa: A Review. Classification and Inventory of the World’s Wetlands, 118, 57-79.
Van Dam, A., Kipkemboi, J., Mazvimavi, D., & Irvine, K. (2014). A Synthesis of Past, Current and Future Research for Protection and Management of Papyrus (Cyperus papyrus L.) Wetlands in Africa. Wetlands Ecology and Management, 22, 99-114.
Wu, S., Carvalho, P. N., Müller, J. A., Manoj, V. R., & Dong, R. (2016). Sanitation in Constructed Wetlands: A Review on the Removal of Human Pathogens and Fecal Indicators. Science of the Total Environment, 541, 8-22. https://doi.org/10.1016/j.scitotenv.2015.09.047
In chapter 10 Howard Zinn talks about the Civil War and the disadvantages and advantages between the poor and the rich. The poor have always been the bait in America, due to their lack of money and power. When war is in progress, most of the time the poor are the ones made to go to war, because the wealthy have the money and power to escape death. “To give people a choice between two different parties and allow them, in a period of rebellion, to choose the slightly more democratic one was an ingenious mode of control. Like so much in the American system, it was not devilishly contrived by some master plotters; it developed naturally out of the needs of the situation” (Zinn, 200).
Business owners made lots of money from the railroads because they were able to transport goods farther and faster with ease. Although the railroads tremendously impacted businesses and therefore the economy, the Native Americans were negatively impacted because the railroads were being laid on “their” land. This caused distrust between the settlers and the natives because of the “disrespect” for the land. Because of the new means of transportation, the industrial revolution took place, causing skilled artisans to be replaced by unskilled workers who used large, complex machines.
Trusts, or large monopolies, were corporations that combined and lowered their prices to drive competitors out of business. This infuriated many Americans at the time because it allowed such a small number of people to become wealthy, or even successful at all. When Theodore Roosevelt became president, he sympathized with workers, unlike most presidents in the past, who usually tried to help the corporations. As illustrated in Document A, Roosevelt wanted to hunt down the bad trusts and put a leash on the good ones in order to regulate them. However, this only had a limited effect because the government was unable to control the activity of banks and railroads, which were two of the most powerful industries in the world.
The business owners had more power than the politicians. Railroads had a huge impact on the United States, providing faster mobility and hundreds of jobs. The Gilded Age was when everything went corrupt: business people were paying off people in the government to get favors from them. “Gilded” meant shiny on the outside but not so shiny on the inside.
The Monetary Policy was issued to reduce the amount of money that was circulating in America. Many farmers found that money was hard to come by, while the government’s ongoing discussion of getting rid of silver and paper bills in exchange for gold threatened to limit the currency in America. A young man named William Jennings Bryan believed strongly that ‘bimetallism’, or “free silver”, would bring the nation to prosperity, as he expressed during his “Cross of Gold” speech given on July 9th, 1896. He stated that “[people of the government] we ought to declare in favor of international bimetallism and thereby declare that the gold standard is wrong and that the principles of bimetallism are better.” This clearly showed his standpoint on the currency issue of the time, namely that gold should not be made the sole currency of America. Many citizens saw the policy as an insurance that the value of money would rise.
Henry George was a critic of big business; since there were social problems, he blamed them on the few monopolists who grew wealthy as a result of rising land values. He proposed a single tax on land to replace other taxes, with the proceeds returned to the people. If this tax were adopted, it would destroy monopolies, distribute wealth, and eliminate poverty. Robber barons were the reason why people were being driven into poverty (DOC 1). The result of this was that Northern capitalists led the South away from agriculture and economic dependence, and used their wealth to further grow American industry.
The model is supposed to bring renewed prosperity to the United States but it brought more inequality and stripped safety net programs that actually helped most Americans. This lack of assistance means that struggling people are struggling even more and they have less money to spend and to put back into the economy. Since the creation of the Better Business Climate model, government spending on food stamps, unemployment insurance, and other social programs has been cut as
The Gilded Age was a time of economic growth as well as social changes in the United States. During this time there was rapid growth in industrialization, urbanization, and the rise of big business. However, the Progressive reformers did not like the way things were going. During the Gilded Age there were several presidents, such as Ulysses S. Grant, Grover Cleveland, and Rutherford B. Hayes, who were widely disliked by Americans. A lot of Americans did not want to come to terms with politicians who they felt would ruin the peace that was created after the Civil War.
These people wanted changes to the way factories were run. They faced opposition from other mill owners, who knew that reforms would cost them money and give workers more rights. The reformers successfully forced changes to the way workers were treated; these changes are now called the Factory Acts. The Factory Acts changed over time and increased the rights of men, women and children.
I discussed how neoliberalism caused a loss of state revenue, how it weakened the regulation of labor, and how it caused the discharging of employees and the decrease in wages. Another of neoliberalism’s negative effects is the increase in the price of food products, oil, fuel and other essential products. I also discussed people’s opinions regarding this issue and explained why I oppose their opinions. I gave evidence for why I think my opinion is right. The world started changing when neoliberalism was adopted.
During the period of 1870 to 1900, large corporations, such as the railway companies, grew significantly in size, number, and influence. The cause of this was the need for a new way of transportation; the demand was great, so the railways expanded all over the United States to meet it. These large corporations affected the economy by making it easier to pay for everyday needs, and affected politics by giving politicians too much power while giving ordinary people only limited power. The corporations had great power and influence, which made them a huge force in society. The economy was consistent in the United States during the 1870s, but as the years went on, large businesses were able to lower the cost of food, fuel and lighting.
Many political machines used this to be elected into government. They would find jobs and places to stay for immigrants; in exchange, the immigrants would have to vote the political machines into government. After a political machine boss got into government, he would put his friends into office. This made the government corrupt because the people in office did not do their jobs. Another kind of corruption was that people with monopolies would pay the government so they could do what they wanted or keep monopolizing.
The Citizens United ruling made by the Supreme Court in 2010 only made the issue of money ruling elections worse. Its main effects, as stated in the video, “paved the way” for big corporations or unions to spend as much money as they feel necessary on elections and the political process. They can utilize this ruling through advertisements, messages, and many different ways of communicating with potential and up-and-coming voters. It changed the way campaigns were carried out, not only by putting a bigger emphasis on the political spending of candidates and outside organizations, but also, in a sense, by undermining democracy, with the amount of money spent on a campaign being noticed more than the voices of the people. Voting does not really represent the country, but rather represents the rich and powerful of the country.
Alexander II made changes in the Russian government in order to put the country in a more stable economic situation. The negative side of those changes was that the working class were the ones caught up in the process: the amount of time required of the peasants and the high amount of money owed in debt were unfair to them. The upper class were unhappy with those changes because they were greedy in their way of thinking.
Transportation: A big portion of railroads and industrial supplies were destroyed over the course of the war. The South had begun rebuilding transportation by the nineteenth century.
West, political: Because of the trouble between white settlers and immigrants at that time, there were numerous outbreaks of violence and laws aimed at discrimination.
Social: Chinese immigrants who migrated to the West would work for considerably lower wages than normal, and in doing so caused tension with white settlers.
Economic: The West relied more on agriculture than any other place because it was the most efficient.
Seven Years' War
The Seven Years' War (1756–1763) was a global conflict, "a struggle for global primacy between Britain and France", which also had a major impact on the Spanish Empire. In Europe, the conflict arose from issues left unresolved by the War of the Austrian Succession (1740–1748), with Prussia seeking greater dominance and Austria wanting to regain Silesia, which Prussia had captured in the previous war. Long-standing colonial rivalries pitting Britain against France and Spain in North America and the Caribbean islands were fought on a grand scale, with consequential results. Britain, France and Spain fought both in Europe and overseas with land-based armies and naval forces, while Prussia sought territorial expansion in Europe and the consolidation of its power.
In a realignment of traditional alliances, known as the Diplomatic Revolution of 1756, Prussia became part of a coalition led by Britain, which also included long-time Prussian competitor Hanover. At the same time, Austria ended centuries of conflict by allying with France, along with Saxony, Sweden and Russia. Spain aligned formally with France in 1762 and unsuccessfully attempted to invade Britain's ally Portugal, with Spanish forces facing British troops in Iberia. Smaller German states either joined the Seven Years' War or supplied mercenaries to the parties involved in the conflict.
Anglo-French conflict over their colonies in North America had begun in 1754 in what became known in North America as the French and Indian War, a nine-year war that ended France's presence as a land power. It was "the most important event to occur in eighteenth-century North America". Spain entered the war in 1761, joining France in the Third Family Compact between the two Bourbon monarchies. The alliance with France was a disaster for Spain, with the loss to Britain of two major ports, Havana in the Caribbean and Manila in the Philippines, returned in the 1763 Treaty of Paris between France, Spain and Great Britain. In Europe the large-scale conflict that drew in most of the European powers was centered on Austria's desire to recover Silesia from Prussia. The Treaty of Hubertusburg ended the war between Saxony, Austria and Prussia, in 1763. Britain began its rise as the world's predominant colonial and naval power. For a time France's supremacy in Europe was halted until after the French Revolution and the emergence of Napoleon Bonaparte. Prussia confirmed its status as a great power, challenging Austria for dominance within the German states, thus altering the European balance of power.
What came to be known as the Seven Years' War (1756–1763) began as a conflict between Great Britain and France in 1754, when the British sought to expand into territory claimed by the French in North America. The war came to be known as the French and Indian War, with both the British and the French and their respective Native American allies fighting for control of territory. Hostilities were heightened when a British unit led by a 22-year-old Lt. Colonel George Washington ambushed a small French force at the Battle of Jumonville Glen on 28 May 1754. The conflict exploded across the colonial boundaries and extended to Britain's seizure of hundreds of French merchant ships at sea.
Prussia, a rising power, struggled with Austria for dominance within and outside the Holy Roman Empire in central Europe. In 1756, the four greatest powers "switched partners" so that Great Britain and Prussia were allied against France and Austria. Realizing that war was imminent, Prussia pre-emptively struck Saxony and quickly overran it. The result caused uproar across Europe. Because of Austria's alliance with France to recapture Silesia, which had been lost in the War of the Austrian Succession, Prussia formed an alliance with Britain. Reluctantly, by following the Imperial diet of the Holy Roman Empire, which declared war on Prussia on 17 January 1757, most of the states of the empire joined Austria's cause. The Anglo-Prussian alliance was joined by a few smaller German states within the empire (most notably the Electorate of Hanover but also Brunswick and Hesse-Kassel). Sweden, seeking to regain Pomerania (most of which had been lost to Prussia in previous wars) joined the coalition, seeing its chance when all the major continental powers of Europe opposed Prussia. Spain, bound by the Pacte de Famille, intervened on behalf of France and together they launched an unsuccessful invasion of Portugal in 1762. The Russian Empire was originally aligned with Austria, fearing Prussia's ambition on the Polish–Lithuanian Commonwealth, but switched sides upon the succession of Tsar Peter III in 1762.
Many middle and small powers in Europe, as in the previous wars, tried to steer clear of the escalating conflict, even though they had interests in the conflict or in the belligerents. Denmark–Norway, for instance, was close to being dragged into the war on France's side when Peter III became Russian emperor and switched sides; Dano-Norwegian and Russian armies were close to ending up in battle, but the Russian emperor was deposed before war formally broke out. The Dutch Republic, a long-time British ally, kept its neutrality intact, fearing the odds against Britain and Prussia in fighting the great powers of Europe, and even tried to prevent Britain's domination in India. Naples-Sicily and Savoy, although sympathetic to the Franco-Spanish alliance, declined to join the coalition for fear of British naval power. The taxation needed for the war caused the Russian people considerable hardship, coming on top of the taxation of salt and alcohol begun by Empress Elizabeth in 1759 to fund her additions to the Winter Palace. Like Sweden, Russia concluded a separate peace with Prussia.
The war ended with two separate treaties dealing with the two different theaters of war. The Treaty of Paris between France, Spain and Great Britain ended the war in North America and for overseas territories taken in the conflict. The 1763 Treaty of Hubertusburg ended the war between Saxony, Austria and Prussia.
The war was successful for Great Britain, which gained the bulk of New France in North America, Spanish Florida, some individual Caribbean islands in the West Indies, the colony of Senegal on the West African coast, and superiority over the French trading outposts on the Indian subcontinent. The Native American tribes were excluded from the settlement; a subsequent conflict, known as Pontiac's War, a small-scale war between the indigenous Odawa and the British in which the Odawa seized seven of the ten forts built or taken by the British to press for a fairer distribution of land among Britain's allies, was also unsuccessful in returning them to their pre-war status. In Europe, the war began disastrously for Prussia, but with a combination of good luck and successful strategy, King Frederick the Great managed to retrieve the Prussian position and retain the status quo ante bellum. Prussia solidified its position as a newer European great power. Although Austria failed to retrieve the territory of Silesia from Prussia (its original goal), its military prowess was also noted by the other powers. The involvement of Portugal and Sweden did not return them to their former status as great powers. France was deprived of many of its colonies and had saddled itself with heavy war debts that its inefficient financial system could barely handle. Spain lost Florida but gained French Louisiana and regained control of its colonies, e.g., Cuba and the Philippines, which had been captured by the British during the war.
The Seven Years' War was perhaps the first global war, taking place almost 160 years before World War I, known as the Great War before the outbreak of World War II, and globally influenced many later major events. Winston Churchill described the conflict as the "first world war". The war restructured not only the European political order, but also affected events all around the world, paving the way for the beginning of later British world supremacy in the 19th century, the rise of Prussia in Germany (eventually replacing Austria as the leading German state), the beginning of tensions in British North America, as well as a clear sign of France's revolutionary turmoil. It was characterized in Europe by sieges and the arson of towns as well as open battles with heavy losses.
In the historiography of some countries, the war is named after combatants in its respective theatres. In the present-day United States—at the time, the southern English-speaking British colonies in North America—the conflict is known as the French and Indian War (1754–1763). In English-speaking Canada—the balance of Britain's former North American colonies—it is called the Seven Years' War (1756–1763). In French-speaking Canada, it is known as La guerre de la Conquête (the War of the Conquest). Swedish historiography uses the name Pommerska kriget (The Pomeranian War), as the Sweden–Prussia conflict between 1757 and 1762 was limited to Pomerania in northern central Germany. The Third Silesian War involved Prussia and Austria (1756–1763). On the Indian subcontinent, the conflict is called the Third Carnatic War (1757–1763).
The war was described by Winston Churchill as the first "world war", although this label was also given to various earlier conflicts like the Eighty Years' War, the Thirty Years' War, the War of the Spanish Succession and the War of the Austrian Succession, and to later conflicts like the Napoleonic Wars. The term "Second Hundred Years' War" has been used in order to describe the almost continuous level of worldwide conflict between France and Great Britain during the entire 18th century, reminiscent of the Hundred Years' War of the 14th and 15th centuries.
In North America
The boundary between British and French possessions in North America was largely undefined in the 1750s. France had long claimed the entire Mississippi River basin. This was disputed by Britain. In the early 1750s the French began constructing a chain of forts in the Ohio River Valley to assert their claim and shield the Native American population from increasing British influence.
The British settlers along the coast were upset that French troops would now be close to the western borders of their colonies. They felt the French would encourage their tribal allies among the North American natives to attack them. Also, the British settlers wanted access to the fertile land of the Ohio River Valley for the new settlers that were flooding into the British colonies seeking farm land.
The most important French fort planned was intended to occupy a position at "the Forks" where the Allegheny and Monongahela Rivers meet to form the Ohio River (present-day Pittsburgh, Pennsylvania). Peaceful British attempts to halt this fort construction were unsuccessful, and the French proceeded to build the fort they named Fort Duquesne. British colonial militia from Virginia were then sent to drive them out. Led by George Washington, they ambushed a small French force at Jumonville Glen on 28 May 1754 killing ten, including commander Jumonville. The French retaliated by attacking Washington's army at Fort Necessity on 3 July 1754 and forced Washington to surrender. These were the first engagements of what would become the worldwide Seven Years' War.
News of this arrived in Europe, where Britain and France unsuccessfully attempted to negotiate a solution. The two nations eventually dispatched regular troops to North America to enforce their claims. The first British action was the assault on Acadia on 16 June 1755 in the Battle of Fort Beauséjour, which was immediately followed by their expulsion of the Acadians. In July British Major General Edward Braddock led about 2,000 army troops and provincial militia on an expedition to retake Fort Duquesne, but the expedition ended in disastrous defeat. In further action, Admiral Edward Boscawen fired on the French ship Alcide on 8 June 1755, capturing it and two troop ships. In September 1755, British colonial and French troops met in the inconclusive Battle of Lake George.
The British also harassed French shipping beginning in August 1755, seizing hundreds of ships and capturing thousands of merchant seamen while the two nations were nominally at peace. Incensed, France prepared to attack Hanover, whose prince-elector was also the King of Great Britain, and Menorca. Britain concluded a treaty whereby Prussia agreed to protect Hanover. In response France concluded an alliance with its long-time enemy Austria, an event known as the Diplomatic Revolution.
In the War of the Austrian Succession, which lasted from 1740 to 1748, King Frederick II of Prussia, known as Frederick the Great, seized the prosperous province of Silesia from Austria. Empress Maria Theresa of Austria had signed the Treaty of Aix-la-Chapelle in 1748 in order to gain time to rebuild her military forces and forge new alliances.
The War of the Austrian Succession had seen the belligerents aligned on a time-honoured basis. France's traditional enemies, Great Britain and Austria, had coalesced just as they had done against Louis XIV. Prussia, the leading anti-Austrian state in Germany, had been supported by France. Neither group, however, found much reason to be satisfied with its partnership: British subsidies to Austria produced nothing of much help to the British, while the British military effort had not saved Silesia for Austria. Prussia, having secured Silesia, came to terms with Austria in disregard of French interests. Even so, France concluded a defensive alliance with Prussia in 1747, and the maintenance of the Anglo-Austrian alignment after 1748 was deemed essential by the Duke of Newcastle, British secretary of state in the ministry of his brother Henry Pelham. The collapse of that system and the aligning of France with Austria and of Great Britain with Prussia constituted what is known as the "diplomatic revolution" or the "reversal of alliances".
In 1756 Austria was making military preparations for war with Prussia and pursuing an alliance with Russia for this purpose. On 2 June 1756, Austria and Russia concluded a defensive alliance that covered their own territory and Poland against attack by Prussia or the Ottoman Empire. They also agreed to a secret clause that promised the restoration of Silesia and the countship of Glatz (now Kłodzko, Poland) to Austria in the event of hostilities with Prussia. Their real desire, however, was to destroy Frederick's power altogether, reducing his sway to his electorate of Brandenburg and giving East Prussia to Poland, an exchange that would be accompanied by the cession of the Polish Duchy of Courland to Russia. Alexey Bestuzhev-Ryumin, grand chancellor of Russia under Empress Elizabeth, was hostile to both France and Prussia, but he could not persuade Austrian statesman Wenzel Anton von Kaunitz to commit to offensive designs against Prussia so long as Prussia was able to rely on French support.
The Hanoverian king George II of Great Britain was passionately devoted to his family's continental holdings, but his commitments in Germany were counterbalanced by the demands of the British colonies overseas. If war against France for colonial expansion was to be resumed, then Hanover had to be secured against Franco-Prussian attack. France was very much interested in colonial expansion and was willing to exploit the vulnerability of Hanover in war against Great Britain, but it had no desire to divert forces to Central Europe for Prussia's interest.
French policy was, moreover, complicated by the existence of the Secret du Roi—a system of private diplomacy conducted by King Louis XV. Unbeknownst to his foreign minister, Louis had established a network of agents throughout Europe with the goal of pursuing personal political objectives that were often at odds with France's publicly stated policies. Louis's goals for le Secret du roi included the Polish crown for his kinsman Louis François de Bourbon, Prince of Conti, and the maintenance of Poland, Sweden and Turkey as French allies in opposition to Russian and Austrian interests.
Frederick saw Saxony and Polish west Prussia as potential fields for expansion but could not expect French support if he started an aggressive war for them. If he joined the French against the British in the hope of annexing Hanover, he might fall victim to an Austro-Russian attack. The hereditary elector of Saxony, Frederick Augustus II, was also the elective King of Poland as Augustus III, but the two territories were physically separated by Brandenburg and Silesia. Neither state could pose as a great power. Saxony was merely a buffer between Prussia and Austrian Bohemia, whereas Poland, despite its union with the ancient lands of Lithuania, was prey to pro-French and pro-Russian factions. A Prussian scheme for compensating Frederick Augustus with Bohemia in exchange for Saxony obviously presupposed further spoliation of Austria.
In the attempt to satisfy Austria at the time, Britain gave their electoral vote in Hanover for the candidacy of Maria Theresa's son, Joseph II, as the Holy Roman Emperor, much to the dismay of Frederick and Prussia. Not only that, Britain would soon join the Austro-Russian alliance, but complications arose. Britain's basic framework for the alliance itself was to protect Hanover's interests against France. At the same time, Kaunitz kept approaching the French in the hope of establishing just such an alliance with Austria. Not only that, France had no intention to ally with Russia, who, years earlier, had meddled in France's affairs during Austria's succession war. France also saw the dismemberment of Prussia as threatening to the stability of Central Europe.
Years later, Kaunitz kept trying to establish France's alliance with Austria. He tried as hard as he could to avoid Austrian entanglement in Hanover's political affairs, and was even willing to trade Austrian Netherlands for France's aid in recapturing Silesia. Frustrated by this decision and by the Dutch Republic's insistence on neutrality, Britain soon turned to Russia. On 30 September 1755, Britain pledged financial aid to Russia in order to station 50,000 troops on the Livonian-Lithuanian border, so they could defend Britain's interests in Hanover immediately. Besthuzev, assuming the preparation was directed against Prussia, was more than happy to obey the request of the British. Unbeknownst to the other powers, King George II also made overtures to the Prussian king, Frederick, who, fearing the Austro-Russian intentions, was also desirous of a rapprochement with Britain. On 16 January 1756, the Convention of Westminster was signed, whereby Britain and Prussia promised to aid one another; the parties hoped to achieve lasting peace and stability in Europe.
The carefully worded agreement nonetheless proved catalytic for the other European powers, and the results were chaotic. Empress Elizabeth of Russia was outraged at the duplicity of Britain's position, and France was enraged and terrified by the sudden betrayal of its only ally, Prussia. Austria, and Kaunitz in particular, used the situation to its utmost advantage: the now-isolated France was forced to accede to the Austro-Russian alliance or face ruin. Thereafter, on 1 May 1756, the First Treaty of Versailles was signed, in which both nations pledged 24,000 troops to defend each other in the case of an attack. This diplomatic revolution proved to be an important cause of the war; although both treaties were ostensibly defensive in nature, the actions of both coalitions made the war virtually inevitable.
Methods and technologies
European warfare in the early modern period was characterised by the widespread adoption of firearms in combination with more traditional bladed weapons. Eighteenth-century European armies were built around units of massed infantry armed with smoothbore flintlock muskets and bayonets. Cavalrymen were equipped with sabres and pistols or carbines; light cavalry were used principally for reconnaissance, screening and tactical communications, while heavy cavalry were used as tactical reserves and deployed for shock attacks. Smoothbore artillery provided fire support and played the leading role in siege warfare. Strategic warfare in this period centred around control of key fortifications positioned so as to command the surrounding regions and roads, with lengthy sieges a common feature of armed conflict. Decisive field battles were relatively rare.
The Seven Years' War, like most European wars of the eighteenth century, was fought as a so-called cabinet war in which disciplined regular armies were equipped and supplied by the state to conduct warfare on behalf of the sovereign's interests. Occupied enemy territories were regularly taxed and extorted for funds, but large-scale atrocities against civilian populations were rare compared with conflicts in the previous century. Military logistics was the decisive factor in many wars, as armies had grown too large to support themselves on prolonged campaigns by foraging and plunder alone. Military supplies were stored in centralised magazines and distributed by baggage trains that were highly vulnerable to enemy raids. Armies were generally unable to sustain combat operations during winter and normally established winter quarters in the cold season, resuming their campaigns with the return of spring.
For much of the eighteenth century, France approached its wars in the same way. It would let colonies defend themselves or would offer only minimal help (sending them limited numbers of troops or inexperienced soldiers), anticipating that fights for the colonies would most likely be lost anyway. This strategy was to a degree forced upon France: geography, coupled with the superiority of the British navy, made it difficult for the French navy to provide significant supplies and support to overseas colonies. Similarly, several long land borders made an effective domestic army imperative for any French ruler. Given these military necessities, the French government, unsurprisingly, based its strategy overwhelmingly on the army in Europe: it would keep most of its army on the continent, hoping for victories closer to home. The plan was to fight to the end of hostilities and then, in treaty negotiations, to trade territorial acquisitions in Europe to regain lost overseas possessions (as had happened in, e.g., the Treaty of Saint-Germain-en-Laye (1632)). This approach did not serve France well in the war, as the colonies were indeed lost, and although much of the European war went well, by its end France had few counterbalancing European successes.
The British—by inclination as well as for practical reasons—had tended to avoid large-scale commitments of troops on the continent. They sought to offset the disadvantage of this in Europe by allying themselves with one or more continental powers whose interests were antithetical to those of their enemies, particularly France. By subsidising the armies of continental allies, Britain could turn London's enormous financial power to military advantage. In the Seven Years' War, the British chose as their principal partner the most brilliant general of the day, Frederick the Great of Prussia, then the rising power in central Europe, and paid Frederick substantial subsidies for his campaigns. This was accomplished in the diplomatic revolution of 1756, in which Britain ended its long-standing alliance with Austria in favour of Prussia, leaving Austria to side with France. In marked contrast to France, Britain strove to prosecute the war actively in the colonies, taking full advantage of its naval power. The British pursued a dual strategy—naval blockade and bombardment of enemy ports, and rapid movement of troops by sea. They harassed enemy shipping and attacked enemy colonies, frequently using colonists from nearby British colonies in the effort.
The Russians and the Austrians were determined to reduce the power of Prussia, the new threat on their doorstep, and Austria was anxious to regain Silesia, lost to Prussia in the War of the Austrian Succession. In 1756 Austria and Russia, together with France, agreed to mutual defence and to an attack by Austria and Russia on Prussia, subsidized by France.
William Pitt the Elder, who entered the cabinet in 1756, had a grand vision for the war that made it entirely different from previous wars with France. As the minister directing the war effort, Pitt committed Britain to a grand strategy of seizing the entire French Empire, especially its possessions in North America and India. Britain's main weapon was the Royal Navy, which could control the seas and bring as many invasion troops as were needed. He also planned to use colonial forces from the thirteen American colonies, working under the command of British regulars, to invade New France. In order to tie the French army down, he subsidized his European allies. Pitt effectively directed the war effort from 1756 to 1761, and even after that the British continued his strategy. It proved completely successful. Pitt had a clear appreciation of the enormous value of imperial possessions, and realized the vulnerability of the French Empire.
The British prime minister, the Duke of Newcastle, was optimistic that the new series of alliances could prevent war from breaking out in Europe. However, a large French force was assembled at Toulon, and the French opened the campaign against the British with an attack on Menorca in the Mediterranean. A British attempt at relief was foiled at the Battle of Minorca, and the island was captured on 28 June (for which Admiral Byng was court-martialed and executed). Britain formally declared war on France on 17 May, nearly two years after fighting had broken out in the Ohio Country.
Frederick II of Prussia had received reports of the clashes in North America and had formed an alliance with Great Britain. On 29 August 1756, he led Prussian troops across the border of Saxony, one of the small German states in league with Austria. He intended this as a bold pre-emption of an anticipated Austro-French invasion of Silesia. He had three goals in his new war on Austria. First, he would seize Saxony and eliminate it as a threat to Prussia, then use the Saxon army and treasury to aid the Prussian war effort. His second goal was to advance into Bohemia, where he might set up winter quarters at Austria's expense. Thirdly, he wanted to invade Moravia from Silesia, seize the fortress at Olmütz, and advance on Vienna to force an end to the war.
Accordingly, leaving Field Marshal Count Kurt von Schwerin in Silesia with 25,000 soldiers to guard against incursions from Moravia and Hungary, and leaving Field Marshal Hans von Lehwaldt in East Prussia to guard against Russian invasion from the east, Frederick set off with his army for Saxony. The Prussian army marched in three columns. On the right was a column of about 15,000 men under the command of Prince Ferdinand of Brunswick. On the left was a column of 18,000 men under the command of the Duke of Brunswick-Bevern. In the centre was Frederick II, himself with Field Marshal James Keith commanding a corps of 30,000 troops. Ferdinand of Brunswick was to close in on the town of Chemnitz. The Duke of Brunswick-Bevern was to traverse Lusatia to close in on Bautzen. Meanwhile, Frederick and Keith would make for Dresden.
The Saxon and Austrian armies were unprepared, and their forces were scattered. Frederick occupied Dresden with little or no opposition from the Saxons. At the Battle of Lobositz on 1 October 1756, Frederick stumbled into one of the embarrassments of his career. Severely underestimating a reformed Austrian army under General Maximilian Ulysses Browne, he found himself outmanoeuvred and outgunned, and at one point in the confusion even ordered his troops to fire on retreating Prussian cavalry. Frederick actually fled the field of battle, leaving Field Marshal Keith in command. Browne, however, also left the field, in a vain attempt to meet up with an isolated Saxon army holed up in the fortress at Pirna. As the Prussians technically remained in control of the field of battle, Frederick, in a masterful cover-up, claimed Lobositz as a Prussian victory. The Prussians then occupied Saxony; after the Siege of Pirna, the Saxon army surrendered in October 1756 and was forcibly incorporated into the Prussian army. The attack on neutral Saxony caused outrage across Europe and led to the strengthening of the anti-Prussian coalition. The Austrians had succeeded in partially occupying Silesia and, more importantly, in denying Frederick winter quarters in Bohemia. Frederick had been overconfident to the point of arrogance, and his errors were very costly for Prussia's smaller army, leading him to remark that he was not fighting the same Austrians as he had during the previous war.
Britain had been surprised by the sudden Prussian offensive but now began shipping supplies and £670,000 (equivalent to £100.4 million in 2020) to its new ally. A combined force of allied German states was organised by the British to protect Hanover from French invasion, under the command of the Duke of Cumberland. The British attempted to persuade the Dutch Republic to join the alliance, but the request was rejected, as the Dutch wished to remain fully neutral. Despite the huge disparity in numbers, the year had been successful for the Prussian-led forces on the continent, in contrast to the British campaigns in North America.
On 18 April 1757, Frederick II again took the initiative by marching into the Kingdom of Bohemia, hoping to inflict a decisive defeat on Austrian forces. After winning the bloody Battle of Prague on 6 May 1757, in which both forces suffered major casualties, the Prussians forced the Austrians back into the fortifications of Prague and laid siege to the city. In response, the Austrian commander Leopold von Daun collected a force of 30,000 men to come to its relief. Frederick took 5,000 troops from the siege and sent them to reinforce the 19,000-man army under the Duke of Brunswick-Bevern at Kolín in Bohemia. Von Daun arrived too late to participate in the Battle of Prague, but picked up 16,000 men who had escaped from that battle. With this army he moved slowly to relieve Prague. The Prussian army was too weak to besiege Prague and keep von Daun away at the same time, and Frederick was forced to attack prepared positions. The resulting Battle of Kolín was a sharp defeat for Frederick, his first. His losses forced him to lift the siege and withdraw from Bohemia altogether.
Later that summer, the Russians under Field Marshal Stepan Fyodorovich Apraksin besieged Memel with 75,000 troops. Memel had one of the strongest fortresses in Prussia. However, after five days of artillery bombardment the Russian army was able to storm it. The Russians then used Memel as a base to invade East Prussia and defeated a smaller Prussian force in the fiercely contested Battle of Gross-Jägersdorf on 30 August 1757. In the words of the American historian Daniel Marston, Gross-Jägersdorf left the Prussians with "a newfound respect for the fighting capabilities of the Russians that was reinforced in the later battles of Zorndorf and Kunersdorf". However, the Russians were not yet able to take Königsberg after using up their supplies of cannonballs at Memel and Gross-Jägersdorf and retreated soon afterwards.
Logistics was a recurring problem for the Russians throughout the war. The Russians lacked a quartermaster's department capable of keeping armies operating in Central Europe properly supplied over the primitive mud roads of eastern Europe. The tendency of Russian armies to break off operations after fighting a major battle, even when they were not defeated, was less about their casualties and more about their supply lines; after expending much of their munitions in a battle, Russian generals did not wish to risk another battle knowing resupply would be a long time coming. This long-standing weakness was evident in the Russian-Ottoman War of 1735–1739, where Russian battle victories led to only modest war gains due to problems supplying their armies. The Russian quartermaster's department had not improved, so the same problems recurred in Prussia. Still, the Imperial Russian Army was a new threat to Prussia. Not only was Frederick forced to break off his invasion of Bohemia, but he now had to withdraw further into Prussian-controlled territory. His defeats on the battlefield brought still more opportunistic nations into the war. Sweden declared war on Prussia and invaded Pomerania with 17,000 men. Sweden judged that this small army was all that was needed to occupy Pomerania and that it would not need to engage the Prussians, who were occupied on so many other fronts.
Things were looking grim for Prussia now, with the Austrians mobilising to attack Prussian-controlled soil and a combined French and Reichsarmee army under Prince Soubise approaching from the west. The Reichsarmee was a collection of armies from the smaller German states that had banded together to heed the appeal of the Holy Roman Emperor Franz I of Austria against Frederick. However, in November and December 1757, the whole situation in Germany was reversed. First, Frederick devastated Soubise's forces at the Battle of Rossbach on 5 November 1757 and then routed a vastly superior Austrian force at the Battle of Leuthen on 5 December 1757. Rossbach was the only battle between the French and the Prussians during the entire war. At Rossbach, the Prussians lost about 548 men killed while the Franco-Reichsarmee force under Soubise lost about 10,000 killed. Frederick always called Leuthen his greatest victory, an assessment shared by many at the time as the Austrian Army was considered to be a highly professional force. With these victories, Frederick once again established himself as Europe's premier general and his men as Europe's most accomplished soldiers. However, Frederick missed an opportunity to completely destroy the Austrian army at Leuthen; although depleted, it escaped back into Bohemia. He hoped the two smashing victories would bring Maria Theresa to the peace table, but she was determined not to negotiate until she had re-taken Silesia. Maria Theresa also improved the Austrians' command after Leuthen by replacing her incompetent brother-in-law, Charles of Lorraine, with von Daun, who was now a field marshal.
This problem was compounded when the main Hanoverian army under Cumberland, which included Hesse-Kassel and Brunswick troops, was defeated at the Battle of Hastenbeck and forced to surrender entirely at the Convention of Klosterzeven following a French invasion of Hanover. The convention removed Hanover from the war, leaving the western approach to Prussian territory extremely vulnerable. Frederick sent urgent requests to Britain for more substantial assistance, as he was now without any outside military support for his forces in Germany.
Calculating that no further Russian advance was likely until 1758, Frederick moved the bulk of his eastern forces to Pomerania under the command of Marshal Lehwaldt, where they were to repel the Swedish invasion. In short order, the Prussian army drove the Swedes back, occupied most of Swedish Pomerania, and blockaded its capital Stralsund. George II of Great Britain, on the advice of his British ministers after the Battle of Rossbach, revoked the Convention of Klosterzeven, and Hanover re-entered the war. Over the winter the new commander of the Hanoverian forces, Duke Ferdinand of Brunswick (until shortly before a commander in the Prussian army), regrouped his army and launched a series of offensives that drove the French back across the River Rhine. Ferdinand's forces kept Prussia's western flank secure for the rest of the war. The British had suffered further defeats in North America, particularly at Fort William Henry. At home, however, stability had been established. Since 1756, successive governments led first by Newcastle and then by Pitt had fallen. In August 1757, the two men agreed to a political partnership and formed a coalition government that gave new, firmer direction to the war effort. The new strategy emphasised both Newcastle's commitment to British involvement on the continent, particularly in defence of its German possessions, and Pitt's determination to use naval power to seize French colonies around the globe. This "dual strategy" would dominate British policy for the next five years.
Between 10 and 17 October 1757, a Hungarian general, Count András Hadik, serving in the Austrian army, executed what may be the most famous hussar action in history. When the Prussian king, Frederick, was marching south with his powerful armies, the Hungarian general unexpectedly swung his force of 5,000, mostly hussars, around the Prussians and occupied part of their capital, Berlin, for one night. The city was spared for a negotiated ransom of 200,000 thalers. When Frederick heard about this humiliating occupation, he immediately sent a larger force to free the city. Hadik, however, left the city with his hussars and safely reached the Austrian lines. Subsequently, Hadik was promoted to the rank of marshal in the Austrian Army.
In early 1758, Frederick launched an invasion of Moravia and laid siege to Olmütz (now Olomouc, Czech Republic). Following an Austrian victory at the Battle of Domstadtl that wiped out a supply convoy destined for Olmütz, Frederick broke off the siege and withdrew from Moravia. It marked the end of his final attempt to launch a major invasion of Austrian territory. In January 1758, the Russians had invaded East Prussia; the province, almost denuded of troops, put up little opposition. East Prussia remained under Russian control until 1762, although it was far less strategically valuable to Prussia than Brandenburg or Silesia. In any case, Frederick did not see the Russians as an immediate threat and instead entertained hopes of first fighting a decisive battle against Austria that would knock them out of the war.
In April 1758, the British concluded the Anglo-Prussian Convention with Frederick in which they committed to pay him an annual subsidy of £670,000. Britain also dispatched 9,000 troops to reinforce Ferdinand's Hanoverian army, the first British troop commitment on the continent and a reversal in the policy of Pitt. Ferdinand's Hanoverian army, supplemented by some Prussian troops, had succeeded in driving the French from Hanover and Westphalia and re-captured the port of Emden in March 1758; Ferdinand then crossed the Rhine with his own forces, which caused alarm in France. Despite Ferdinand's victory over the French at the Battle of Krefeld and the brief occupation of Düsseldorf, he was compelled by the successful manoeuvring of larger French forces to withdraw across the Rhine.
By this point Frederick was increasingly concerned by the Russian advance from the east and marched to counter it. Just east of the Oder in Brandenburg-Neumark, at the Battle of Zorndorf (now Sarbinowo, Poland), a Prussian army of 35,000 men under Frederick fought a Russian army of 43,000 commanded by Count William Fermor on 25 August 1758. Both sides suffered heavy casualties (the Prussians 12,800, the Russians 18,000), but the Russians withdrew, and Frederick claimed victory. The American historian Daniel Marston described Zorndorf as a "draw", as both sides were so exhausted and had taken such losses that neither wished to fight another battle with the other. In the indecisive Battle of Tornow on 25 September, a Swedish army repulsed six assaults by a Prussian army but did not push on towards Berlin after the Battle of Fehrbellin.
The war was continuing indecisively when on 14 October Marshal Daun's Austrians surprised the main Prussian army at the Battle of Hochkirch in Saxony. Frederick lost much of his artillery but retreated in good order, helped by dense woods. The Austrians had ultimately made little progress in the campaign in Saxony despite Hochkirch and had failed to achieve a decisive breakthrough. After a thwarted attempt to take Dresden, Daun's troops were forced to withdraw to Austrian territory for the winter, so that Saxony remained under Prussian occupation. At the same time, the Russians failed in an attempt to take Kolberg in Pomerania (now Kołobrzeg, Poland) from the Prussians.
In France, 1758 had been disappointing, and in the wake of this a new chief minister, the Duc de Choiseul, was appointed. Choiseul planned to end the war in 1759 by making strong attacks on Britain and Hanover.
Prussia suffered several defeats in 1759. At the Battle of Kay, or Paltzig, the Russian Count Saltykov with 47,000 Russians defeated 26,000 Prussians commanded by General Carl Heinrich von Wedel. Though the Hanoverians defeated an army of 60,000 French at Minden, Austrian general Daun forced the surrender of an entire Prussian corps of 13,000 in the Battle of Maxen. Frederick himself lost half his army in the Battle of Kunersdorf (now Kunowice, Poland), the worst defeat in his military career and one that drove him to the brink of abdication and thoughts of suicide. The disaster resulted partly from his misjudgment of the Russians, who had already demonstrated their strength at Zorndorf and at Gross-Jägersdorf (now Motornoye, Russia), and partly from good cooperation between the Russian and Austrian forces. However, disagreements with the Austrians over logistics and supplies resulted in the Russians withdrawing east yet again after Kunersdorf, ultimately enabling Frederick to re-group his shattered forces.
The French planned to invade the British Isles during 1759 by accumulating troops near the mouth of the Loire and concentrating their Brest and Toulon fleets. However, two sea defeats prevented this. In August, the Mediterranean fleet under Jean-François de La Clue-Sabran was scattered by a larger British fleet under Edward Boscawen at the Battle of Lagos. In the Battle of Quiberon Bay on 20 November, the British admiral Edward Hawke with 23 ships of the line caught the French Brest fleet with 21 ships of the line under Marshal de Conflans and sank, captured, or forced many of them aground, putting an end to the French plans.
The year 1760 brought yet more Prussian disasters. General Fouqué was defeated by the Austrians at the Battle of Landshut. The French captured Marburg in Hesse, and the Swedes took part of Pomerania. The Hanoverians were victorious over the French at the Battle of Warburg, their continued success preventing France from sending troops to aid the Austrians against Prussia in the east.
Despite this, the Austrians, under the command of General Laudon, captured Glatz (now Kłodzko, Poland) in Silesia. In the Battle of Liegnitz Frederick scored a strong victory despite being outnumbered three to one. The Russians under General Saltykov and Austrians under General Lacy briefly occupied his capital, Berlin, in October, but could not hold it for long. Still, the loss of Berlin to the Russians and Austrians was a great blow to Frederick's prestige, as many pointed out that the Prussians had no hope of occupying St. Petersburg or Vienna, even temporarily. In November 1760 Frederick was once more victorious, defeating the able Daun in the Battle of Torgau, but he suffered very heavy casualties, and the Austrians retreated in good order.
Meanwhile, after the battle of Kunersdorf, the Russian army remained largely inactive, due mostly to its tenuous supply lines. Russian logistics were so poor that in October 1759 an agreement was signed under which the Austrians undertook to supply the Russians, as the quartermaster's department of the Russian Army was badly strained by the demands of armies operating so far from home. As it was, the requirement that the Austrian quartermaster's department supply both the Austrian and Russian armies proved beyond its capacity, and in practice the Russians received little in the way of supplies from the Austrians. At Liegnitz (now Legnica, Poland), the Russians arrived too late to participate in the battle. They made two attempts to storm the fortress of Kolberg, but neither succeeded. The tenacious resistance of Kolberg allowed Frederick to focus on the Austrians instead of having to split his forces.
Prussia began the 1761 campaign with just 100,000 available troops, many of them new recruits, and its situation seemed desperate. However, the Austrian and Russian forces were also heavily depleted and could not launch a major offensive.
In February 1761 Duke Ferdinand of Brunswick surprised French troops at Langensalza and then advanced to besiege Cassel in March. He was forced to lift the siege and retreat after French forces regrouped and captured several thousand of his men at the Battle of Grünberg. At the Battle of Villinghausen, forces under Ferdinand defeated a 92,000-man French army.
On the eastern front, progress was very slow. The Russian army was heavily dependent upon its main magazines in Poland, and the Prussian army launched several successful raids against them. One of them, led by General Platen in September, resulted in the loss of 2,000 Russians, mostly captured, and the destruction of 5,000 wagons. Deprived of men, the Prussians had to resort to this new sort of warfare, raiding, to delay the advance of their enemies. Frederick's army, though depleted, was left unmolested at its headquarters in Brunzelwitz, as both the Austrians and the Russians were hesitant to attack it. Nonetheless, at the end of 1761, Prussia suffered two critical setbacks. The Russians under Zakhar Chernyshev and Pyotr Rumyantsev stormed Kolberg in Pomerania, while the Austrians captured Schweidnitz. The loss of Kolberg cost Prussia its last port on the Baltic Sea. A major problem for the Russians throughout the war had always been their weak logistics, which prevented their generals from following up their victories; now, with the fall of Kolberg, the Russians could at long last supply their armies in Central Europe by sea. The fact that the Russians could now supply their armies over the sea, which was considerably faster and safer than over land (Prussian cavalry could not intercept Russian ships in the Baltic), threatened to swing the balance of power decisively against Prussia, as Frederick could not spare any troops to protect his capital. In Britain, it was speculated that a total Prussian collapse was now imminent.
Britain now threatened to withdraw its subsidies if Frederick did not consider offering concessions to secure peace. As the Prussian armies had dwindled to just 60,000 men and with Berlin itself about to come under siege, the survival of both Prussia and its king was severely threatened. Then on 5 January 1762 the Russian Empress Elizabeth died. Her Prussophile successor, Peter III, at once ended the Russian occupation of East Prussia and Pomerania (see the Treaty of Saint Petersburg (1762)) and mediated Frederick's truce with Sweden. He also placed a corps of his own troops under Frederick's command. Frederick was then able to muster a larger army, of 120,000 men, and concentrate it against Austria. He drove the Austrians from much of Silesia after recapturing Schweidnitz, while his brother Henry won a victory in Saxony in the Battle of Freiberg (29 October 1762). At the same time, his Brunswick allies captured the key town of Göttingen and compounded this by taking Cassel.
Two new countries entered the war in 1762. Britain declared war against Spain on 4 January 1762; Spain reacted by issuing its own declaration of war against Britain on 18 January. Portugal followed by joining the war on Britain's side. Spain, aided by the French, launched an invasion of Portugal and succeeded in capturing Almeida. The arrival of British reinforcements stalled a further Spanish advance, and in the Battle of Valencia de Alcántara British-Portuguese forces overran a major Spanish supply base. The invaders were stopped on the heights in front of Abrantes (called the pass to Lisbon), where the Anglo-Portuguese were entrenched. Eventually the Anglo-Portuguese army, aided by guerrillas and practicing a scorched-earth strategy, chased the greatly reduced Franco-Spanish army back to Spain, recovering almost all the lost towns, among them the Spanish headquarters at Castelo Branco, which was full of wounded and sick who had been left behind.
Meanwhile, the long British naval blockade of French ports had sapped the morale of the French populace. Morale declined further when news of defeat in the Battle of Signal Hill in Newfoundland reached Paris. After Russia's about-face, Sweden's withdrawal and Prussia's two victories against Austria, Louis XV became convinced that Austria would be unable to re-conquer Silesia (the condition for which France would receive the Austrian Netherlands) without financial and material subsidies, which Louis was no longer willing to provide. He therefore made peace with Frederick and evacuated Prussia's Rhineland territories, ending France's involvement in the war in Germany.
By 1763, the war in central Europe was essentially a stalemate between Prussia and Austria. Prussia had retaken nearly all of Silesia from the Austrians after Frederick's narrow victory over Daun at the Battle of Burkersdorf. After his brother Henry's 1762 victory at the Battle of Freiberg, Frederick held most of Saxony but not its capital, Dresden. His financial situation was not dire, but his kingdom was devastated and his army severely weakened. His manpower had dramatically decreased, and he had lost so many effective officers and generals that an offensive against Dresden seemed impossible. British subsidies had been stopped by the new prime minister, Lord Bute, and the Russian emperor had been overthrown by his wife, Catherine, who ended Russia's alliance with Prussia and withdrew from the war. Austria, however, like most participants, was facing a severe financial crisis and had to decrease the size of its army, which greatly affected its offensive power. Indeed, after having effectively sustained a long war, its administration was in disarray. By that time, it still held Dresden, the southeastern parts of Saxony, and the county of Glatz in southern Silesia, but the prospect of victory was dim without Russian support, and Maria Theresa had largely given up her hopes of re-conquering Silesia; her Chancellor, husband and eldest son were all urging her to make peace, while Daun was hesitant to attack Frederick. In 1763 a peace settlement was reached at the Treaty of Hubertusburg, in which Glatz was returned to Prussia in exchange for the Prussian evacuation of Saxony. This ended the war in central Europe.
The stalemate had effectively been reached by 1759–1760, and Prussia and Austria were nearly out of money. The materials of both sides had been largely consumed. Frederick was no longer receiving subsidies from Britain; the "Golden Cavalry of St. George", as the British subsidies in gold were known, had produced the equivalent of nearly 13 million dollars. He had melted down and coined most of the church silver, had ransacked the palaces of his kingdom and coined that silver as well, and had reduced the coinage's purchasing power by mixing the silver with copper. His banks' capital was exhausted, and he had pawned nearly everything of value from his own estate. While Frederick still had a significant amount of money left from the earlier British subsidies, he hoped to use it to restore his kingdom's prosperity in peacetime; in any case, Prussia's population was so depleted that he could not sustain another long campaign. Similarly, Maria Theresa had reached the limit of her resources. She had pawned her jewels in 1758; in 1760, she approved a public subscription for support and urged her public to bring their silver to the mint. French subsidies were no longer provided. Although she had many young men still to draft, she could not conscript them and did not dare to resort to impressment, as Frederick had done. She had even dismissed some men because it was too expensive to feed them.
British amphibious "descents"
Great Britain planned a "descent" (an amphibious demonstration or raid) on Rochefort, a joint operation to overrun the town and burn shipping in the Charente. The expedition set out on 8 September 1757, Sir John Mordaunt commanding the troops and Sir Edward Hawke the fleet. On 23 September the Isle d'Aix was taken, but military staff dithered and lost so much time that Rochefort became unassailable. The expedition abandoned the Isle d'Aix, returning to Great Britain on 1 October.
Despite the debatable strategic success and the operational failure of the descent on Rochefort, William Pitt, who saw purpose in this type of asymmetric enterprise, prepared to continue such operations. An army was assembled under the command of Charles Spencer, 3rd Duke of Marlborough; he was aided by Lord George Sackville. The naval squadron and transports for the expedition were commanded by Richard Howe. The army landed on 5 June 1758 at Cancalle Bay, proceeded to St. Malo, and, finding that it would take a prolonged siege to capture it, instead attacked the nearby port of St. Servan. It burned shipping in the harbour, roughly 80 French privateers and merchantmen, as well as four warships which were under construction. The force then re-embarked under threat of the arrival of French relief forces. An attack on Havre de Grace was called off, and the fleet sailed on to Cherbourg; the weather being bad and provisions low, that too was abandoned, and the expedition returned having damaged French privateering and provided further strategic demonstration against the French coast.
Pitt now prepared to send troops into Germany; and both Marlborough and Sackville, disgusted by what they perceived as the futility of the "descents", obtained commissions in that army. The elderly General Bligh was appointed to command a new "descent", escorted by Howe. The campaign began propitiously with the Raid on Cherbourg. Covered by naval bombardment, the army drove off the French force detailed to oppose their landing, captured Cherbourg, and destroyed its fortifications, docks and shipping.
The troops were re-embarked and moved to the Bay of St. Lunaire in Brittany where, on 3 September, they were landed to operate against St. Malo; however, this action proved impractical. Worsening weather forced the fleet and the army to separate: the ships sailed for the safer anchorage of St. Cast, while the army proceeded overland. The tardiness of Bligh in moving his forces allowed a French force of 10,000 from Brest to catch up with him and open fire on the troops as they re-embarked. At the Battle of Saint Cast a rear-guard of 1,400 under Dury held off the French while the rest of the army embarked. The rear-guard itself could not be saved; 750 men, including Dury, were killed and the rest captured.
The colonial conflict, fought mainly between France and Britain, took place in India, North America, Europe, the Caribbean isles, the Philippines, and coastal Africa. Over the course of the war, Great Britain gained enormous areas of land and influence at the expense of the French and Spanish empires.
Great Britain lost Menorca in the Mediterranean to the French in 1756 but captured the French colonies in Senegal in 1758. More importantly, the British defeated the French defence of New France in 1759, with the fall of Quebec. The buffer that French North America had provided to New Spain, the Spanish Empire's most important overseas holding, was now lost. Spain had entered the war in 1761 following the Third Family Compact (15 August 1761) with France. The British Royal Navy took the French Caribbean sugar colonies of Guadeloupe in 1759 and Martinique in 1762, as well as the Spanish Empire's main port in the Caribbean, Havana in Cuba, and its main Asian port of Manila in the Philippines, both major Spanish colonial cities. British attempts at expansion into the hinterlands of Cuba and the Philippines met with stiff resistance. In the Philippines, the British were confined to Manila until their agreed-upon withdrawal at the war's end.
During the war, the Six Nations of the Iroquois Confederacy were allied with the British. Native Americans of the Laurentian valley (the Algonquin, the Abenaki, the Huron and others) were allied with the French. Although the Algonquin tribes living north of the Great Lakes and along the St. Lawrence River were not directly concerned with the fate of the Ohio River Valley tribes, they had been victims of the Iroquois Confederacy, which included the Seneca, Mohawk, Oneida, Onondaga, Cayuga and Tuscarora tribes of central New York. The Iroquois had encroached on Algonquin territory and pushed the Algonquins west beyond Lake Michigan and to the shore of the St. Lawrence. The Algonquin tribes were therefore interested in fighting against the Iroquois. Throughout New England, New York and the north-west, Native American tribes formed differing alliances with the major belligerents.
In 1756 and 1757 the French captured forts Oswego and William Henry from the British. The latter victory was marred when France's native allies broke the terms of capitulation and attacked the retreating British column, which was under French guard, slaughtering and scalping soldiers and taking captive many men, women and children while the French refused to protect their captives. French naval deployments in 1757 also successfully defended the key Fortress of Louisbourg on Cape Breton Island, called Île Royale by the French, securing the seaward approaches to Quebec.
William Pitt's focus on the colonies for the 1758 campaign paid off with the taking of Louisbourg, after French reinforcements were blocked by the British naval victory in the Battle of Cartagena, and with the successful capture of Fort Duquesne and Fort Frontenac. The British also continued the process of deporting the Acadian population with a wave of major operations against Île Saint-Jean (present-day Prince Edward Island) and the St. John River and Petitcodiac River valleys. The celebration of these successes was dampened by the embarrassing British defeat at the Battle of Carillon (Ticonderoga), in which 4,000 French troops repulsed 16,000 British. When the British, led by Generals James Abercrombie and George Howe, attacked, they believed that the French under the Marquis de Montcalm were protected only by a small abatis that could be taken easily given the British force's significant numerical advantage. The British offensive, which was supposed to advance in tight columns and overwhelm the French defenders, fell into confusion and scattered, leaving large gaps in its ranks. When the French Chevalier de Levis sent 1,000 soldiers to reinforce Montcalm's struggling troops, the British were pinned down in the brush by intense French musket fire and were forced to retreat.
All of Britain's campaigns against New France succeeded in 1759, part of what became known as an Annus Mirabilis. In July 1759 Fort Niagara and Fort Carillon fell to sizable British forces, cutting off French frontier forts further west. Starting in June 1759, the British under James Wolfe and James Murray set up camp on the Île d'Orléans across the St. Lawrence River from Quebec, enabling them to commence the three-month siege that ensued. The French under the Marquis de Montcalm anticipated a British assault to the east of Quebec, so he ordered his soldiers to fortify the region of Beauport. On 31 July the British attacked with 4,000 soldiers, but the French, positioned high up on the cliffs overlooking the Montmorency Falls, forced the British forces to withdraw to the Île d'Orléans. While Wolfe and Murray planned a second offensive, British rangers raided French settlements along the St. Lawrence, destroying food supplies, ammunition and other goods in an attempt to vanquish the French through starvation.
On 13 September 1759, General James Wolfe led 5,000 troops up a goat path to the Plains of Abraham, 1 mile west of Quebec City. He had positioned his army between Montcalm's forces, an hour's march to the east, and Bougainville's regiments to the west, which could be mobilised within three hours. Instead of waiting for a coordinated attack with Bougainville, Montcalm attacked immediately. When his 3,500 troops advanced, their lines became scattered in a disorderly formation. Many French soldiers fired before they were within effective range of the British. Wolfe organised his troops in two lines stretching one mile across the Plains of Abraham. They were ordered to load their Brown Bess muskets with two bullets to obtain maximum power and to hold their fire until the French soldiers came within 40 paces of the British ranks. When Montcalm's army came within range, the British volley was devastating, with nearly every bullet finding its target and tearing through the French ranks. The French fled the Plains of Abraham in a state of utter confusion, pursued by members of the Scottish Fraser regiment and other British forces. Despite taking casualties from the musket fire of the Canadiens and their indigenous allies, the British vastly outnumbered these opponents and won the Battle of the Plains of Abraham. General Wolfe was mortally wounded in the chest early in the battle, so command fell to James Murray, who would become the lieutenant governor of Quebec after the war. The Marquis de Montcalm was also severely wounded later in the battle and died the following day. The French abandoned the city, and French forces led by the Chevalier de Lévis staged a counteroffensive on the Plains of Abraham in the spring of 1760, with initial success at the Battle of Sainte-Foy. During the subsequent siege of Quebec, however, Lévis was unable to retake the city, largely because of British naval superiority following the Battle of Neuville and the Battle of Restigouche, which allowed the British to be resupplied but not the French. The French forces retreated to Montreal in the summer of 1760, and after a two-month campaign by overwhelming British forces, they surrendered on 8 September, essentially ending the French Empire in North America.
Seeing the French and their Indian allies defeated, in 1760 the Six Nations of the Iroquois Confederacy withdrew from the war and negotiated the Treaty of Kahnawake with the British. Among its conditions was their unrestricted travel between Canada and New York, as the nations had extensive trade between Montreal and Albany, as well as populations living throughout the area.
In 1762, towards the end of the war, French forces attacked St. John's, Newfoundland. If successful, the expedition would have strengthened France's hand at the negotiating table. Although they took St. John's and raided nearby settlements, the French forces were eventually defeated by British troops at the Battle of Signal Hill. This was the final battle of the war in North America, and it forced the French to surrender to Lieutenant Colonel William Amherst. The victorious British now controlled all of eastern North America.
The history of the Seven Years' War in North America, particularly the expulsion of the Acadians, the siege of Quebec, the death of Wolfe, and the Battle of Fort William Henry generated a vast number of ballads, broadsides, images, and novels (see Longfellow's Evangeline, Benjamin West's The Death of General Wolfe, James Fenimore Cooper's The Last of the Mohicans), maps and other printed materials, which testify to how this event held the imagination of the British and North American public long after Wolfe's death in 1759.
Between September 1762 and April 1763, Spanish forces led by Don Pedro Antonio de Cevallos, Governor of Buenos Aires (and later first Viceroy of the Río de la Plata), undertook a campaign against the Portuguese in the Banda Oriental, now Uruguay and southern Brazil. The Spanish conquered the Portuguese settlements of Colonia do Sacramento and Rio Grande de São Pedro and forced the Portuguese to surrender and retreat.
Under the Treaty of Paris (1763), Spain had to return to Portugal the settlement of Colonia do Sacramento, while the vast and rich territory of the so-called "Continent of S. Peter" (the present-day Brazilian state of Rio Grande do Sul) would be retaken from the Spanish army during the undeclared Hispano-Portuguese war of 1763–1777.
As a consequence of the war, the Valdivian Fort System, a Spanish defensive complex in southern Chile, was updated and reinforced from 1764 onwards. Other vulnerable localities of colonial Chile, such as the Chiloé Archipelago, Concepción, the Juan Fernández Islands and Valparaíso, were also made ready for a possible English attack. The war also contributed to a decision to improve communications between Buenos Aires and Lima, resulting in the establishment of a series of mountain shelters in the high Andes called Casuchas del Rey.
In India, the outbreak of the Seven Years' War in Europe renewed the long running conflict between the French and the British trading companies for influence on the subcontinent. The French allied themselves with the Mughal Empire to resist British expansion. The war began in Southern India but spread into Bengal, where British forces under Robert Clive recaptured Calcutta from the Nawab Siraj ud-Daulah, a French ally, and ousted him from his throne at the Battle of Plassey in 1757. In the same year, the British also captured Chandernagar, the French settlement in Bengal.
In the south, although the French captured Cuddalore, their siege of Madras failed, while the British commander Sir Eyre Coote decisively defeated the Comte de Lally at the Battle of Wandiwash in 1760 and overran the French territory of the Northern Circars. The French capital in India, Pondicherry, fell to the British in 1761; together with the fall of the lesser French settlements of Karikal and Mahé this effectively eliminated French power in India.
In 1758, at the urging of an American merchant, Thomas Cumming, Pitt dispatched an expedition to take the French settlement at Saint-Louis, Senegal. The British captured Senegal with ease in May 1758 and brought home large amounts of captured goods. This success convinced Pitt to launch two further expeditions to take the island of Gorée and the French trading post on the Gambia. The loss of these valuable colonies further weakened the French economy.
The Anglo-French hostilities were ended in 1763 by the Treaty of Paris, which involved a complex series of land exchanges, the most important being France's cession to Spain of Louisiana, and to Great Britain of the rest of New France. Britain returned to France the islands of St. Pierre and Miquelon, which had been ceded to Britain in 1713 under the Treaty of Utrecht, to support French fishing rights. Faced with the choice of regaining either New France or its Caribbean island colonies of Guadeloupe and Martinique, France chose the latter to retain these lucrative sources of sugar, writing off New France as an unproductive, costly territory. France also returned Menorca to the British. Spain lost control of Florida to Great Britain, but it received from the French the Île d'Orléans and all of the former French holdings west of the Mississippi River. The exchanges suited the British as well, as their own Caribbean islands already supplied ample sugar, and, with the acquisition of New France and Florida, they now controlled all of North America east of the Mississippi.
In India, the British retained the Northern Circars, but returned all the French trading ports. The treaty, however, required that the fortifications of these settlements be destroyed and never rebuilt, while only minimal garrisons could be maintained there, thus rendering them worthless as military bases. Combined with the loss of France's ally in Bengal and the defection of Hyderabad to the British as a result of the war, this effectively brought French power in India to an end, making way for British hegemony and eventual control of the subcontinent. France's navy was crippled by the war. Only after an ambitious rebuilding program in combination with Spain was France again able to challenge Britain's command of the sea.
Bute's settlement with France was milder than the one Pitt would have sought. Bute hoped for a lasting peace with France and was afraid that if he took too much, the whole of Europe would unite in envious hostility against Great Britain. Choiseul, however, had no intention of making a permanent peace, and when France went to war with Great Britain during the American Revolution, the British found no support among the European powers. France's defeat caused the French to embark upon major military reforms, with particular attention being paid to the artillery. The origins of the famed French artillery that played a prominent role in the wars of the French Revolution and beyond can be traced to the military reforms that started in 1763.
The Treaty of Hubertusburg, between Austria, Prussia and Saxony, was signed on 15 February 1763, at a hunting lodge between Dresden and Leipzig. Negotiations had started there on 31 December 1762. Frederick, who had considered ceding East Prussia to Russia if Peter III helped him secure Saxony, finally insisted on excluding Russia (in fact, no longer a belligerent) from the negotiations. At the same time, he refused to evacuate Saxony until its elector had renounced any claim to reparation. The Austrians wanted at least to retain Glatz, which they had in fact reconquered, but Frederick would not allow it. The treaty simply restored the status quo of 1748, with Silesia and Glatz reverting to Frederick and Saxony to its own elector. The only concession that Prussia made to Austria was to consent to the election of Archduke Joseph as Holy Roman emperor. Saxony emerged from the war weakened and bankrupt; despite losing no territory, Saxony had essentially been a battleground between Prussia and Austria throughout the conflict, with many of its towns and cities (including the capital of Dresden) damaged by bombardment and looting.
Austria was not able to retake Silesia or make any significant territorial gain. However, it did prevent Prussia from invading parts of Saxony. More significantly, its military performance proved far better than during the War of the Austrian Succession and seemed to vindicate Maria Theresa's administrative and military reforms. Hence, Austria's prestige was restored in great part and the empire secured its position as a major player in the European system. Also, by promising to vote for Joseph II in the Imperial elections, Frederick II accepted the Habsburg preeminence in the Holy Roman Empire. The survival of Prussia as a first-rate power and the enhanced prestige of its king and its army, however, was potentially damaging in the long run to Austria's influence in Germany.
Austria also now found itself at odds with new developments within the empire itself. Besides the rise of Prussia, Augustus III, though ineffective, could muster an army not only from Saxony but also from Poland, since he was King of Poland as well as Elector of Saxony. Bavaria's growing power and independence was also apparent, as it asserted more control over the deployment of its army and managed to disengage from the war at will. Most importantly, with the now belligerent Hanover united in personal union under George III of Great Britain, it amassed considerable power and even brought Britain into future conflicts. This power dynamic was important to the future, and to the later conflicts, of the Reich. The war also proved that Maria Theresa's reforms were still insufficient to compete with Prussia: unlike its enemy, Austria was almost bankrupt at the end of the war. Hence, she dedicated the next two decades to the consolidation of her administration.
Prussia emerged from the war as a great power whose importance could no longer be challenged. Frederick the Great's personal reputation was enormously enhanced, as his debt to fortune (Russia's volte-face after Elizabeth's death) and to British financial support were soon forgotten, while the memory of his energy and his military genius was strenuously kept alive. Though depicted as a key moment in Prussia's rise to greatness, the war weakened Prussia. Prussia's lands and population were devastated, though Frederick's extensive agrarian reforms and encouragement of immigration soon solved both these problems. Unfortunately for Prussia, its army had taken heavy losses (particularly the officer corps), and in the war's aftermath, Frederick could not afford to rebuild the Prussian army to what it had been before the war. In the War of the Bavarian Succession, the Prussians fought poorly despite being led by Frederick in person. During the war with France in 1792–95, the Prussian army did not fare well against revolutionary France, and in 1806, the Prussians were annihilated by the French at the Battle of Jena. It was only after 1806, when the Prussian government brought in reforms to recover from the disaster of Jena, that Prussia's rise to greatness later in the 19th century was realized. However, none of this had happened yet, and after 1763 various nations sent officers to Prussia to learn the secrets of Prussia's military power. After the Seven Years' War, Prussia became one of the most imitated powers in Europe.
Russia, on the other hand, made one great invisible gain from the war: the elimination of French influence in Poland. The First Partition of Poland (1772) was to be a Russo-Prussian transaction, with Austria only reluctantly involved and with France simply ignored. Though the war had ended in a draw, the performance of the Imperial Russian Army against Prussia had improved Russia's reputation as a factor in European politics, as many had not expected the Russians to hold their own against the Prussians in campaigns fought on Prussian soil. The American historian David Stone observed that Russian soldiers proved capable of going head-on against the Prussians, inflicting and taking one bloody volley after another "without flinching", and that although the quality of Russian generalship was quite variable, the Russians were never once decisively defeated in the war. The Russians defeated the Prussians several times, but they lacked the logistical capability to follow up their victories with lasting gains, and in this sense the salvation of the House of Hohenzollern owed more to Russian weakness in logistics than to Prussian strength on the battlefield. Still, the fact that the Russians proved capable of defeating in battle the army of a "first-rate" European power on its own soil, despite the often indifferent quality of their generals, improved Russia's standing in Europe. A lasting legacy of the war was that it awakened the Russians to their logistical weaknesses and led to major reforms of the Imperial Russian Army's quartermaster department. The supply system that allowed the Russians to advance into the Balkans during the war with the Ottomans in 1787–92, Marshal Alexander Suvorov to campaign effectively in Italy and Switzerland in 1798–99, and the Russians to fight across Germany and France in 1813–14 to take Paris was created directly in response to the logistical problems experienced in the Seven Years' War.
The British government was close to bankruptcy, and Britain now faced the delicate task of pacifying its new French-Canadian subjects as well as the many American Indian tribes who had supported France. In 1763, Pontiac's War broke out when a group of Indian tribes in the Great Lakes region and the Northwest (the modern American Midwest), said to have been led by the Ottawa chief Pontiac (whose role as the leader of the confederation seems to have been exaggerated by the British) and unhappy with the eclipse of French power, rebelled against British rule. The Indians had long established congenial and friendly relations with the French fur traders, and the Anglo-American fur traders who replaced the French had engaged in business practices that enraged the Indians, who complained about being cheated when they sold their furs. Moreover, the Indians feared that the coming of British rule might lead to white settlers displacing them from their land, whereas it was known that the French had come only as fur traders. Pontiac's War was a major conflict in which the British temporarily lost control of the Great Lakes-Northwest region to the Indians. By the middle of 1763, the only forts the British held in the region were Fort Detroit (modern Detroit, Michigan), Fort Niagara (modern Youngstown, New York) and Fort Pitt (modern Pittsburgh, Pennsylvania), with the rest all lost to the Indians. Only the British victory at the Battle of Bushy Run prevented a complete collapse of British power in the Great Lakes region. King George III's Proclamation of 1763, which forbade white settlement beyond the crest of the Appalachians, was intended to appease the Indians but led to considerable outrage in the Thirteen Colonies, whose inhabitants were eager to acquire native lands. The Quebec Act of 1774, similarly intended to win over the loyalty of French Canadians, also spurred resentment among American colonists. The Act protected the Catholic religion and the French language, which enraged the Americans, but the Québécois remained loyal to the British Crown during the American Revolution and did not rebel.
The war also brought to an end the "Old System" of alliances in Europe. In the years after the war, under the direction of Lord Sandwich, the British attempted to re-establish this system, but after Britain's surprising success against a coalition of great powers, European states such as Austria, the Dutch Republic, Sweden, Denmark-Norway, the Ottoman Empire and Russia now saw Britain as a greater threat than France and did not join it, while the Prussians were angered by what they considered a British betrayal in 1762. Consequently, when the American War of Independence turned into a global war between 1778 and 1783, Britain found itself opposed by a strong coalition of European powers and lacking any substantial ally.
- The novel The Luck of Barry Lyndon (1844) by William Makepeace Thackeray is set against the Seven Years' War. This is a quote about the war from the novel:
It would require a greater philosopher and historian than I am to explain the causes of the famous Seven Years' War in which Europe was engaged; and, indeed, its origin has always appeared to me to be so complicated, and the books written about it so amazingly hard to understand, that I have seldom been much wiser at the end of a chapter than at the beginning, and so shall not trouble my reader with any personal disquisitions concerning the matter.
- Stanley Kubrick's film Barry Lyndon (1975) is based on the Thackeray novel.
- The events in the early chapters of Voltaire's Candide are based on the Seven Years' War; according to Jean Starobinski ("Voltaire's Double-Barreled Musket", in Blessings in Disguise (California, 1993), p. 85), all the atrocities described in Chapter 3 are true to life. When Candide was written, Voltaire was opposed to militarism; the book's themes of disillusionment and suffering underscore this position.
- The board games Friedrich and, more recently, Prussia's Defiant Stand and Clash of Monarchs are based on the events of the Seven Years' War.
- The grand strategy wargame Rise of Prussia covers the European campaigns of the Seven Years' War.
- The novel The Last of the Mohicans (1826) by James Fenimore Cooper, and its subsequent adaptations, are set in the North American theatre of the Seven Years' War.
- The Partisan in War (1789), a treatise on light infantry tactics written by Colonel Andreas Emmerich, is based on his experiences in the Seven Years' War.
- The Seven Years' War is the central theme of G. E. Lessing's 1767 play Minna von Barnhelm or the Soldiers' Happiness.
- Numerous towns and other places now in the United States were named after Frederick the Great to commemorate the victorious conclusion of the war, including Frederick, Maryland, and King of Prussia, Pennsylvania.
- The fourth scenario of the second act in the RTS Age of Empires III is about this military conflict, with the player fighting alongside the French against the British.
- In Ubisoft's video game Assassin's Creed III, early missions in the main story/campaign centred around the Assassin/Templar Haytham Kenway are set during the North American campaigns of the French and Indian War. Additionally Assassin's Creed Rogue, released in 2014, is set within the timescale of the Seven Years' War.
- Several installments of Diana Gabaldon's fictional Lord John series (itself an offshoot of the Outlander series) describe a homosexual officer's experiences in Germany and France during the Seven Years' War. In particular, the short story "Lord John and the Succubus" occurs just before the Battle of Rossbach, and the novel Lord John and the Brotherhood of the Blade centers around the Battle of Krefeld.
- Battles of the Seven Years' War
- France in the Seven Years' War
- French India
- Great Britain in the Seven Years' War
- List of wars
- Rule of 1756
- Wars and battles involving Prussia
- World war
- "British History in depth: Was the American Revolution Inevitable?". BBC History. Retrieved 21 July 2018.
In 1763, Americans joyously celebrated the British victory in the Seven Years' War, revelling in their identity as Britons and jealously guarding their much-celebrated rights which they believed they possessed by virtue of membership in what they saw as the world's greatest empire.
- Project Seven Years War: Palatine Army.
- Kohn (2000), p. 417.
- Gregory Hanlon. "The Twilight Of A Military Tradition: Italian Aristocrats And European Conflicts, 1560-1800." Routledge: 1997. Page 322.
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Württemberg". Encyclopædia Britannica. 28 (11th ed.). Cambridge University Press. pp. 856–859.
- Project Seven Years War: Modenese Army.
- The Cambridge History of the British Empire. 1929. p. 126. Retrieved 16 December 2014.
- Riley, James C. (1986). The Seven Years War and the Old Regime in France: The Economic and Financial Toll. Princeton University Press, p. 78.
- Clodfelter (2017), p. 85.
- Speelman (2012), p. 524; of which 20,000 by the Russians.
- McLeod, A. B. (2012). British Naval Captains of the Seven Years' War: The View from the Quarterdeck. Boydell Press, p. 90.
- Speelman (2012), p. 524.
- "Disappointed, facing incredible resistance and losing everything in the field, the Spaniards abandoned the fight and left behind twenty-five thousand men [in Portugal] ..." In Henry, Isabelle – Dumouriez: Général de la Révolution (1739–1823), L'Harmattan, Paris, 2002, p. 87.
- Marley (2008), p. 440 gives figures of 3,800 killed or dead from sickness and 5,000 captured at the Siege of Havana.
- Elliott, J.H., Empires of the Atlantic World: Britain and Spain in America, 1492-1830. New Haven: Yale University Press 2006, p.292.
- Anderson (2007), p. xvii.
- Füssel (2010), p. 7.
- Churchill, Winston (1983). A History of the English Speaking Peoples (Reissue ed.). Dodd Mead. ISBN 978-0880294270.
- Bowen, HV (1998). War and British Society 1688–1815. Cambridge: Cambridge University Press. p. 7. ISBN 978-0-521-57645-1.
- Tombs, Robert and Isabelle. That Sweet Enemy: The French and the British from the Sun King to the Present. London: William Heinemann, 2006.
- Anderson (2007), p. 17.
- Anderson (2007), pp. 5–7.
- Anderson (2007), pp. 51–65.
- Anderson (2007), pp. 112–5.
- Anderson (2007), p. 114.
- Anderson (2006), p. 77.
- Anderson (2007), pp. 119–20.
- Szabo (2007), p. 2.
- Black (1994), pp. 38–52
- Black (1994), pp. 67–80
- Clark (2006), p. 209
- Creveld (1977), pp. 26–28
- Pritchard, James (2004). In Search of Empire: The French in the Americas, 1670–1730. Cambridge: Cambridge University Press. p. 356. ISBN 978-0-521-82742-3.
- Dull (2007), p. 14.
- Borneman, Walter R. (2007). The French and Indian War: Deciding the Fate of North America. New York City: HarperCollins. p. 80. ISBN 978-0060761844.
- Lee, Stephen J. (1984). Aspects of European History, 1494–1789. London: Routledge. p. 285. ISBN 978-0-416-37490-2.
- Till, Geoffrey (2006). Development of British Naval Thinking: Essays in Memory of Bryan Ranft. Abingdon: Routledge. p. 77. ISBN 978-0-714-65320-4.
- Schweizer (1989), pp. 15–6.
- Schweizer (1989), p. 106.
- Black, Jeremy (1999). Britain As A Military Power, 1688–1815. London: UCL Press. pp. 45–78. ISBN 978-1-85728-772-1.
- E.g., Simms, Brendan (2008). Three Victories and a Defeat: The Rise and Fall of the First British Empire. London: Penguin Books. pp. 64–6. ISBN 978-0140289848. OCLC 319213140.
- Vego, Milan N. (2003). Naval Strategy and Operations in Narrow Seas. London: Frank Cass. pp. 156–157. ISBN 978-0-7146-5389-1.
- Szabo (2007), pp. 17–8.
- Lawrence James (1997). The Rise and Fall of the British Empire. p. 71ff. ISBN 9780312169855.
- William R. Nester (2000). The Great Frontier War: Britain, France, and the Imperial Struggle for North America, 1607–1755. p. 115ff. ISBN 9780275967727.
- Anderson (2007), p. 129.
- Rodger (2006), pp. 265–7.
- "His Majesty's Declaration of War Against the French King. [17 May, 1756.]". T. Baskett and the Assigns of R. Baskett. 1 January 1756.
- Asprey (1986), p. 427.
- Asprey (1986), p. 428.
- Szabo (2007), pp. 56-8.
- Dull (2007), p. 71.
- Bled, Jean-Paul (2006). Friedrich der Grosse (in German). Düsseldorf: Artemis & Winkler. ISBN 978-3538072183.
- Asprey (1986), p. 465.
- Asprey (1986), Footnote on p. 441.
- Carter (1971), pp. 84–102.
- Marston (2001), p. 37.
- Luvaas (1999), p. 6.
- Marston (2001), p. 39.
- Asprey (1986), p. 454.
- Asprey (1986), p. 460.
- Marston (2001), pp. 40–1.
- Marston (2001), p. 22.
- Stone (2006), p. 70.
- Anderson (2007), p. 176.
- Marston (2001), p. 41.
- Asprey (1986), pp. 469–72.
- Asprey (1986), pp. 476–81.
- Marston (2001), p. 42.
- Anderson (2007), pp. 211–2.
- Anderson (2007), pp. 176–7.
- Asprey (1986), p. 473.
- Anderson (2007), pp. 215–6.
- Asprey (1986), p. 486.
- Asprey (1986), p. 467.
- Asprey (1986), p. 489.
- Szabo (2007), pp. 148–55.
- Szabo (2007), pp. 179–82.
- Asprey (1986), pp. 494–99.
- Szabo (2007), pp. 162–9.
- Marston (2001), p. 54.
- Asprey (1986), p. 500.
- Asprey (1986), pp. 501–6.
- Szabo (2007), pp. 195–202.
- Szabo (2007).
- Stone (2006), p. 74.
- Anderson (2007), p. 491.
- Redman (2014).
- Anderson (2007), p. 492.
- Stone (2006), p. 75.
- Fish 2003, p. 2
- Dumouriez, Charles François Du Périer (1797). An Account of Portugal. London: C. Law. p. 247, p. 254. See also García Arenas (2004), pp. 41, 73-4 (pdf file).
- The Royal Military Chronicle (1812), pp. 50-1. See also Dull (2009), p. 88.
- Terrage (1904), p. 151.
- According to C. R. Boxer in Descriptive List of the State Papers Portugal, 1661–1780, in the Public Record Office, London: 1724–1765, Vol II, Lisbon, Academia das Ciências de Lisboa, with the collaboration of the British Academy and the P.R.O., 1979, p. 415. Also according to the historian Fernando Dores Costa, 30 000 Franco-Spaniards were lost mostly from hunger and desertion. See Milícia e sociedade. Recrutamento in Nova História Militar de Portugal (Portuguese), vol. II, Círculo de Leitores, Lisboa, 2004, p. 341
- Sales, Ernesto Augusto-O Conde de Lippe em Portugal, Vol 2, Publicações de Comissão de História Militar, Minerva, 1936, p. 29
- Reflexiones Histórico-Militares que manifiestan los Motivos Porque se Mantiene Portugal Reino Independiente de España y Generalmente Desgraciadas Nuestras Empresas y que Lo Serán Mientras No se Tomen Otras Disposiciones (in Spanish), Borzas, 28 November 1772; cited by Jorge Cejudo López in Catálogo del archivo del conde de Campomanes, Fundación Universitaria Española, 1975, legajo (file) n. 30 Archived 14 July 2014 at the Wayback Machine/12.
- The Royal Military Chronicle (1812), pp. 52-3.
- Anderson (2007), p. 498.
- Mitford (2013), p. 242–243.
- Scott, Hamish M. (2001). The emergence of the Eastern powers. Cambridge University Press. ISBN 978-0521792691.
- Mahan (2011).
- Mahan (2011).
- Corbett (2011).
- Rodger (2006).
- Burkholder, Suzanne Hiles, " Seven Years' War" in Encyclopedia of Latin American History and Culture, vol. 5, pp. 103-104, New York: Charles Scribner's Sons 1996.
- Anderson (2007), p. 14.
- Anderson (2007), pp. 150–7.
- Anderson (2007), pp. 185–201.
- Dodge (1998), pp. 91–2.
- Anderson (2007), pp. 208–9.
- Anderson (2007), pp. 280–3.
- Anderson (2007), pp. 258–66.
- Anderson (2007), pp. 330–9.
- Anderson (2007), pp. 240–9.
- Anderson (2007), pp. 355–60.
- Anderson (2007), pp. 392–3.
- D. Peter MacLeod, "'Free and Open Roads': The Treaty of Kahnawake and the Control of Movement over the New York-Canadian Border during the Military Regime, 1760–1761," read at the Ottawa Legal History Group, 3 December 1992 (1992, 2001). Retrieved 31 January 2011.
- Virtual Vault: "Canadiana", Library and Archives Canada
- Ojer, Pablo- La Década Fundamental en la Controversia de Límites entre Venezuela y Colombia, 1881–1891 (in Spanish), Academia Nacional de la Historia, 1988, p. 292.
- United States Army Corps of Engineers- Report on Orinoco-Casiquiare-Negro Waterway. Venezuela-Colombia-Brazil, July 1943, Vol. I, 1943, p. 15.
- Southey, Robert – History of Brazil, part third, London, 1819, p. 584.
- Block, David – Mission Culture on the Upper Amazon: native Tradition, Jesuit enterprise and Secular Policy in Moxos, 1660–1880, University of Nebraska Press, 1994, p. 51.
- Marley (2008), p. 449-50.
- Bento, Cláudio Moreira- Brasil, conflitos externos 1500–1945 (electronic version), Academia de História Militar Terrestre do Brasil, chapter 5: As guerras no Sul 1763–77.
- Ricardo Lesser- Las Orígenes de la Argentina, Editorial Biblos, 2003, see chapter "El desastre”, see pp. 63–72.
- Bento, Cláudio Moreira- Rafael Pinto Bandeira in O Tuiuti, nr. 95, Academia de Historia Militar Terrestre do Brasil, 2013, pp. 3–18.
- "Ingeniería Militar durante la Colonia", Memoria chilena (in Spanish), retrieved 30 December 2015
- "Lugares estratégicos", Memoria chilena (in Spanish), retrieved 30 December 2015
- Ramos, V.A.; Aguirre-Urreta, B. (2009). Las Casuchas del Rey: un patrimonio temprano de la integración chileno-argentina (PDF). XII Congreso Geológico Chileno (in Spanish). Santiago.
- Peter Harrington, Plassey, 1757: Clive of India's Finest Hour (Praeger, 1994).
- Sen, S.N. (2006). History Modern India (Third ed.). Delhi, India: New Age International. p. 34. ISBN 978-8122417746.
- James L.A. Webb Jr, "The mid-eighteenth century gum Arabic trade and the British conquest of Saint-Louis du Senegal, 1758." Journal of Imperial and Commonwealth History 25#1 (1997): 37–58.
- Eccles, William John (2006). "Seven Years' War". The Canadian Encyclopedia. Retrieved 17 June 2006.
- E.g., Canada to Confederation p. 8: Barriers to Immigration Archived 26 March 2009 at the Wayback Machine, mentioning the mother country's image of New France as an "Arctic wasteland with wild animals and savage Indians".
- Szabo (2007), p. 432.
- Kennedy, Paul (1976). The Rise and Fall of British Naval Mastery (book) (new introduction ed.). London: Penguin Books. ISBN 978-0-684-14609-6.
- Eric Robson, "The Seven Years' War", in J O Lindsay, ed., The New Cambridge Modern History (1957) 7:465-86.
- Marston (2001), p. 90.
- Bled, Jean-Paul (2001). Marie-Thérèse d'Autriche (in French). Fayard. ISBN 978-2213609973.
- Stone (2006), pp. 70–1.
- Marston (2001), pp. 90–1.
- Marston (2002), pp. 84–5.
- Marston (2002), pp. 85–7.
- Marston (2002), p. 86.
- Marston (2002), p. 87.
- MacLeod, D. Peter (2008). Northern Armageddon: The Battle of the Plains of Abraham. Vancouver: Douglas & McIntyre. ISBN 9781553654124.
- A structure of alliances with European powers, in which Britain had formed grand coalitions against Bourbon ambitions in Europe.
- Gipson, Lawrence Henry (1950). "The American Revolution as an Aftermath of the Great War for the Empire, 1754–1763". Political Science Quarterly. 65 (1): 86–104. doi:10.2307/2144276. JSTOR 2144276.
- Thackeray (2001), p. 72.
- Anderson, Fred (2006). The War That Made America: A Short History of the French and Indian War. Penguin. ISBN 978-1101117750.
- Anderson, Fred (2007). Crucible of War: The Seven Years' War and the Fate of Empire in British North America, 1754–1766. Vintage - Random House. ISBN 978-0307425393.
- Asprey, Robert B. (1986). Frederick the Great: The Magnificent Enigma. New York City: Ticknor & Field. ISBN 978-0899193526. Popular biography.
- Baugh, Daniel. The Global Seven Years War, 1754–1763 (Pearson Press, 2011) 660 pp; online review in H-FRANCE;
- Black, Jeremy (1994). European Warfare, 1660–1815. London: UCL Press. ISBN 978-1-85728-172-9.
- Blanning, Tim. Frederick the Great: King of Prussia (2016). scholarly biography.
- Browning, Reed. "The Duke of Newcastle and the Financing of the Seven Years' War." Journal of Economic History 31#2 (1971): 344–377. JSTOR 2117049.
- Browning, Reed. The Duke of Newcastle (Yale University Press, 1975).
- Carter, Alice Clare (1971). The Dutch Republic in Europe in the Seven Years' War. MacMillan.
- Charters, Erica. Disease, War, and the Imperial State: The Welfare of the British Armed Forces During the Seven Years' War (University of Chicago Press, 2014).
- Clark, Christopher (2006). Iron Kingdom: The Rise and Downfall of Prussia, 1600–1947. Cambridge, MA: Belknap Press. ISBN 978-0-674-03196-8.
- Clodfelter, M. (2017). Warfare and Armed Conflicts: A Statistical Encyclopedia of Casualty and Other Figures, 1492–2015 (4th ed.). Jefferson, NC: McFarland & Company. ISBN 978-0786474707.
- Corbett, Julian S. (2011) . England in the Seven Years' War: A Study in Combined Strategy. (2 vols.). Pickle Partners. ISBN 9781908902436. Its focus is on naval history.
- Creveld, Martin van (1977). Supplying War: Logistics from Wallenstein to Patton. Cambridge: Cambridge University Press. ISBN 978-0-521-21730-9.
- Crouch, Christian Ayne. Nobility Lost: French and Canadian Martial Cultures, Indians, and the End of New France. Ithaca, NY: Cornell University Press, 2014.
- The Royal Military Chronicle. Vol. V. London: J. Davis. 1812.
- Dodge, Edward J. (1998). Relief is Greatly Wanted: the Battle of Fort William Henry. Bowie MD: Heritage Books. ISBN 978-0788409325. OCLC 39400729.
- Dorn, Walter L. Competition for Empire, 1740–1763 (1940) focus on diplomacy free to borrow
- Duffy, Christopher. Instrument of War: The Austrian Army in the Seven Years War (2000); By Force of Arms: The Austrian Army in the Seven Years War, Vol II (2008)
- Dull, Jonathan R. (2007). The French Navy and the Seven Years' War. University of Nebraska Press. ISBN 978-0803260245.
- Dull, Jonathan R. (2009). The Age of the Ship of the Line: the British and French navies, 1650–1851. University of Nebraska Press. ISBN 978-0803219304.
- Fish, Shirley When Britain ruled the Philippines, 1762–1764: the story of the 18th-century British invasion of the Philippines during the Seven Years' War. 1stBooks Library, 2003. ISBN 1-4107-1069-6, ISBN 978-1-4107-1069-7
- Fowler, William H. Empires at War: The Seven Years' War and the Struggle for North America. Vancouver: Douglas & McIntyre, 2005. ISBN 1-55365-096-4.
- Higgonet, Patrice Louis-René. "The Origins of the Seven Years' War." Journal of Modern History 40.1 (March 1968): 57–90. doi:10.1086/240165
- Kaplan, Herbert. Russia and the Outbreak of the Seven Years' War (U of California Press, 1968).
- Keay, John. The Honourable Company: A History of the English East India Company. Harper Collins, 1993.
- Kohn, George C. (2000). Seven Years War in Dictionary of Wars. Facts on File. ISBN 978-0816041572.
- Luvaas, Jay (1999). Frederick the Great on the Art of War. Boston: Da Capo. ISBN 978-0306809088.
- Mahan, Alexander J. (2011). Maria Theresa of Austria. Read Books. ISBN 978-1446545553.
- Marley, David F. (2008). Wars of the Americas: a chronology of armed conflict in the New World, 1492 to the present. Vol. II. ABC-CLIO. ISBN 978-1598841015.
- Marston, Daniel (2001). The Seven Years' War. Essential Histories. Osprey. ISBN 978-1579583439.
- Marston, Daniel (2002). The French and Indian War. Essential Histories. Osprey. ISBN 1841764566.
- McLynn, Frank. 1759: The Year Britain Became Master of the World. (Jonathan Cape, 2004). ISBN 0-224-06245-X.
- Middleton, Richard. Bells of Victory: The Pitt-Newcastle Ministry & the Conduct of the Seven Years' War (1985), 251pp.
- Mitford, Nancy (2013). Frederick the Great. New York City: New York Review Books. ISBN 978-1-59017-642-9.
- Nester, William R. The French and Indian War and the Conquest of New France (U of Oklahoma Press, 2014).
- Pocock, Tom. Battle for Empire: the very first World War 1756-1763 (1998).
- Redman, Herbert J. (2014). Frederick the Great and the Seven Years' War, 1756–1763. McFarland. ISBN 9780786476695.
- Robson, Martin. A History of the Royal Navy: The Seven Years War (IB Tauris, 2015).
- Rodger, N. A. M. (2006). Command of the Ocean: A Naval History of Britain 1649–1815. W.W. Norton. ISBN 978-0393328479.
- Schumann, Matt, and Karl W. Schweizer. The Seven Years War: A Transatlantic History. (Routledge, 2012).
- Schweizer, Karl W. (1989). England, Prussia, and the Seven Years War: Studies in Alliance Policies and Diplomacy. Lewiston NY: Edwin Mellen Press. ISBN 9780889464650.
- Smith, Digby George. Armies of the Seven Years' War: Commanders, Equipment, Uniforms and Strategies of the 'First World War' (2012).
- Speelman, P.J. (2012). Danley, M.H.; Speelman, P.J. (eds.). The Seven Years' War: Global Views. Brill. ISBN 978-90-04-23408-6.
- Stone, David (2006). A Military History of Russia: From Ivan the Terrible to the War in Chechnya. New York City: Praeger. ISBN 978-0275985028.
- Syrett, David. Shipping and Military Power in the Seven Year War, 1756-1763: The Sails of Victory (2005)
- Szabo, Franz A.J. (2007). The Seven Years' War in Europe 1756–1763. Routledge. ISBN 978-0582292727.
- Füssel, Marian (2010). Der Siebenjährige Krieg. Ein Weltkrieg im 18. Jahrhundert (in German). München: Beck. ISBN 978-3-406-60695-3.
- García Arenas, Mar (2004). "El periplo ibérico del general Dumouriez: Una aproximación a las relaciones diplomáticas hispano-portuguesas (1765–1767)" (PDF). Revista de Historia Moderna (in Spanish). Universidad de Alicante. 22: 403–30. doi:10.14198/RHM2004.22.14. ISSN 0212-5862.
- Terrage, Marc de Villiers du (1904). Les dernières années de la Louisiane française (in French). E. Guilmoto.
- de Ligne, Prince Charles-Joseph, Mon Journal de la guerre de Sept Ans. Textes inédits introduits, établis et annotés par Jeroom Vercruysse et Bruno Colson (Paris, Editions Honoré Champion, 2008) (L'Âge des Lumières, 44).
May 18, 2021
The Paris Agreement is a legally binding international treaty on climate change, signed by 197 countries, with the goal of limiting global warming to well below 2, preferably to 1.5, degrees Celsius relative to pre-industrial levels. To achieve this, countries aim to reach peak greenhouse gas emissions as soon as possible, and as such the energy transition away from fossil fuels towards renewable forms of energy is rising to the top of government agendas. In keeping with this, green energy has also been a key feature of post-Covid, multi-billion-dollar government spending plans, including the European Green Deal and Joe Biden's infrastructure package. With capital and resources being channelled towards this pressing issue, a number of alternative energy sources have emerged. In this piece we will focus on hydrogen.
The Chemistry of Hydrogen
Hydrogen is the first and lightest element in the periodic table and the most abundant chemical substance in the universe. At standard temperature and pressure, hydrogen is a colourless, odourless, tasteless, non-toxic, non-metallic and highly combustible diatomic gas (a molecule composed of only two atoms) with the molecular formula H₂.
Most hydrogen on the Earth's surface is bound together with other types of atoms in molecules that form various substances such as water (H₂O) and methane (CH₄) or organic compounds. Before it can be used as a fuel, hydrogen must first be extracted from these substances and then contained, usually in highly compressed or liquid form.
In relation to mass, hydrogen has the highest energy density of all common fuels. By way of comparison, one kilogram of hydrogen contains almost as much energy as three kilograms of petrol (see the energy density figures in the notes at the end of this piece); by volume, however, even liquid hydrogen carries far less energy than petrol. There are also some risks: when liquid hydrogen is stored in tanks, it is relatively safe, but if it escapes, it is highly flammable in the presence of an oxidizer (oxygen is a good one) and burns more easily than gasoline.
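To make the comparison concrete, here is a small arithmetic check using the energy density figures given in the notes at the end of this piece (33.33 kWh/kg for hydrogen, about 12 kWh/kg for petrol); the density values below are rough assumptions added purely for illustration:

```python
# Gravimetric vs volumetric energy density.
# Energy-per-kilogram figures are taken from the notes to this article;
# the density figures below are rough assumptions for illustration only.

H2_KWH_PER_KG = 33.33
PETROL_KWH_PER_KG = 12.0

print(H2_KWH_PER_KG / PETROL_KWH_PER_KG)    # ~2.8: per kilogram, hydrogen holds almost 3x the energy

H2_LIQUID_KG_PER_L = 0.071   # assumed density of liquid hydrogen
PETROL_KG_PER_L = 0.75       # assumed density of petrol

print(H2_KWH_PER_KG * H2_LIQUID_KG_PER_L)   # ~2.4 kWh per litre of liquid hydrogen
print(PETROL_KWH_PER_KG * PETROL_KG_PER_L)  # ~9.0 kWh per litre of petrol
```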
As a fuel, hydrogen can be combusted directly (it only emits water when burned and can be made without releasing CO₂), or it can be used in a fuel cell to produce electricity.
In contrast to natural gas, crude oil and coal, which are primary energy sources, hydrogen is considered a secondary energy source, as it does not occur naturally in its pure form; only once it has been isolated can hydrogen serve as a fuel. Essentially, it needs to be produced before being consumed.
Hydrogen can be extracted via several means, including the steam reforming of natural gas (normally involving the use of fossil fuels) and the electrolysis of water (requiring electricity).
As of today, over 90% of the world's hydrogen is produced using the steam methane reforming (SMR) process. In this reaction, natural gas (CH₄) is reacted with steam (H₂O) at an elevated temperature to produce carbon monoxide (CO) and hydrogen (3 H₂). As such, whether hydrogen is "clean" depends on how that natural gas is sourced and subsequently processed.
Hydrogen can also be produced by the electrolysis of water, but this is generally a costlier approach. When electricity is used to produce hydrogen, thermodynamics dictate that you will always produce less energy than you consume. In other words, the energy input in electricity will be greater than the energy output of hydrogen. Nevertheless, if a cheap source of electricity is available — such as excess grid electricity at certain times of the day — it may be economical to produce hydrogen in this way.
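For reference, the overall chemistry behind the two production routes just described can be summarised as follows; this is a simplified sketch, and in industrial practice the reforming step is usually followed by a water-gas shift reaction that converts the carbon monoxide into CO₂ and additional hydrogen:

$$\mathrm{CH_4 + H_2O \rightarrow CO + 3\,H_2} \quad \text{(steam methane reforming)}$$

$$\mathrm{CO + H_2O \rightarrow CO_2 + H_2} \quad \text{(water-gas shift)}$$

$$\mathrm{2\,H_2O \rightarrow 2\,H_2 + O_2} \quad \text{(electrolysis of water)}$$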
Industry players use colour codes to distinguish between the different types of technology used to produce hydrogen.
- Green hydrogen is hydrogen made by using clean electricity from renewable energy to electrolyse water (H₂O), separating the hydrogen atoms from their molecular twin, oxygen. This is a costlier, infrastructure-intensive process, and while green hydrogen is the ultimate aim, it currently accounts for only about 1% of the global hydrogen supply.
- Blue hydrogen is produced mainly from natural gas via steam reforming, which brings together natural gas and heated water in the form of steam. The output is hydrogen and carbon dioxide, with the latter then caught through industrial Carbon Capture, Utilisation and Storage projects.
- Grey or black hydrogen is essentially any hydrogen created from fossil fuels without capturing the greenhouse gases made in the process.
Green hydrogen is the ultimate goal; however, there are still challenges to its use: there are practical issues relating to storage and transportation, the production process requires a lot of electricity and expensive infrastructure, and its production cost is significantly greater than that of grey alternatives (currently 2.5 to 5 times costlier). Closing this price gap will require financial support on both the production side and the demand side in order for economies-of-scale benefits to kick in, making green hydrogen competitive.
As of today, public aid and support mechanisms are seeking to improve the competitiveness of hydrogen, but more needs to be done. For example, under the EU's carbon credits scheme (European Emission Allowances, or EUAs), whereby companies need to pay for permits to emit CO₂ over a certain threshold, the price is currently around €50/ton. EUA pricing will need to rise to a level that encourages fuel switching to green hydrogen and away from fossil-fuel energy sources, such as natural gas and petroleum products, and away from grey hydrogen. According to BNP Paribas research estimates, EU carbon prices will need to reach at least €79/ton by 2030 in order for green hydrogen to become competitive enough for big industrial users to ditch the version made from fossil fuels.
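The logic behind such a break-even estimate can be sketched in a few lines. All the input numbers below are hypothetical assumptions chosen for illustration, not figures from BNP Paribas or the EU ETS; the formula, not the values, is the point.

```python
# Hypothetical break-even carbon price for switching from grey to green hydrogen.
# breakeven (EUR / tonne CO2) = cost gap (EUR / kg H2) / CO2 intensity (kg CO2 / kg H2) * 1000
# All inputs are illustrative assumptions only.

green_cost_eur_per_kg = 2.3    # assumed future production cost of green hydrogen
grey_cost_eur_per_kg = 1.6     # assumed production cost of grey hydrogen
co2_kg_per_kg_grey_h2 = 9.0    # assumed CO2 intensity of grey hydrogen

cost_gap = green_cost_eur_per_kg - grey_cost_eur_per_kg
breakeven_eur_per_tonne = cost_gap / co2_kg_per_kg_grey_h2 * 1000
print(f"Break-even carbon price: ~EUR {breakeven_eur_per_tonne:.0f} per tonne of CO2")
# With these assumed inputs the result is roughly EUR 78/tonne; today's wider cost gap
# (2.5 to 5 times, as noted above) would imply a proportionally higher break-even price.
```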
Mass production will inevitably lower production costs in the future, but producing, compressing, and transporting low-carbon hydrogen can induce energy losses. Even if there is public aid and mechanisms to improve its competitiveness (CO₂ pricing), shifting towards hydrogen will come at a higher collective cost than today’s fossil-based energy mix.
A key technology associated with hydrogen is the fuel cell. This device converts the chemical energy contained in hydrogen into electricity, with water and heat as by-products. It must be noted that the double conversion mechanism (production of hydrogen, then transformation of hydrogen back into electricity) embedded in the fuel cell route means greater energy losses.
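The IEA footnote reproduced at the end of this piece notes that the delivered energy of such a chain can fall below 30% of the initial electricity input. A minimal sketch of how those losses compound, using purely assumed stage efficiencies, looks like this:

```python
# Round-trip "power-to-hydrogen-to-power" efficiency.
# Stage efficiencies below are assumed, round numbers for illustration only.

electrolysis_eff = 0.70   # electricity -> hydrogen
logistics_eff = 0.80      # compression, shipping and storage
fuel_cell_eff = 0.50      # hydrogen -> electricity in a fuel cell

round_trip = electrolysis_eff * logistics_eff * fuel_cell_eff
print(f"Delivered electricity per unit of input electricity: {round_trip:.0%}")
# ~28% with these assumptions, consistent with the "below 30%" figure cited in the notes.
```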
Fuel cells are being explored for use in passenger cars as propulsion systems; however, interest from private carmakers has been limited. Hydrogen-powered cars face competition from electric cars which, as of now, are more efficient. Only Toyota, Hyundai and Honda are still investing in fuel cell electric vehicles, with most automakers betting on battery electric vehicles for the passenger market.
Fuel cell electric vehicles also fall victim to a chicken-and-egg problem. As you cannot refuel at home, a massive infrastructure transformation would be required. But this doesn't mean that we should throw the baby out with the bathwater.
Where hydrogen has a role to play
A quietening down around hydrogen passenger cars says nothing about the prospects for trucks, buses, trains, ships and airplanes. For these forms of transport, expectations for hydrogen as a sustainable alternative fuel are still high. For longer-distance travel, hydrogen's greater energy density per unit of mass becomes more attractive. Hydrogen compressed to 700 atmospheres contains between two and five times more usable energy per litre than a lithium-ion battery. If it is liquefied (which requires more complex technology), that figure increases further. Worth mentioning is the recent announcement by Daimler Truck AG and Volvo Group that they plan to jointly manufacture hydrogen fuel cells for trucks in Europe starting in 2025 (while calling on EU policymakers to boost incentives). According to Daimler and Volvo, electric batteries will work for short-haul trucks, but they see hydrogen fuel cells playing a major role for heavier loads and longer distances.
The double constraint of being economically competitive and environmentally friendly means that hydrogen will probably remain a luxury energy vector for certain mobility applications, due to its structural limits. Low-carbon hydrogen should be dedicated to high-emitting, hard-to-abate industrial sectors that cannot be electrified, like steel and ammonia production.
The scaling up of clean hydrogen for heavy transportation, freight and industry is a major pillar of the EU’s commitment to become carbon neutral by 2050. The strategy contains an ambitious target of 40 Gigawatts of European electrolyser capacity to produce up to 10 million tonnes of green hydrogen by 2030 in the EU.
At the same time, there is huge potential for innovation in the space of hydrogen electrolysers and fuel cell technologies. Economies of scale and expanded production will be crucial in bringing down prices for low-carbon hydrogen, as will continued low prices for electricity from renewable energy.
A key advantage of hydrogen is that it could help balance the use of variable renewables to generate electricity. Solar panels, wind turbines and hydropower are all highly dependent on the weather, making their output intermittent. With renewable energy sources being variable, stored hydrogen could deliver backup power when there is little wind or sun; conversely, when facing a surplus of green electricity, hydrogen could conveniently be produced via electrolysis.
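A toy dispatch rule makes the balancing idea concrete: when renewable generation exceeds demand, the surplus feeds an electrolyser; when it falls short, stored hydrogen is converted back to electricity. The generation, demand and efficiency numbers below are invented for illustration only.

```python
# Toy hourly dispatch of a renewables + hydrogen-storage system (all numbers assumed).

ELECTROLYSER_EFF = 0.70   # electricity -> hydrogen
FUEL_CELL_EFF = 0.50      # hydrogen -> electricity

hydrogen_store_kwh = 0.0  # energy currently held as hydrogen

hourly_generation = [120, 80, 40, 150, 60]   # hypothetical renewable output, kWh
hourly_demand = [100, 100, 100, 100, 100]    # hypothetical demand, kWh

for gen, demand in zip(hourly_generation, hourly_demand):
    if gen >= demand:
        # surplus electricity is turned into hydrogen
        hydrogen_store_kwh += (gen - demand) * ELECTROLYSER_EFF
    else:
        # the shortfall is covered, as far as possible, by the fuel cell
        deliverable = hydrogen_store_kwh * FUEL_CELL_EFF
        used = min(demand - gen, deliverable)
        hydrogen_store_kwh -= used / FUEL_CELL_EFF
        # any remaining shortfall would need another backup source

print(f"Hydrogen left in storage: {hydrogen_store_kwh:.1f} kWh")
```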
Heaven on earth
Scotland’s Orkney Islands have an over-abundance of renewable electricity. The windy islands at the Northeast corner of Scotland had been generating wind energy with large wind turbines for several years, but since 2013 they have been producing more energy than needed and the power grid connection to the mainland was too weak to send it elsewhere. As such, they were forced to shut off large turbines on occasions to avoid damaging the power lines. Thanks to this reliable source of clean electricity, the Orkney Islands were an ideal place to start producing hydrogen via electrolysis. The hydrogen produced acts as an energy storage medium which can be used at a later date to produce heat, power, fuel for use as low carbon transport or for any other purpose. The hydrogen produced can be used for heating buildings and vessels in Kirkwall harbour, as well as for fuelling several hydrogen vehicles on the Orkney Islands. Work is underway on a zero-emission hydrogen-powered flight from an Orkney airfield, as well as investigation on the feasibility of producing local gin using hydrogen as a fuel.
The Orkney Islands’ learning experiences in the sphere of hydrogen inspired hydrogen valley projects, like for instance, the HEAVENN (H₂ Energy Applications in Valley Environments for Northern Netherlands) project in the Dutch city of Groningen. A hydrogen valley is a geographical area where several hydrogen applications are combined into an integrated ecosystem that covers the entire value chain: production, storage, distribution and final use. They make particular sense, both commercially and environmentally, in industrial areas which both produce and consume large amounts of energy in a small amount of space. As of now, the goal is to make green hydrogen the go-to type of hydrogen when it comes to production and consumption.
Hydrogen’s role in the energy transition
“After many false starts, hydrogen power might now bear fruit, but it will fill in the gaps, rather than dominating the economy”. – The Economist
Hydrogen has certain advantages: it is abundant, it can replace certain uses of fossil fuels and it can be complementary to the energy transition by helping to balance the use of variable renewables to generate electricity. Certain characteristics of hydrogen production make it a solution that is probably more suited to certain uses than others from an industrial, financial and ecological point of view, for example in energy-intensive industries or for longer-distance travel.
On the other hand, deploying green hydrogen on a large scale would require massive amounts of renewable energy, which in turn would require substantial investment in infrastructure. As such, the case for zero-carbon hydrogen mobility on a grand scale is far from a done deal, even if it is currently attracting significant R&D investment from some major companies. Scaling up is, as usual, the main trigger to make it economically viable, but demand from end-users will also be key.
Ultimately, we believe that ambitions to reach carbon neutrality will reduce the costs of hydrogen and that demand will rise. Concrete commitments and investments from significant players (EU, the US and Chinese authorities, industrial leaders such as Air Liquide, Daimler, Siemens, Linde, etc.) are also at play to make hydrogen a credible solution in the long term.
However, we advocate that investors consider hydrogen as part of a broader mix of solutions that seek exposure to the energy transition. Indeed, proponents of the hydrogen economy inside the investment community must be careful not to succumb to marketing myopia. Decarbonizing our economy is the objective: hydrogen is part of the means to that end, but it is not the end in itself. And while governments have started to support the growth of the low-carbon hydrogen market, just as they did for renewables, a successful decarbonization path cannot rely solely on renewable electricity, and the same is true for hydrogen.
The growth of hydrogen as an alternative source of energy offers opportunities for those investors that are able to deduce where the financial rewards will be greatest. Tracking down tomorrow's winners is not always an easy feat in a space where there is a limited number of pure players and a handful of early-stage small companies. As such, diversification is recommended, while the nascent nature of the hydrogen industry could mean that existing investment vehicles exhibit volatility levels that will not suit all investors.
To conclude on a positive note: if resilience to preserve the common good is our priority, we need pioneers and people who are willing to take risks. When it comes to the energy transition, we have no guarantee that it is going to work perfectly, but in order to progress we need to get out of our comfort zones and search for alternative and complementary solutions with enthusiasm. While it is not a silver bullet, hydrogen does have promising advantages: it is abundant, it makes it possible to advantageously replace certain uses of fossil fuels, and it can complement the transition to renewable sources of electricity production.
Hydrogen energy density (in electrical terms) is equal to 33.33 kWh/kg, while methane is 13.9 kWh/kg and petrol is 12 kWh/kg.
From the International Energy Agency: The Future of Hydrogen – Seizing today's opportunities, report prepared by the IEA for the G20, Japan (June 2019).
“All energy carriers, including fossil fuels, encounter efficiency losses each time they are produced, converted or used. In the case of hydrogen, these losses can accumulate across different steps in the value chain. After converting electricity to hydrogen, shipping it and storing it, then converting it back to electricity in a fuel cell, the delivered energy can be below 30% of what was in the initial electricity input. This makes hydrogen more ‘expensive’ than electricity or the natural gas used to produce it. It also makes a case for minimizing the number of conversions between energy carriers in any value chain. That said, in the absence of constraints to energy supply, and as long as CO2 emissions are valued, efficiency can be largely a matter of economics, to be considered at the level of the whole value chain.”
Metal hydrides are also in development, with the idea of storing hydrogen at lower pressures in small spaces, allowing the use of smaller tanks operating at lower pressure and temperature (via GKN Powder Metallurgy, acquired by Melrose Industries in 2018). Furthermore, energy storage density techniques are under development via a joint agreement between Daimler Truck and Linde.
Author: Group Investment Office
Dissolved / In Solution
These terms refer to a homogenous mixture of two fluids – in this case oil and water – implying that the individual water molecules are discrete and mixed with the oil molecules. The water is in solution. The sample cannot be separated by allowing the solution to stand at a given temperature. The fluid is clear.
Free Water
This describes the condition in which a fluid is saturated and past the point where water is in solution. If more water is added to the oil, the water sinks to the bottom and the oil rises to the top. The visible horizontal line at the boundary between the two elements is called the interface.
Another example of free water is emulsions. They form when enough mechanical agitation acts on the fluid so that the free water forms a cloudy mixture of water and hydrocarbons. The mechanical shearing action creates very small water droplets which have too much surface tension to join and form an interface. This is still free water as it is not in solution, but it does not create an interface boundary, causing a visible cloud or haze instead.
Saturation / Saturation Point
At this point the fluid carries as much water in the dissolved state as it possibly can at a given temperature, and the saturation level is 100%. If any more water were added, a free water condition would result, which would be the beginning of an emulsion or interface. When the saturation point is given, a corresponding temperature is also given, because saturation varies with temperature.
Saturation Level / Percent Saturation
This is the degree of saturation which indicates what percent of maximum possible water in a dissolved state is in the oil. A reading of 0% would indicate oil free of water, while a reading of 100% would indicate oil that is saturated with water.
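A simple worked example with assumed numbers: if an oil can hold at most 200 ppm of dissolved water at its current temperature and actually contains 100 ppm, its saturation level is 50%. In general:

$$\text{Percent saturation} = \frac{\text{dissolved water content}}{\text{maximum dissolved water at the same temperature}} \times 100\%$$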
Water Vapor Pressure
This is the pressure exerted by water vapor. Water gives off vapor, consisting of molecules that have evaporated and are in a gaseous state. The presence of water in oil results in a water vapor pressure on the surface of the oil. This water vapor pressure depends on the water content, the type of oil (including additives and particles), and temperature. If the ambient water vapor pressure is higher than that of the oil, water moves into the oil. By contrast, if the ambient water vapor is lower, water evaporates out of the oil.
Saturated Water Vapor Pressure
When adding water to oil, the water vapor pressure increases until a maximum value. The vapor is then said to be saturated vapor and the pressure it exerts saturated water vapor pressure. In oil this is the case when a maximum amount of water is dissolved. | https://www.hydac-na.com/faq/glossary/ | 21 |
Some of Colorado's earliest visitors and settlers came for the Pike's Peak Gold Rush. Starting in late 1858, many men from other states and territories heard that they could find gold in the streams and rivers flowing through the Rocky Mountains. This was a decade after the first major American gold rush, to California. Some Colorado prospectors were in fact frustrated '49ers looking for a second chance. The Colorado prospectors helped found the first major mining district in the area that would later become Central City. Prior to being used for mining, these high-altitude zones were familiar and important places for Ute Indian bands, along with elk, antelope, moose, and marmots. After 1859 these districts began to fill quickly with crowds of men hoping for a lucky strike. Colorado gold rush pioneers soon found that most gold was in fact not easily scooped up from the water or picked up off the ground. The abundant gold and silver in the Rocky Mountains was locked deep underground and tightly bound to other rocks and minerals. Many frustrated prospectors decided to leave. Some stayed, but they had to make important changes. The new miners had to dig much deeper than the prospectors. They needed to borrow huge sums of money to buy expensive digging, refining, and transportation equipment. The miners soon needed railroads to move gold ore, and smelters to separate gold from other minerals that had less value. Eventually, companies supplanted individual miners. Subsequent silver mining booms followed this pattern as well.
As miners and owners created cities near the mines, often high up in the Rocky Mountains, new challenges arose. Native Americans were displaced, often as a result of violent pressure and treaty negotiations. Trees were quickly stripped from the nearby hillsides and demands for supplies and food drove prices higher and higher. Mines several hundred feet deep posed greater dangers to life and limb than surface prospecting. Smelters featured furnaces that heated gold or silver ore as high as 800 degrees Fahrenheit. Individual miners and collectives could not compete with corporate-run industrial mining. Coloradans had to adapt in new ways to these changes.
Eventually four major mining districts emerged in the state: Central City, Leadville, the San Juans, and Cripple Creek. In this roughly sixty-year period, gold and silver mining generated more than $1.1 billion in revenue. Not all of that wealth stayed in Colorado, as eastern and even European investors sought to profit from Colorado resources. This tremendous mining wealth did help create early millionaires in the state, such as Horace Tabor and Nathaniel Hill. Given the wealth accruing to a privileged few, mine workers at times contested their working conditions and wages in bitter strikes and protests. The industrial system developing around mining would have a profound impact on the state through World War One. After 1930 mining would never again produce such amazing wealth nor employ so many workers in Colorado. Many bustling mining towns became ghost towns, leaving crumbling frames and foundations across the mountains of Colorado. But the legacy of mining endures in many ways in the state.
As students will quickly learn from reviewing these sources, mining created most early white settlements in Colorado. Without gold and silver booms, the state would likely have developed more slowly and with far less industry. Ute bands along the western slope and southern Cheyenne and Arapaho could perhaps have remained and coexisted with more gradual white farm settlements along the plains. Instead, desire for the gold and silver deposits in the state created instant cities in Colorado mountains: Central City, Leadville, the San Juan area, and Cripple Creek. Settlement in Colorado did not proceed in a slow, western path along a frontier band from the Kansas border across to Utah. Precious metal and coal mining drew tens of thousands of Euro-American and later Mexican American migrants, along with immigrants from abroad, into the territory and then state. Rapidly swelling mining districts in the mountains in turn drove economic growth and railroad expansion across the state. Mining riches helped to build cities along the foothills such as Boulder, Denver, Colorado Springs, and Pueblo in turn. Colorado government also emerged at the same time as many mining communities. State leaders were regularly called upon to address various mining conflicts and create rules to guide those who built fortunes in these industries. Colorado School of Mines, one of the territory’s first institutions of higher learning, was founded in Golden in 1874.
Industrial mining also altered the Colorado environment dramatically. Mountainsides were stripped of forests; rock debris piled up around mine sites; smelters left behind pools of toxic chemicals and mounds of contaminated gravel. As recently as 2015, the abandoned Gold King Mine in the San Juan mountains leaked waste water containing lead, arsenic, cadmium and other pollutants into the Animas River. The cost of cleaning up mine waste was never factored into the profit-and-loss balance sheets of nineteenth-century mining and smelter businesses. Those costs are borne by Coloradans and the nation today.
Sources For Students:
Doc. 1: Colorado State Seal, 1876
Before getting into more specifics, it might help to remember how important mining was to early Colorado settlers. Here is the state’s official seal, created when Colorado became a state in 1876.
This official seal includes some interesting symbols. There are tools on the seal representing only one job though: the pick and hammer of an early miner. The mountains above these tools help situate the location for this job as well.
- Why would the founders of Colorado include these images on the seal?
- Why not include symbols from other jobs on the state seal?
- What other kinds of tools do you think miners in the 1800s used?
- “Nil sine Numine” is a Latin phrase meaning: “Nothing is possible without divine help.” Divine refers to God or gods. What might the founders have meant with that phrase? Did it have anything to do with mining?
- If our Colorado government leaders wanted to change this seal, what work or play symbols could they include today? What do you think the most important job or business is in Colorado today?
Doc. 2: Michigan Newspaper Report on Pike’s Peak Gold Rush, February 1859
News of gold discoveries in the Rocky Mountain area spread fast after the summer of 1858. By 1859 the Pike’s Pike Gold Rush was on. Here’s how a man who went to Colorado described what he saw.
There is just gold enough to excite a certain class of excitable persons to leave their homes and that is all. There are plenty of speculators laying out towns all through the territory, who sell shares to anyone they can at enormous profits. . . . When you hear persons talking of going to Pike’s Peak just tell them to stay at home, if they can make an honest living.
[Source: Hillsdale Standard (Michigan), February 15, 1859, p. 2]
Excite: motivate, get moving
Speculator: a person who guesses what might happen
Laying out towns: creating and mapping new streets
Shares: stocks in a company
Enormous: very large
Profits: money made from a sale
- In 1859 why did Pike’s Peak matter to newspaper readers in Michigan?
- What sort of person did the author think would go to Colorado?
- Did the author think that people could get rich in Colorado?
- Did the author think that gold mining was a good way to make a living?
Doc. 3: Horace Greeley described Central City, June 1859
As the Colorado Gold Rush began, several New Yorkers went west to see it for themselves. One newspaper owner, Horace Greeley, visited the area that would later become Central City. He wrote details of what he saw. Greeley, Colorado is named after him.
[T]he entire population of the valley—which cannot number less than four thousand including five white women and seven [Indian women] living with white men—sleep in tents . . . cooking and eating in the open air. I doubt that there is as yet a table or chair in these diggings. . . . The food, like that of the plains, is restricted to a few staples—pork, hot bread, beans and coffee. . . . [L]ess than half of the four or five thousand people now in this [valley] have been here a week; he who has been here three weeks is regarded as quite an old settler.

Diggings: place where miners dig for gold
Staples: just the basics
Regarded: thought of
[Source: Horace Greeley, An Overland Journey from New York to San Francisco in the Summer of 1859 (New York: C. M. Saxton, Barker, and Co., 1860), 122, 123]
- How did Greeley give us a sense that men especially were rushing into this mountain valley?
- Why didn’t the newcomers stop to build houses and stores and schools?
- What do you imagine would happen to folks in the area when winter starts?
- As an eyewitness, Horace Greeley makes a good primary source. Why?
Doc. 4: George White drawing of the area that became Central City, 1867.
Albert Richardson was a newspaperman like Horace Greeley [Doc. 3]. He too published an account of his trip to the Central City area that summer of 1859. Years later Richardson asked New York artist George White to create this picture based on what Richardson remembered from his visit to Colorado.
[Source: George White print made from a wood engraving that appeared in Albert D. Richardson, Beyond the Mississippi (Hartford: American Publishing, 1867).]
- The wooden slides in the picture were called sluices. How might those help prospectors find gold in streams or waterways?
- This image shows a scene similar to that Greeley wrote about in document 3. What differences do you notice between Greeley’s description and this image?
- There appears to be one woman in the picture. How many men?
- What would a community be like if it were mostly men but only a few women and children?
- This picture was drawn by an artist who didn’t actually visit Colorado. Can we trust that the artist pictured exactly how the gold rush looked? Why or why not?
Doc 5: Traveler Bayard Taylor described Central City in 1866
Bayard Taylor visited Colorado about seven years after the gold rush started. In his description below, he focused on different changes to the Rocky Mountains:
[Trees have] been wholly cut away….The great, awkwardly rounded mountains are cut up and down by the lines of paying "lodes," and are pitted all over by the holes and heaps of rocks made either by prospectors or to secure claims. Nature seems to be suffering from an attack of…smallpox. My experience in California taught me that gold-mining utterly ruins the appearance of a country….[This] hideous slashing, tearing, and turning upside down is the surest indication of mineral wealth.
[Source: Bayard Taylor, Colorado: A Summer Trip (New York: Putnam and Son, 1867), 56.]
Lode: a collection of metal in the earth
Secure claims: make sure that a miner owns a specific spot
Smallpox: a nasty sickness like chicken pox that leaves scars behind
Hideous: horrible and ugly
Mineral: a natural substance in the earth
- Taylor talks about changes to the land or the environment. What examples did you find of these changes?
- Smallpox was a nasty disease that created painful, red sores all over human bodies. Why would Taylor say that the land looked like it had smallpox?
- How did Taylor connect “slashing” and “tearing” to “wealth” or money?
- Was anyone fixing the land (replanting trees, filling holes, cleaning streams) during the Gold Rush? Why or why not?
Doc. 6: Photograph of Central City, 1864
The camping scenes described by Greeley and Richardson (Docs. 3 and 4) had disappeared when this photo of Central City was taken in 1864. This was where Clara Brown (in Chapter 2) lived.
[Source: Denver Public Library, “Central City, 1864,” Call # L-609 http://digital.denverlibrary.org/cdm/singleitem/collection/p15330coll22/id/75009/rec/17]
- What kinds of changes do you notice between the drawing in document 4, from 1859, and this photo taken five years later?
- Before the gold rush, this Central City valley was densely wooded with pine trees, some eighty-feet tall. Where did those trees go?
- Why do all the stores and houses face each other? Why not spread out over the valley?
- If all these buildings were made of wood, what could happen if a fire started? Remember what happened to Clara Brown? (Hint: check the previous chapter.)
Doc. 7: Photograph of six miners in Central City, 1889.
By the 1870s, mines were dug deeper and deeper into the Rocky Mountains of Colorado. Prospectors could no longer pan for gold successfully. Instead, miners worked for large businesses that built mechanical hoists like the one in the picture below. A hoist was a kind of elevator, powered by a coal furnace and steam engine. These men were about to descend for work below. The hoist operator who controlled the miners' cage is in the background to the left.
[Source: Denver Public Library, “Saratoga Mines, Central City,” Call # X-61094. http://digital.denverlibrary.org/cdm/singleitem/collection/p15330coll22/id/37944/rec/1
- Here six men are going deep underground into tunnels to dig for gold. How is that different from the drawing in document 4? What has changed?
- At the bottom of the picture, you can see small metal rails. What might those be used for?
- The hoist or elevator was powered by coal, but there was not much coal in the ground around this mine. How might mine owners move coal from mines south of Pueblo to Central City?
- These miners were paid about $3 per day to work for 9 or 10 hours underground. They also faced the danger of mine cave-ins or explosions in tunnels. It was very unlikely they would become millionaires. Why would they do this work anyway?
Doc. 8: Photograph of the smelting process, 1900
Much of the gold and silver dug out of the mountains in Colorado was stuck inside rock called ore. Miners had to separate the gold and silver from the ore. Basically they would crush the ore, heat it to high temperatures, and add chemicals like mercury to release the gold and silver. This work was done inside stamp mills and smelters. Here is a picture of the inside of a smelter used to separate copper from surrounding minerals in rock:
[Source: Denver Public Library, “Converters,” Call # X-60601: http://digital.denverlibrary.org/cdm/singleitem/collection/p15330coll22/id/35535/rec/155]
- The men in the picture are also involved in mining work, though they’re obviously not in an underground tunnel. They are heating metal ore in order to separate copper from the other minerals. Check online to find the melting point (temperature) of copper, silver, and gold.
- What dangers might this kind of work have?
- Railroad cars likely shipped the ore to this smelter. Coal powered steam engines moved those cars. What moves the cart on rails in the picture?
- Smelter workers like these men received about $1.80 to $2.50 per day for ten hours of work in 1900. Why were they paid less than the miners underground, do you think?
Doc. 9: Brunot Agreement, 1873
White prospectors discovered gold in the San Juan mountain area of southwestern Colorado in the 1860s and early 1870s. Ute Indian bands, however, lived in this area. The U.S. government had earlier made two treaties with Ute leaders that recognized Ute control of the San Juan Mountains. But pressure from Euro-American miners and settlers led the U.S. government to make new treaties with the Utes in 1873. U.S. agent Felix Brunot made this agreement with Ute leader Chief Ouray. Soon afterward a silver and gold mining boom began in the San Juan Mountains.
- [T]he Ute Nation hereby relinquish to the United States all . . . claim . . . to the following [parts] of the [existing Ute] reservation: [land that later became the Colorado counties of Dolores, La Plata, Hinsdale, Ouray, San Juan, Montezuma, and San Miguel]
Hereby: with this agreement
Relinquish: give up or surrender
Negotiator: someone who helps make a deal between two different sides or groups
- The United States shall permit the Ute Indians to hunt upon [these] lands so long as . . . the Indians are at peace with the white people.
- The United States agrees [to pay] twenty-five thousand dollars [each year] . . . for the benefit of the Ute Indians . . . forever.
- Ouray, head-chief of the Ute Nation, he shall receive a salary of one thousand dollars [per year] for the term of ten years [for his help as negotiator].
[Source: United States Congress, Acts of the Forty-Third Congress—First Session, Chapter 136 (1874). Available online at: http://digital.library.okstate.edu/kappler/vol1/html_files/ses0151.html]
- Find the counties from Part One on a Colorado map. Is this a big or small area that the Utes relinquished with this treaty?
- What rights did Ute Indians have on these lands, once they were relinquished?
- In Part Three, the U.S. government gave the Ute Indians about $1.25 per acre of land. Did that seem a fair price? How could we find out what $1.25 could buy in 1870?
- Why do you think this treaty did not use the word “sell” or “sale” of land?
- Would other Ute leaders possibly be jealous or resentful of Ouray, since only he received $1,000 per year?
- How might those Ute Indians who disliked this treaty respond to all the new white miners moving into their lands?
Doc. 10: Newspaper list of Colorado Millionaires, 1892
Mining in Colorado between 1859 and 1929 created over $1 billion in wealth. By 1892, an Aspen newspaper reported that there were thirty-nine millionaires in Colorado. Many of these men had made their fortunes through mining or by supporting the development of mining in the state. Note that this list does not include those millionaires who later benefited from the Cripple Creek gold boom of the 1890s. The average mine worker earned $3.00 per day in 1892:
There are few who have any idea of the number of millionaires in Denver and in Colorado. One would hardly believe that there are thirty-three . . . in this city. . . . Besides there are six millionaires in the state outside of Denver. . . . [Horace] Tabor heads the list with several millions, all made in mining. Then comes [Nathaniel] Hill, whose money was made in the mining [and smelter] business. . . . David Moffatt accumulated his money in
Accumulate: earned or made
the banking and railroad business. . . . Henry Wolcott has dealt in mines. . . . Dennis Sullivan was a miner. . . . [James] Grant was a miner. . . E. Eildy is another mining and smelting man. Charles Kountze…is in mining. . . . William James accumulated his wealth in the same business. John Reithmann . . . is also in the mining business. Walter Cheesman made his money in mining. . . . Samuel Morgan was a miner. . . . Jerome Chaffee was . . . in the mining business. . . . Outside of Denver, J.J. Hagerman accumulated over a million in mining; [Nicholas] Creede…got his money in the same manner; H.M. Griffin of Georgetown was also a miner.”
[Source: Aspen Evening Chronicle (October 5, 1892)]
- How many names on the list of millionaires above made their money in mining? If there were 39 millionaires in Colorado in 1892, what percentage were mining related?
- Can we assume from this list that all miners became millionaires? Why or why not?
- A millionaire mine owner might have hundreds of miners working for him. What kinds of skills did mine owners need to manage that many workers?
- Many of these mine owners chose to live in Denver or Colorado Springs rather than close to the mines. What kinds of houses might they build in those cities? How might mine worker houses look different?
- Some of these millionaires made their fortunes in the San Juan Mountains, where Ute Indians formerly controlled the land. Remember document 9? Why didn’t Ute leaders dig mines and refine gold in that area instead of white miners?
- Have you heard of any names on this list? How are those names connected to buildings or places today?
Doc. 11: Modern Map of Historic Gold and Silver Districts in Colorado
In recent years geographers have created maps that tell history stories. Here is a recent map from the Colorado Geological Survey that includes historical and modern information.
[Source: Colorado Geological Survey: http://coloradogeologicalsurvey.org/mineral-resources/historic-mining-districts/]
- Look at the different colored areas on this map. The reddish-orange areas were mining districts in the past. In addition to having that common feature, in what other ways are these districts similar?
- The tan counties had gold or silver mining histories. The blue counties did not. How else are the blue counties similar to each other?
- The red lines on the map mark interstate highways like I-70 or I-25 or I-76. Those highways were built in the 1950s, 1960s, and 1970s. Were they somehow used to transport gold or silver in the 1800s? Why might the mapmaker include them here?
- If this modern map included railroads, which did exist in the 1800s, what places would those railroads likely connect?
How to Use These Sources:
Option 1: A short lesson could involve first a quick discussion of the state seal. It suggests the importance of mining for Colorado founders. Then students might examine the recent map of gold and silver mining districts (Doc. 11) to get a sense of a basic geographic difference between the mountain counties and those on the plains. Students could also brainstorm what other industries or symbols might have been appropriate for the state seal in 1876.
Then students could begin tracing the development and impact of mining in the state by comparing Documents 2–4. Documents 2, 3, and 4 provide some detail about the early gold camps, but they don’t always line up neatly together. The newspaper story (Doc. 2) suggests the dangers of gold fever. Document 3 features an eyewitness description of the Central City area. Document 4 was a drawing created years after a visit to Colorado’s gold camps by an artist who did not make the trip. This should raise some questions for students about reliability.
Students could also review the map from Chapter One titled “Route to the Colorado Gold Regions.” This can allow students to get a sense of how prospectors moved into Colorado in an age before paved roads or even railroads. All of these sources can help students describe the early days of the gold rush and the changes made in Colorado’s mountains. Students could create an early Colorado map in answer to this question: How might someone who was not a miner draw a map of the state?
Option 2: After completing the initial review of sources 1–4 in Option 1, students might begin to explore the changes from prospecting to industrial mining. Sources 5 and 6 can offer students a chance to consider the environmental changes that mining created in the Central City district. The photograph in Document 6 reveals what historians have called an “instant city.” They could discuss how few city services would exist and how remote this urban outpost was before railroad links arrived in the 1870s. Documents 7 and 8 feature different aspects of industrial mining and highlight how different mining work had become since the early prospecting days.
Option 3: Building on the previous options, students could explore some of the consequences mining had on Native Americans in Colorado as well as the economic changes to the state. Pressure from white settlers in Colorado led to Ute removal after 1879, which enabled miners to move into the San Juan Mountains. Document 9 includes some key provisions of the Brunot Treaty that the U.S. Government negotiated with Ute Chief Ouray. He was only one of several chiefs, however, and other Ute leaders bitterly resented the loss of Ute lands under the terms of this treaty. They were not fairly consulted by the U.S. government. Even Ouray himself was likely trying to make the best of a bad situation by agreeing to these treaty terms. History Colorado features a collection of documents about the Ute bands in Colorado that could connect with this chapter on Mining.
The newspaper story on millionaires (Doc. 10) can allow students to explore the links between mining and other industrial developments like railroad construction and coal mining. The historical map (Doc. 11) will require some careful study online to find the gold and silver districts and trace their links to Front Range urban centers like Denver or Colorado Springs. Mining did not affect every county in Colorado, but rather chiefly mountain ones. This meant early population concentrations in remote high-altitude communities emerged before most farming towns in the state. Settlement was uneven in the first decades of Colorado history.
- Duane Smith, The Trail of Gold and Silver (Boulder: University Press of Colorado, 2009) 259. | https://coloradohistorydetectives.pressbooks.com/chapter/chapter-3/ | 21
22 | What Is a Dollar Shortage?
A dollar shortage occurs when a country lacks a sufficient supply of U.S. dollars (USD) to manage its international trade effectively. This happens when a country has to pay out more USD for its imports than the USD it receives from its exports.
- A dollar shortage occurs when a country spends more U.S. dollars on imports than it receives on exports.
- Since the USD is used to price many goods globally, and is used in many international trade transactions, a dollar shortage can limit a country's ability to grow or trade effectively.
- Most countries try to maintain a reserve of currencies, such as U.S. dollars or other major currencies, which can be used to buy imported goods, manage the country's exchange rate, pay international debts, or make international transactions or investments.
Understanding a Dollar Shortage
Dollar shortages impact global trade because as the currency of the world’s largest economy, the USD acts as a peg for the value of other currencies. Even when two countries other than the United States engage in foreign trade, the status of the dollar as a reserve currency, with a reputation for stability, makes it widely used for pricing assets. For example, oil is typically priced in USD, even if two countries engaged in an import/export oil deal don't use the USD as their domestic currency.
A reserve currency is a large quantity of currency maintained by central banks and other major financial institutions to be used for investments, transactions, international debt obligations, or to influence their domestic exchange rate. Because the USD is the world’s most widely traded currency, many nations must hold assets in dollars to maintain a steadily growing economy and to trade effectively with other countries that use the currency.
USD is accumulated by a country when its balance of payments (BOP) shows it receives more dollars for exported goods compared to dollars spent on goods the nation imports. These countries are known as net exporters.
Countries are known as net importers when they do not accumulate sufficient dollars through their BOP. When the value of imported products and services is higher than the cost of those exported, a nation will be a net importer. If a dollar shortage becomes too severe, a country may ask for assistance from other countries or international organizations to maintain liquidity and improve its economy.
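To make the net exporter / net importer distinction concrete, here is a minimal sketch in Python. The figures are invented purely for illustration, and the "three months of imports" warning threshold is only a common rule-of-thumb for reserve adequacy, not part of any formal definition:

```python
# Hypothetical figures, in billions of USD, purely for illustration.
exports_usd = 40.0    # dollars earned from exports this year
imports_usd = 60.0    # dollars paid out for imports this year
reserves_usd = 30.0   # dollar reserves at the start of the year

net_flow = exports_usd - imports_usd   # positive -> net exporter, negative -> net importer
reserves_usd += net_flow               # a net importer must draw down its reserves

status = "net exporter" if net_flow > 0 else "net importer"
print(f"Net USD flow: {net_flow:+.1f} bn ({status})")
print(f"Reserves at year end: {reserves_usd:.1f} bn")

if reserves_usd < imports_usd * 0.25:  # roughly three months' worth of imports
    print("Warning: reserves are running low -- a dollar shortage may be developing.")
```

Repeated year after year with imports persistently above exports, the reserve figure keeps falling, which is essentially the dynamic described in the Qatar and Sudan examples below.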
The term dollar shortage was coined after World War II when the world’s economies were struggling to recover, yet stable currencies were in short supply. Part of the U.S.-sponsored Marshall Plan that began just after the war helped European countries rebuild their economies by providing enough USD to relieve that shortage.
Although the global economy today is not nearly as reliant on the United States for assistance, international organizations such as the International Monetary Fund (IMF) may assist nations facing dollar shortages.
Dollar Shortage Examples
Shortages of USD often begin when countries become more isolated from others, perhaps because of sanctions by other nations. These and other political issues can impact international trade and reduce demand for exported goods in exchange for dollars.
In 2017, Qatar suffered a dollar shortage when other Arab nations accused Qatari banks of supporting blacklisted terrorist groups. Although the country had already accumulated substantial financial reserves, it was forced to access more than $30 billion of those reserves to compensate for a net outflow of USD.
In another incident, in late 2017 into early 2018, a shortage of dollars in Sudan caused that nation’s currency to weaken, which resulted in rapidly climbing prices. Bread prices doubled in a week, causing protests and riots in a country whose economy was already subject to disruption caused in part by new economic reform measures.
At the start of 2019, the situation hadn't improved, with the Sudanese pound (SDP) falling to record lows as people were willing to spend more and more SDP in order to buy the more stable USD. | https://www.investopedia.com/terms/d/dollar-shortage.asp | 21 |
The Universal Declaration of Human Rights (UDHR) is a milestone document in the history of human rights. It was drafted by representatives with different legal and cultural backgrounds from all regions of the world in the aftermath of the Second World War, with the promise that a war of that kind would never happen again. The chairperson of the Commission on Human Rights and a group of drafters prepared the document with the aim of preventing another world war and of setting out how human beings should treat one another. The Declaration was proclaimed by the United Nations General Assembly in Paris on 10 December 1948 (General Assembly resolution 217A) as a common standard of achievement for all peoples and all nations. It sets out, for the first time, fundamental human rights to be universally protected. It was adopted in the midst of an especially bitter phase of the Cold War. Many people contributed to this remarkable achievement, but most observers believe that the UN Commission on Human Rights, which drafted the Declaration, would not have succeeded in reaching agreement without the leadership of the Commission's chair, Eleanor Roosevelt.
In 1946, Roosevelt was appointed as a delegate to the United Nations by President Harry Truman, who had succeeded to the White House after the death of Franklin Roosevelt in 1945. As head of the Human Rights Commission, she was instrumental in formulating the Universal Declaration of Human Rights, which she submitted to the United Nations General Assembly, declaring that it "may well become the international great charter for all men everywhere." Eleanor Roosevelt also observed that one should do what one feels is right, because people will criticize whatever a person does.
Eleanor Roosevelt played a major role as chair of the drafting subcommittee of the Commission on Human Rights. She discouraged pointless argument and pushed members to work late into the night to finish the work, which made her unpopular with some of them. Even more important was her realization that the declaration had to be adopted quickly and must not disintegrate into prolonged debate in which egos and national interests could derail the document. She urged that the legally binding covenants be separated from the declaration, since the declaration itself was a formal statement with no legal force; she supported the subcommittee responsible for drafting the covenants on civil and political rights and on economic and social rights, and she convinced the State Department not to oppose this approach.
Eleanor Roosevelt herself regarded her role in drafting and securing adoption of the Declaration as her greatest achievement, because it addressed social and economic rights alongside the broader rights of humankind. As she readily admitted, she had no legal training or expert knowledge of parliamentary procedure; what guided her through the preparation of the document were her skills as a political activist and advocate, together with an understanding of the meaning of freedom earned through deep engagement in her own country's struggles for social and economic justice, civil rights, and women's rights.
Eleanor had travelled the world and witnessed poverty, the mistreatment of people, and the violation of their rights, and she saw her position on the Commission as that of an ambassador for ordinary people, including the poor and the enslaved. She encouraged equal treatment within the Commission itself, wanting every member to be recognized and their ideas to be heard and used. For example, although she did not want to quibble over the wording of the UDHR, she agreed with the Indian delegate who insisted on a fully inclusive document that referred to people rather than using the word "man". She possessed not only a passionate commitment to human rights but also a hard-earned knowledge of the political and cultural obstacles to securing them in a divided world.
Mrs. Roosevelt soon found herself embroiled in bitter confrontations with the Russian delegation, which wanted a provision after each article stating that it was up to the state to determine whether a specific right was being observed. Mrs. Roosevelt disagreed, and she pushed for the inclusion of economic and social rights, including the rights to employment and equal pay, compulsory education, and health care (including for children born out of wedlock), arguing that economic rights were no less important than political rights and should be included in the declaration. Despite this move to meet them partway, the Russians were very unhappy; they had decided that the Universal Declaration would not be to their liking. They made accusations about racial discrimination and unemployment in the United States, and Mrs. Roosevelt offered to send a team of Russians to observe the racial problems facing Black Americans in the United States if she could do the same in the Soviet Union.
By 1948 the Universal Declaration had been finalized and framed as Mrs. Roosevelt wanted, in simple and understandable language. It drew heavily on the American Bill of Rights (which covers civil and political rights in detail, though it does not guarantee social and economic rights), on the British tradition of rights charters (later reflected in the 1998 Human Rights Act, which includes the right to life, the right to a fair trial and freedom of expression), and on the French Declaration of the Rights of Man.
It consists of a preamble, which condemns contempt for human rights and calls for a world in which human beings enjoy freedom of speech and belief and freedom from fear, and thirty articles setting forth fundamental rights and freedoms. Article 1 sets out the basic philosophy of the Declaration: all human beings are born free and equal in dignity and rights and should act towards one another in a spirit of brotherhood. Article 2 sets out the principle of non-discrimination in the enjoyment of human rights, stating that no one should be discriminated against on grounds of race, sex, language, religion or opinion.
Articles 3 through 21 lay down civil and political rights, including the right to life and security of person (Article 3); freedom from slavery and servitude in all their forms (Article 4); freedom from torture and from cruel or degrading treatment or punishment (Article 5); the right to an effective remedy for violations of one's rights (Article 8); freedom from arbitrary arrest, detention or exile (Article 9); the right to a fair and public hearing by an independent and impartial tribunal; freedom of thought and religion; freedom of expression; and the rights to peaceful assembly and association.
Articles 22 through 27 establish economic, social and cultural rights. Article 22 states that everyone, as a member of society, is entitled to social security through national effort and international cooperation. Article 23 provides the right to work without discrimination, to join trade unions, and to receive fair remuneration. Article 24 provides the right to rest and leisure, including reasonable limitation of working hours; in Kenya, for example, standard working hours run roughly from 8 to 5, with public holidays off. Article 25 provides the right to a standard of living adequate for health and well-being, including security in circumstances such as unemployment, and equal social protection for children born out of wedlock. Article 26 provides the right to education, with elementary education compulsory and parents having the right to choose the kind of education given to their children. Article 27 protects participation in cultural life and the moral and material interests of authors. Many of these rights have since been adopted by various countries: the South African constitution recognizes socio-economic rights, and the Kenyan constitution provides for them under Article 43, with Article 22 setting out how such rights can be enforced.
In conclusion, the UDHR, adopted in 1948, does not itself carry the force of international law. It does, however, set out international norms for how governments should treat their citizens and foreigners, stating that every person is equal regardless of sex, race, language, religion, opinion or politics. It has influenced constitutions created after the Second World War, including the Kenyan constitution of 2010, which adopted many of its articles and rights (Article 28 on human dignity, Article 43 on socio-economic rights, and group rights for women and marginalized groups in society), and it has inspired more than twenty legally binding human rights treaties created after World War II. Many governments look to the UDHR for guidance in setting ethical standards in their nations. Most importantly, the UDHR raised the consciousness of people around the world about the importance of human rights for all. Today it stands as the most widely recognized statement of human rights ever created, and it led to the formation of later international conventions such as the International Covenant on Economic, Social and Cultural Rights of 1966 and the Convention on the Elimination of All Forms of Discrimination against Women of 1979.
| https://superbgrade.com/essays/eleanor-roosevelt-universal-declaration-of-human-rights | 21
69 | Class 12 Economics is divided into two parts, Macroeconomics and Indian Economic Growth, as we all know. The Balance of Payments chapter is an important part of the macroeconomics section. This blog of Balance of Payments Class 12 study notes will give you a thorough description of what the balance of payments is, how it works, and how it is divided into the current and capital accounts. The chapter also explains the balance of trade and how it differs from the balance of payments, as well as autonomous items, accommodating items, and other topics. So if you wish to learn this chapter or do a quick revision of Balance of Payments Class 12, read this blog till the end.
Table of Contents
What is Balance of Payment or BOP?
A country’s balance of payments is a comprehensive record of all economic transactions between its citizens and residents of other countries for a particular time span.
Simply put, the balance of payment is a statement that records all transactions between companies, government bodies, and individuals from one country to another over a specified period of time. The statement contains all transaction information, providing the authority with a good picture of the fund flow.
Structure of Balance of Payment accounting
- Transactions are reported in the balance of payments accounts.
- Any foreign transaction that a country conducts results in an equal sum of credit and debit entries.
- The BOP accounts must always balance, i.e., total debits must equal total credits, because every international transaction is recorded with both a debit and a credit entry.
- To “balance” the BOP accounts, the balancing item Errors and omissions must be added.
- By convention, debit items are denoted by a minus sign and credit items by a plus sign.
- In the Balance Of Payment, the transactions can be categorised as:
- Goods and services account
- Unilateral transfer account
- Long-term capital account
- Short-term private capital account
- Short-term official capital account
Accounts of Balance of Payments
- Current Account – The current account is basically a record of export and import of goods and services.
- Capital Account – The Capital Account is a record of all those transactions between the normal residents of a country and the rest of the world that involve the sale and purchase of foreign assets and liabilities during a given accounting year.
- Balance of trade – Balance of trade can be defined as the net difference between the value of exports and imports of goods between the residents of a country and the rest of the world.
- Autonomous items – Those items of the balance of payments that relate to transactions undertaken with the motive of profit maximisation, not to maintain equilibrium in the balance of payments. These items are recorded first, before calculating the deficit or surplus in the balance of payments account, and are also known as ‘Above the Line’ items.
- Accommodating items – Those transactions that take place as a consequence of other activity in the balance of payments, typically to settle the deficit or surplus left by autonomous transactions. Also known as ‘Below the Line’ items.
- Deficit of Balance of Payments Account – A situation wherein the total inflow of foreign exchange on account of autonomous transactions is less than the total outflow on account of such transactions (a worked sketch follows the note below).
- Foreign exchange rate – In simple terms, the foreign exchange rate may be defined as the rate at which the currency of one country is exchanged for the currency of another. The systems of exchange covered in Balance of Payments Class 12 are as follows:
- Fixed exchange rate
- Flexible exchange rate.
Note: When we discuss the fixed exchange rate system, it is important to know that the rate of exchange is determined by the government or the monetary authority of the country. In contrast, in a flexible exchange rate system, the exchange rate is determined by market forces. The demand for foreign exchange is inversely related to the exchange rate: as the exchange rate rises, the demand for foreign exchange falls, and vice versa.
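As a quick worked example of the definitions above, the short Python sketch below uses invented figures (they do not describe any real country) to compute the balance of trade and to check whether the autonomous transactions leave the balance of payments in deficit or surplus:

```python
# Invented figures, purely illustrative (say, in billions of the domestic currency).
exports_of_goods = 500.0
imports_of_goods = 650.0

# Autonomous ("above the line") receipts and payments of foreign exchange,
# i.e. transactions undertaken for profit, not to balance the accounts.
autonomous_receipts = 900.0
autonomous_payments = 980.0

balance_of_trade = exports_of_goods - imports_of_goods
bop_balance = autonomous_receipts - autonomous_payments

print(f"Balance of trade: {balance_of_trade:+.1f}")
if bop_balance < 0:
    # Accommodating ("below the line") transactions must cover the gap,
    # for example by drawing down official reserves.
    print(f"BOP deficit of {-bop_balance:.1f}, settled through accommodating items.")
else:
    print(f"BOP surplus of {bop_balance:+.1f}.")
```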
Merits and Demerits of System of Exchange
Tabulated below are certain merits and demerits of the fixed exchange rate system –
|Merits|Demerits|
|---|---|
|Attracts foreign capital|Neglects the concept of a free market|
|Makes sure inflation is in check|Often results in over-valuation or under-valuation of the currency|
|Stabilises the exchange rate|No automatic adjustment in the BOP|
|Promotes international trade and capital movement|Need to hold foreign exchange reserves|
What are the main sources of demand for foreign exchange?
- To access goods and services from across the world.
- To invest financial assets like bonds and equity shares in a foreign country.
- To speculate on the value of foreign currency.
- To invest directly in shops, factories, buildings in foreign countries.
- The demand for foreign tours.
Now, the supply of foreign exchange is directly proportional to the foreign exchange rate i.e., if the foreign exchange rate rises, the supply of foreign exchange also rises.
Sources of Supply of Foreign Exchange
- International purchase in the domestic market.
- International investment in the domestic market.
- Remittances by non-residents living abroad.
- Export of necessary goods and services.
- Flow of foreign exchange due to speculative purchases by N.R.I.
- Foreign direct investment and portfolio investment.
Tabulated below are the merits and demerits of the system of flexible exchange rate –
|Merits|Demerits|
|---|---|
|Automatic adjustment in the balance of payments|Creates market instability|
|Results in efficiency in the allocation of resources|Does not promote international trade or investment|
|Ensures easy transfer of capital and trade|Encourages speculation and fluctuations in future exchange rates|
|Promotes venture capital and overcomes the problems of over-valuation or under-valuation of the currency| |
Some Important Definitions
- Determination of Equilibrium Foreign Exchange Rate: The equilibrium FER is the rate at which the demand for and supply of foreign exchange are equal. In a free market, the equilibrium FER is determined by market forces, i.e., the demand for and supply of foreign exchange (see the sketch after this list).
- Devaluation of a currency: The official lowering of the external value of the domestic currency by the government or monetary authority of a country.
- Revaluation of a currency: The official raising of the external value of the domestic currency by the government or monetary authority of a country.
- Currency depreciation: A fall in the value of the domestic currency in terms of foreign currency due to changes in demand and supply under a flexible exchange rate system.
- Currency appreciation: A rise in the value of the domestic currency in terms of foreign currency due to changes in demand and supply under a flexible exchange rate system.
- The managed floating system is a system in which the central bank allows the exchange rate to be determined by market forces but also makes sure it intervenes at times to influence the rate. For instance, when the central bank finds the rate is too high, it starts selling foreign exchange from its reserve to bring it down.
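As promised above, here is a toy illustration of how the equilibrium rate emerges from demand and supply. The linear demand and supply schedules are assumptions chosen only so the arithmetic is easy to follow; they are not drawn from real data:

```python
# "rate" = price of foreign currency in units of domestic currency.
def demand(rate):
    return 100.0 - 8.0 * rate   # demand for foreign exchange falls as the rate rises

def supply(rate):
    return 20.0 + 12.0 * rate   # supply of foreign exchange rises with the rate

# Equilibrium where demand == supply: 100 - 8r = 20 + 12r  ->  20r = 80  ->  r = 4.
equilibrium_rate = 80.0 / 20.0
print(f"Equilibrium exchange rate: {equilibrium_rate}")                        # 4.0
print(f"Foreign exchange traded at equilibrium: {demand(equilibrium_rate)}")   # 68.0
```

Under a fixed exchange rate the government would simply pick a rate and defend it; under managed floating the central bank lets the rate drift towards this market-clearing value but intervenes when it strays too far.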
So, this was all about Balance of Payments Class 12. We hope that this blog of study notes on Balance of Payments Class 12 has given you all the relevant information and will help you fetch some extra marks in your economics exam. For more such informative content, stay connected to Leverage Edu! | https://leverageedu.com/blog/balance-of-payments/ | 21
20 | Tide tables can be used for any given locale to find the predicted times and amplitude (or "tidal range"). The predictions are influenced by many factors including the alignment of the Sun and Moon, the phase and amplitude of the tide (pattern of tides in the deep ocean), the amphidromic systems of the oceans, and the shape of the coastline and near-shore bathymetry (see Timing). They are, however, only predictions; the actual time and height of the tide are affected by wind and atmospheric pressure. Many shorelines experience semi-diurnal tides—two nearly equal high and low tides each day. Other locations have a diurnal tide—one high and low tide each day. A "mixed tide"—two uneven magnitude tides a day—is a third regular category.
Tides vary on timescales ranging from hours to years due to a number of factors, which determine the lunitidal interval. To make accurate records, tide gauges at fixed stations measure water level over time. Gauges ignore variations caused by waves with periods shorter than minutes. These data are compared to the reference (or datum) level usually called mean sea level.
While tides are usually the largest source of short-term sea-level fluctuations, sea levels are also subject to forces such as wind and barometric pressure changes, resulting in storm surges, especially in shallow seas and near coasts.
Tidal phenomena are not limited to the oceans, but can occur in other systems whenever a gravitational field that varies in time and space is present. For example, the shape of the solid part of the Earth is affected slightly by Earth tide, though this is not as easily seen as the water tidal movements.
Tide changes proceed via the following stages:
- Sea level rises over several hours, covering the intertidal zone; flood tide.
- The water rises to its highest level, reaching high tide.
- Sea level falls over several hours, revealing the intertidal zone; ebb tide.
- The water stops falling, reaching low tide.
Oscillating currents produced by tides are known as tidal streams. The moment that the tidal current ceases is called slack water or slack tide. The tide then reverses direction and is said to be turning. Slack water usually occurs near high water and low water. But there are locations where the moments of slack tide differ significantly from those of high and low water.
Tides are commonly semi-diurnal (two high waters and two low waters each day), or diurnal (one tidal cycle per day). The two high waters on a given day are typically not the same height (the daily inequality); these are the higher high water and the lower high water in tide tables. Similarly, the two low waters each day are the higher low water and the lower low water. The daily inequality is not consistent and is generally small when the Moon is over the Equator.
From the highest level to the lowest:
- Highest astronomical tide (HAT) – The highest tide which can be predicted to occur. Note that meteorological conditions may add extra height to the HAT.
- Mean high water springs (MHWS) – The average of the two high tides on the days of spring tides.
- Mean high water neaps (MHWN) – The average of the two high tides on the days of neap tides.
- Mean sea level (MSL) – This is the average sea level. The MSL is constant for any location over a long period.
- Mean low water neaps (MLWN) – The average of the two low tides on the days of neap tides.
- Mean low water springs (MLWS) – The average of the two low tides on the days of spring tides.
- Lowest astronomical tide (LAT) and chart datum (CD) – The lowest tide which can be predicted to occur. Some charts use this as the chart datum. Note that under certain meteorological conditions the water may fall lower than this meaning that there is less water than shown on charts.
Tidal constituents are the net result of multiple influences impacting tidal changes over certain periods of time. Primary constituents include the Earth's rotation, the position of the Moon and Sun relative to the Earth, the Moon's altitude (elevation) above the Earth's Equator, and bathymetry. Variations with periods of less than half a day are called harmonic constituents. Conversely, cycles of days, months, or years are referred to as long period constituents.
Tidal forces affect the entire earth, but the movement of solid Earth occurs by mere centimeters. In contrast, the atmosphere is much more fluid and compressible so its surface moves by kilometers, in the sense of the contour level of a particular low pressure in the outer atmosphere.
Principal lunar semi-diurnal constituent
In most locations, the largest constituent is the principal lunar semi-diurnal, also known as the M2 tidal constituent. Its period is about 12 hours and 25.2 minutes, exactly half a tidal lunar day, which is the average time separating one lunar zenith from the next, and thus is the time required for the Earth to rotate once relative to the Moon. Simple tide clocks track this constituent. The lunar day is longer than the Earth day because the Moon orbits in the same direction the Earth spins. This is analogous to the minute hand on a watch crossing the hour hand at 12:00 and then again at about 1:05½ (not at 1:00).
The Moon orbits the Earth in the same direction as the Earth rotates on its axis, so it takes slightly more than a day—about 24 hours and 50 minutes—for the Moon to return to the same location in the sky. During this time, it has passed overhead (culmination) once and underfoot once (at an hour angle of 00:00 and 12:00 respectively), so in many places the period of strongest tidal forcing is the above-mentioned, about 12 hours and 25 minutes. The moment of highest tide is not necessarily when the Moon is nearest to zenith or nadir, but the period of the forcing still determines the time between high tides.
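The "about 24 hours and 50 minutes" figure can be checked with a few lines of arithmetic. The sketch below uses standard approximate values for the Earth's sidereal rotation period and the Moon's sidereal orbital period, and recovers both the tidal lunar day and the M2 period quoted above:

```python
# Standard approximate values, in hours.
earth_sidereal_day = 23.9345          # Earth's rotation period relative to the stars
moon_sidereal_month = 27.3217 * 24.0  # Moon's orbital period relative to the stars

# Because the Moon orbits in the same direction as the Earth spins, the Earth
# must turn a little extra each day to bring the Moon back overhead.
lunar_day = 1.0 / (1.0 / earth_sidereal_day - 1.0 / moon_sidereal_month)
m2_period = lunar_day / 2.0

print(f"Tidal lunar day: {lunar_day:.3f} h  (~24 h {(lunar_day - 24.0) * 60:.0f} min)")
print(f"M2 period:       {m2_period:.3f} h  (~12 h {(m2_period - 12.0) * 60:.1f} min)")
```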
Because the gravitational field created by the Moon weakens with distance from the Moon, it exerts a slightly stronger than average force on the side of the Earth facing the Moon, and a slightly weaker force on the opposite side. The Moon thus tends to "stretch" the Earth slightly along the line connecting the two bodies. The solid Earth deforms a bit, but ocean water, being fluid, is free to move much more in response to the tidal force, particularly horizontally (see equilibrium tide).
As the Earth rotates, the magnitude and direction of the tidal force at any particular point on the Earth's surface change constantly; although the ocean never reaches equilibrium—there is never time for the fluid to "catch up" to the state it would eventually reach if the tidal force were constant—the changing tidal force nonetheless causes rhythmic changes in sea surface height.
When there are two high tides each day with different heights (and two low tides also of different heights), the pattern is called a mixed semi-diurnal tide.
Range variation: springs and neaps
The semi-diurnal range (the difference in height between high and low waters over about half a day) varies in a two-week cycle. Approximately twice a month, around new moon and full moon when the Sun, Moon, and Earth form a line (a configuration known as a syzygy), the tidal force due to the Sun reinforces that due to the Moon. The tide's range is then at its maximum; this is called the spring tide. It is not named after the season, but, like that word, derives from the meaning "jump, burst forth, rise", as in a natural spring.
When the Moon is at first quarter or third quarter, the Sun and Moon are separated by 90° when viewed from the Earth, and the solar tidal force partially cancels the Moon's tidal force. At these points in the lunar cycle, the tide's range is at its minimum; this is called the neap tide, or neaps. Neap is an Anglo-Saxon word meaning "without the power", as in forðganges nip (forth-going without-the-power).
Spring tides result in high waters that are higher than average, low waters that are lower than average, "slack water" time that is shorter than average, and stronger tidal currents than average. Neaps result in less extreme tidal conditions. There is about a seven-day interval between springs and neaps.
The changing distance separating the Moon and Earth also affects tide heights. When the Moon is closest, at perigee, the range increases, and when it is at apogee, the range shrinks. Every 7+1⁄2 lunations (the full cycles from full moon to new to full), perigee coincides with either a new or full moon causing perigean spring tides with the largest tidal range. Even at its most powerful this force is still weak, causing tidal differences of inches at most.
These include solar gravitational effects, the obliquity (tilt) of the Earth's Equator and rotational axis, the inclination of the plane of the lunar orbit and the elliptical shape of the Earth's orbit of the Sun.
A compound tide (or overtide) results from the shallow-water interaction of its two parent waves.
Phase and amplitude
Because the M2 tidal constituent dominates in most locations, the stage or phase of a tide, denoted by the time in hours after high water, is a useful concept. Tidal stage is also measured in degrees, with 360° per tidal cycle. Lines of constant tidal phase are called cotidal lines, which are analogous to contour lines of constant altitude on topographical maps, and when plotted form a cotidal map or cotidal chart. High water is reached simultaneously along the cotidal lines extending from the coast out into the ocean, and cotidal lines (and hence tidal phases) advance along the coast. Semi-diurnal and long phase constituents are measured from high water, diurnal from maximum flood tide. This and the discussion that follows is precisely true only for a single tidal constituent.
For an ocean in the shape of a circular basin enclosed by a coastline, the cotidal lines point radially inward and must eventually meet at a common point, the amphidromic point. The amphidromic point is at once cotidal with high and low waters, which is satisfied by zero tidal motion. (The rare exception occurs when the tide encircles an island, as it does around New Zealand, Iceland and Madagascar.) Tidal motion generally lessens moving away from continental coasts, so that crossing the cotidal lines are contours of constant amplitude (half the distance between high and low water) which decrease to zero at the amphidromic point. For a semi-diurnal tide the amphidromic point can be thought of roughly like the center of a clock face, with the hour hand pointing in the direction of the high water cotidal line, which is directly opposite the low water cotidal line. High water rotates about the amphidromic point once every 12 hours in the direction of rising cotidal lines, and away from ebbing cotidal lines. This rotation, caused by the Coriolis effect, is generally clockwise in the southern hemisphere and counterclockwise in the northern hemisphere. The difference of cotidal phase from the phase of a reference tide is the epoch. The reference tide is the hypothetical constituent "equilibrium tide" on a landless Earth measured at 0° longitude, the Greenwich meridian.
In the North Atlantic, because the cotidal lines circulate counterclockwise around the amphidromic point, the high tide passes New York Harbor approximately an hour ahead of Norfolk Harbor. South of Cape Hatteras the tidal forces are more complex, and cannot be predicted reliably based on the North Atlantic cotidal lines.
History of tidal theory
Investigation into tidal physics was important in the early development of celestial mechanics, with the existence of two daily tides being explained by the Moon's gravity. Later the daily tides were explained more precisely by the interaction of the Moon's and the Sun's gravity.
In De temporum ratione (The Reckoning of Time) of 725 Bede linked semidurnal tides and the phenomenon of varying tidal heights to the Moon and its phases. Bede starts by noting that the tides rise and fall 4/5 of an hour later each day, just as the Moon rises and sets 4/5 of an hour later. He goes on to emphasise that in two lunar months (59 days) the Moon circles the Earth 57 times and there are 114 tides. Bede then observes that the height of tides varies over the month. Increasing tides are called malinae and decreasing tides ledones and that the month is divided into four parts of seven or eight days with alternating malinae and ledones. In the same passage he also notes the effect of winds to hold back tides. Bede also records that the time of tides varies from place to place. To the north of Bede's location (Monkwearmouth) the tides are earlier, to the south later. He explains that the tide "deserts these shores in order to be able all the more to be able to flood other [shores] when it arrives there" noting that "the Moon which signals the rise of tide here, signals its retreat in other regions far from this quarter of the heavens".
Medieval understanding of the tides was primarily based on works of Muslim astronomers, which became available through Latin translation starting from the 12th century. Abu Ma'shar (d. circa 886), in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji (d. circa 1204) contributed the notion that the tides were caused by the general circulation of the heavens.
Simon Stevin in his 1608 De spiegheling der Ebbenvloet, The theory of ebb and flood, dismissed a large number of misconceptions that still existed about ebb and flood. Stevin pleaded for the idea that the attraction of the Moon was responsible for the tides and spoke in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made.
Galileo Galilei in his 1632 Dialogue Concerning the Two Chief World Systems, whose working title was Dialogue on the Tides, gave an explanation of the tides. The resulting theory, however, was incorrect as he attributed the tides to the sloshing of water caused by the Earth's movement around the Sun. He hoped to provide mechanical proof of the Earth's movement. The value of his tidal theory is disputed. Galileo rejected Kepler's explanation of the tides.
Isaac Newton (1642–1727) was the first person to explain tides as the product of the gravitational attraction of astronomical masses. His explanation of the tides (and many other phenomena) was published in the Principia (1687) and used his theory of universal gravitation to explain the lunar and solar attractions as the origin of the tide-generating forces. Newton and others before Pierre-Simon Laplace worked the problem from the perspective of a static system (equilibrium theory), which provided an approximation that described the tides that would occur in a non-inertial ocean evenly covering the whole Earth. The tide-generating force (or its corresponding potential) is still relevant to tidal theory, but as an intermediate quantity (forcing function) rather than as a final result; theory must also consider the Earth's accumulated dynamic tidal response to the applied forces, which response is influenced by ocean depth, the Earth's rotation, and other factors.
In 1740, the Académie Royale des Sciences in Paris offered a prize for the best theoretical essay on tides. Daniel Bernoulli, Leonhard Euler, Colin Maclaurin and Antoine Cavalleri shared the prize.
Maclaurin used Newton's theory to show that a smooth sphere covered by a sufficiently deep ocean under the tidal force of a single deforming body is a prolate spheroid (essentially a three-dimensional oval) with major axis directed toward the deforming body. Maclaurin was the first to write about the Earth's rotational effects on motion. Euler realized that the tidal force's horizontal component (more than the vertical) drives the tide. In 1744 Jean le Rond d'Alembert studied tidal equations for the atmosphere which did not include rotation.
In 1770 James Cook's barque HMS Endeavour grounded on the Great Barrier Reef. Attempts were made to refloat her on the following tide which failed, but the tide after that lifted her clear with ease. Whilst she was being repaired in the mouth of the Endeavour River Cook observed the tides over a period of seven weeks. At neap tides both tides in a day were similar, but at springs the tides rose 7 feet (2.1 m) in the morning but 9 feet (2.7 m) in the evening.
Pierre-Simon Laplace formulated a system of partial differential equations relating the ocean's horizontal flow to its surface height, the first major dynamic theory for water tides. The Laplace tidal equations are still in use today. William Thomson, 1st Baron Kelvin, rewrote Laplace's equations in terms of vorticity which allowed for solutions describing tidally driven coastally trapped waves, known as Kelvin waves.
Others including Kelvin and Henri Poincaré further developed Laplace's theory. Based on these developments and the lunar theory of E W Brown describing the motions of the Moon, Arthur Thomas Doodson developed and published in 1921 the first modern development of the tide-generating potential in harmonic form: Doodson distinguished 388 tidal frequencies. Some of his methods remain in use.
History of tidal observation
From ancient times, tidal observation and discussion has increased in sophistication, first marking the daily recurrence, then tides' relationship to the Sun and moon. Pytheas travelled to the British Isles about 325 BC and seems to be the first to have related spring tides to the phase of the moon.
In the 2nd century BC, the Hellenistic astronomer Seleucus of Seleucia correctly described the phenomenon of tides in order to support his heliocentric theory. He correctly theorized that tides were caused by the moon, although he believed that the interaction was mediated by the pneuma. He noted that tides varied in time and strength in different parts of the world. According to Strabo (1.1.9), Seleucus was the first to link tides to the lunar attraction, and that the height of the tides depends on the moon's position relative to the Sun.
The Naturalis Historia of Pliny the Elder collates many tidal observations, e.g., the spring tides are a few days after (or before) new and full moon and are highest around the equinoxes, though Pliny noted many relationships now regarded as fanciful. In his Geography, Strabo described tides in the Persian Gulf having their greatest range when the moon was furthest from the plane of the Equator. All this despite the relatively small amplitude of Mediterranean basin tides. (The strong currents through the Euripus Strait and the Strait of Messina puzzled Aristotle.) Philostratus discussed tides in Book Five of The Life of Apollonius of Tyana. Philostratus mentions the moon, but attributes tides to "spirits". In Europe around 730 AD, the Venerable Bede described how the rising tide on one coast of the British Isles coincided with the fall on the other and described the time progression of high water along the Northumbrian coast.
The first tide table in China was recorded in 1056 AD primarily for visitors wishing to see the famous tidal bore in the Qiantang River. The first known British tide table is thought to be that of John Wallingford, who died Abbot of St. Albans in 1213, based on high water occurring 48 minutes later each day, and three hours earlier at the Thames mouth than upriver at London.
William Thomson (Lord Kelvin) led the first systematic harmonic analysis of tidal records starting in 1867. The main result was the building of a tide-predicting machine using a system of pulleys to add together six harmonic time functions. It was "programmed" by resetting gears and chains to adjust phasing and amplitudes. Similar machines were used until the 1960s.
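Kelvin's machine was, in effect, mechanically summing a handful of cosine terms. A modern software equivalent takes only a few lines. In the sketch below the constituent periods are the standard ones, but the amplitudes and phase lags are invented placeholders; for a real port they would come from harmonic analysis of local tide-gauge records:

```python
import math

# (name, period in hours, amplitude in metres, phase lag in degrees).
# Periods are standard; amplitudes and phases here are illustrative only.
constituents = [
    ("M2", 12.4206, 1.20,  30.0),   # principal lunar semi-diurnal
    ("S2", 12.0000, 0.40,  60.0),   # principal solar semi-diurnal
    ("K1", 23.9345, 0.15, 120.0),   # luni-solar diurnal
    ("O1", 25.8193, 0.10, 200.0),   # principal lunar diurnal
]

def tide_height(t_hours, mean_sea_level=0.0):
    """Predicted height above datum at time t (in hours), as a sum of harmonics."""
    h = mean_sea_level
    for _name, period, amplitude, phase_deg in constituents:
        omega = 2.0 * math.pi / period
        h += amplitude * math.cos(omega * t_hours - math.radians(phase_deg))
    return h

# A 24-hour prediction at hourly intervals.
for t in range(25):
    print(f"t = {t:2d} h   height = {tide_height(t):+.2f} m")
```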
The first known sea-level record of an entire spring–neap cycle was made in 1831 on the Navy Dock in the Thames Estuary. Many large ports had automatic tide gauge stations by 1850.
William Whewell first mapped co-tidal lines ending with a nearly global chart in 1836. In order to make these maps consistent, he hypothesized the existence of amphidromes where co-tidal lines meet in the mid-ocean. These points of no tide were confirmed by measurement in 1840 by Captain Hewett, RN, from careful soundings in the North Sea.
The tidal force produced by a massive object (Moon, hereafter) on a small particle located on or in an extensive body (Earth, hereafter) is the vector difference between the gravitational force exerted by the Moon on the particle, and the gravitational force that would be exerted on the particle if it were located at the Earth's center of mass.
Whereas the gravitational force subjected by a celestial body on Earth varies inversely as the square of its distance to the Earth, the maximal tidal force varies inversely as, approximately, the cube of this distance. If the tidal force caused by each body were instead equal to its full gravitational force (which is not the case due to the free fall of the whole Earth, not only the oceans, towards these bodies) a different pattern of tidal forces would be observed, e.g. with a much stronger influence from the Sun than from the Moon: The solar gravitational force on the Earth is on average 179 times stronger than the lunar, but because the Sun is on average 389 times farther from the Earth, its field gradient is weaker. The tidal force is proportional to

M / d³ ∝ ρ (r / d)³,

where M is the mass of the heavenly body, d is its distance, ρ is its average density, and r is its radius. The ratio r/d is related to the angle subtended by the object in the sky. Since the sun and the moon have practically the same diameter in the sky, the tidal force of the sun is less than that of the moon because its average density is much less, and it is only 46% as large as the lunar. More precisely, the lunar tidal acceleration (along the Moon–Earth axis, at the Earth's surface) is about 1.1 × 10⁻⁷ g, while the solar tidal acceleration (along the Sun–Earth axis, at the Earth's surface) is about 0.52 × 10⁻⁷ g, where g is the gravitational acceleration at the Earth's surface. The effects of the other planets vary as their distances from Earth vary. When Venus is closest to Earth, its effect is 0.000113 times the solar effect. At other times, Jupiter or Mars may have the most effect.
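The 46% figure follows directly from this inverse-cube scaling, and can be checked with standard values for the masses and mean distances of the Sun and Moon:

```python
# Standard approximate values (SI units).
M_sun,  d_sun  = 1.989e30, 1.496e11   # kg, m
M_moon, d_moon = 7.342e22, 3.844e8    # kg, m

# Tidal force scales as M / d**3.
tidal_sun  = M_sun  / d_sun**3
tidal_moon = M_moon / d_moon**3

print(f"Solar / lunar tidal force ratio: {tidal_sun / tidal_moon:.2f}")   # about 0.46
```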
The ocean's surface is approximated by a surface referred to as the geoid, which takes into consideration the gravitational force exerted by the earth as well as centrifugal force due to rotation. Now consider the effect of massive external bodies such as the Moon and Sun. These bodies have strong gravitational fields that diminish with distance and cause the ocean's surface to deviate from the geoid. They establish a new equilibrium ocean surface which bulges toward the moon on one side and away from the moon on the other side. The earth's rotation relative to this shape causes the daily tidal cycle. The ocean surface tends toward this equilibrium shape, which is constantly changing, and never quite attains it. When the ocean surface is not aligned with it, it's as though the surface is sloping, and water accelerates in the down-slope direction.
The equilibrium tide is the idealized tide assuming a landless Earth. It would produce a tidal bulge in the ocean, with the shape of an ellipsoid elongated towards the attracting body (Moon or Sun). It is not caused by the vertical pull nearest or farthest from the body, which is very weak; rather, it is caused by the tangent or "tractive" tidal force, which is strongest at about 45 degrees from the body, resulting in a horizontal tidal current.
Laplace's tidal equations
- The vertical (or radial) velocity is negligible, and there is no vertical shear—this is a sheet flow.
- The forcing is only horizontal (tangential).
- The Coriolis effect appears as an inertial force (fictitious) acting laterally to the direction of flow and proportional to velocity.
- The surface height's rate of change is proportional to the negative divergence of velocity multiplied by the depth. As the horizontal velocity stretches or compresses the ocean as a sheet, the volume thins or thickens, respectively.
The boundary conditions dictate no flow across the coastline and free slip at the bottom.
The Coriolis effect (inertial force) steers flows moving towards the Equator to the west and flows moving away from the Equator toward the east, allowing coastally trapped waves. Finally, a dissipation term can be added which is an analog to viscosity.
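A minimal numerical sketch of these ideas is a one-dimensional linearized shallow-water ("sheet flow") model: the velocity responds to the surface slope plus a horizontal forcing, and the surface height changes at a rate equal to minus the divergence of depth times velocity. All parameters below (grid, depth, forcing amplitude) are illustrative assumptions, and the Coriolis and dissipation terms are omitted for brevity:

```python
import numpy as np

nx, dx, dt = 200, 5_000.0, 10.0          # grid points, spacing (m), time step (s)
g, depth = 9.81, 4_000.0                 # gravity (m/s^2), uniform depth (m)
omega = 2 * np.pi / (12.42 * 3600)       # M2 forcing frequency (rad/s)

x = np.arange(nx) * dx
eta = np.zeros(nx)                       # surface elevation (m)
u = np.zeros(nx)                         # depth-averaged horizontal velocity (m/s)

n_steps = 5_000
for step in range(n_steps):
    t = step * dt
    # purely horizontal (tangential) forcing, ~1e-6 m/s^2 in amplitude
    force = 1e-6 * np.cos(omega * t) * np.sin(2 * np.pi * x / (nx * dx))
    # momentum: du/dt = -g * d(eta)/dx + forcing   (no vertical shear)
    u[1:-1] += dt * (-g * (eta[2:] - eta[:-2]) / (2 * dx) + force[1:-1])
    u[0] = u[-1] = 0.0                   # no flow across the "coastlines"
    # continuity: d(eta)/dt = -d(depth * u)/dx
    eta[1:-1] += dt * (-depth * (u[2:] - u[:-2]) / (2 * dx))

print(f"surface displacement after {n_steps * dt / 3600:.1f} h: max {eta.max():.3f} m")
```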
Amplitude and cycle time
The theoretical amplitude of oceanic tides caused by the Moon is about 54 centimetres (21 in) at the highest point, which corresponds to the amplitude that would be reached if the ocean possessed a uniform depth, there were no landmasses, and the Earth were rotating in step with the Moon's orbit. The Sun similarly causes tides, of which the theoretical amplitude is about 25 centimetres (9.8 in) (46% of that of the Moon) with a cycle time of 12 hours. At spring tide the two effects add to each other to a theoretical level of 79 centimetres (31 in), while at neap tide the theoretical level is reduced to 29 centimetres (11 in). Since the orbits of the Earth about the Sun, and the Moon about the Earth, are elliptical, tidal amplitudes change somewhat as a result of the varying Earth–Sun and Earth–Moon distances. This causes a variation in the tidal force and theoretical amplitude of about ±18% for the Moon and ±5% for the Sun. If both the Sun and Moon were at their closest positions and aligned at new moon, the theoretical amplitude would reach 93 centimetres (37 in).
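The arithmetic behind these combinations is simple enough to check directly (amplitudes taken from the paragraph above; the small mismatch at the end comes from rounding in the quoted percentages):

```python
lunar, solar = 0.54, 0.25                 # theoretical equilibrium amplitudes, metres
spring = lunar + solar                    # Sun and Moon aligned
neap = lunar - solar                      # Sun and Moon at right angles
extreme = lunar * 1.18 + solar * 1.05     # both bodies at their closest, aligned at new moon
print(f"spring ~ {spring:.2f} m, neap ~ {neap:.2f} m, extreme ~ {extreme:.2f} m")
# -> spring ~ 0.79 m, neap ~ 0.29 m, extreme ~ 0.90 m (the text quotes 93 cm)
```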
Real amplitudes differ considerably, not only because of depth variations and continental obstacles, but also because wave propagation across the ocean has a natural period of the same order of magnitude as the rotation period: if there were no land masses, it would take about 30 hours for a long wavelength surface wave to propagate along the Equator halfway around the Earth (by comparison, the Earth's lithosphere has a natural period of about 57 minutes). Earth tides, which raise and lower the bottom of the ocean, and the tide's own gravitational self attraction are both significant and further complicate the ocean's response to tidal forces.
Earth's tidal oscillations introduce dissipation at an average rate of about 3.75 terawatts. About 98% of this dissipation is by marine tidal movement. Dissipation arises as basin-scale tidal flows drive smaller-scale flows which experience turbulent dissipation. This tidal drag creates torque on the Moon that gradually transfers angular momentum to its orbit, and a gradual increase in Earth–Moon separation. The equal and opposite torque on the Earth correspondingly decreases its rotational velocity. Thus, over geologic time, the Moon recedes from the Earth, at about 3.8 centimetres (1.5 in)/year, lengthening the terrestrial day.[k] Day length has increased by about 2 hours in the last 600 million years. Assuming (as a crude approximation) that the deceleration rate has been constant, this would imply that 70 million years ago, day length was on the order of 1% shorter with about 4 more days per year.
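The closing estimate can be reproduced with the crude constant-rate assumption stated above:

```python
hours_gained = 2.0                       # increase in day length over the last 600 Myr
rate = hours_gained / 600e6              # hours per year, assumed constant (crude)
shorter = rate * 70e6                    # hours by which the day was shorter 70 Myr ago
day_then = 24.0 - shorter
extra_days = 365.25 * (24.0 / day_then - 1.0)
print(f"day ~{shorter * 60:.0f} min (~{shorter / 24 * 100:.1f}%) shorter, "
      f"about {extra_days:.1f} more days per year")
# -> roughly 14 minutes (~1%) shorter and ~3.6 extra days per year
```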
The shape of the shoreline and the ocean floor changes the way that tides propagate, so there is no simple, general rule that predicts the time of high water from the Moon's position in the sky. Coastal characteristics such as underwater bathymetry and coastline shape mean that individual location characteristics affect tide forecasting; actual high water time and height may differ from model predictions due to the coastal morphology's effects on tidal flow. However, for a given location the relationship between lunar altitude and the time of high or low tide (the lunitidal interval) is relatively constant and predictable, as is the time of high or low tide relative to other points on the same coast. For example, the high tide at Norfolk, Virginia, U.S., predictably occurs approximately two and a half hours before the Moon passes directly overhead.
Land masses and ocean basins act as barriers against water moving freely around the globe, and their varied shapes and sizes affect the size of tidal frequencies. As a result, tidal patterns vary. For example, in the U.S., the East coast has predominantly semi-diurnal tides, as do Europe's Atlantic coasts, while the West coast predominantly has mixed tides. Human changes to the landscape can also significantly alter local tides.
Observation and prediction
The tidal forces due to the Moon and Sun generate very long waves which travel all around the ocean following the paths shown in co-tidal charts. The time when the crest of the wave reaches a port then gives the time of high water at the port. The time taken for the wave to travel around the ocean also means that there is a delay between the phases of the Moon and their effect on the tide. Springs and neaps in the North Sea, for example, are two days behind the new/full moon and first/third quarter moon. This is called the tide's age.
The ocean bathymetry greatly influences the tide's exact time and height at a particular coastal point. There are some extreme cases; the Bay of Fundy, on the east coast of Canada, is often stated to have the world's highest tides because of its shape, bathymetry, and its distance from the continental shelf edge. Measurements made in November 1998 at Burntcoat Head in the Bay of Fundy recorded a maximum range of 16.3 metres (53 ft) and a highest predicted extreme of 17 metres (56 ft). Similar measurements made in March 2002 at Leaf Basin, Ungava Bay in northern Quebec gave similar values (allowing for measurement errors), a maximum range of 16.2 metres (53 ft) and a highest predicted extreme of 16.8 metres (55 ft). Ungava Bay and the Bay of Fundy lie similar distances from the continental shelf edge, but Ungava Bay is free of pack ice for about four months every year while the Bay of Fundy rarely freezes.
Southampton in the United Kingdom has a double high water caused by the interaction between the M2 and M4 tidal constituents (Shallow water overtides of principal lunar). Portland has double low waters for the same reason. The M4 tide is found all along the south coast of the United Kingdom, but its effect is most noticeable between the Isle of Wight and Portland because the M2 tide is lowest in this region.
Because the oscillation modes of the Mediterranean Sea and the Baltic Sea do not coincide with any significant astronomical forcing period, the largest tides are close to their narrow connections with the Atlantic Ocean. Extremely small tides also occur for the same reason in the Gulf of Mexico and Sea of Japan. Elsewhere, as along the southern coast of Australia, low tides can be due to the presence of a nearby amphidrome.
Isaac Newton's theory of gravitation first enabled an explanation of why there were generally two tides a day, not one, and offered hope for a detailed understanding of tidal forces and behavior. Although it may seem that tides could be predicted via a sufficiently detailed knowledge of instantaneous astronomical forcings, the actual tide at a given location is determined by astronomical forces accumulated by the body of water over many days. In addition, accurate results would require detailed knowledge of the shape of all the ocean basins—their bathymetry, and coastline shape.
Current procedure for analysing tides follows the method of harmonic analysis introduced in the 1860s by William Thomson. It is based on the principle that the astronomical theories of the motions of Sun and Moon determine a large number of component frequencies, and at each frequency there is a component of force tending to produce tidal motion, but that at each place of interest on the Earth, the tides respond at each frequency with an amplitude and phase peculiar to that locality. At each place of interest, the tide heights are therefore measured for a period of time sufficiently long (usually more than a year in the case of a new port not previously studied) to enable the response at each significant tide-generating frequency to be distinguished by analysis, and to extract the tidal constants for a sufficient number of the strongest known components of the astronomical tidal forces to enable practical tide prediction. The tide heights are expected to follow the tidal force, with a constant amplitude and phase delay for each component. Because astronomical frequencies and phases can be calculated with certainty, the tide height at other times can then be predicted once the response to the harmonic components of the astronomical tide-generating forces has been found.
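In practice the response at each constituent frequency is extracted by least squares. The sketch below fits a cosine and a sine column per constituent to a synthetic record; the constituent speeds are standard published values, while the record length, noise level and "true" amplitudes are invented for illustration:

```python
import numpy as np

speeds = {"M2": 28.9841042, "S2": 30.0, "K1": 15.0410686, "O1": 13.9430356}  # deg/hour
hours = np.arange(0, 24 * 369, 1.0)               # about one year of hourly heights
rad = np.deg2rad

# Build a synthetic tide-gauge record with known amplitudes (m) and phases (deg).
true = {"M2": (1.20, 40.0), "S2": (0.40, 75.0), "K1": (0.15, 10.0), "O1": (0.10, 200.0)}
rng = np.random.default_rng(0)
h = sum(A * np.cos(rad(s * hours) - rad(p))
        for (A, p), s in zip(true.values(), speeds.values()))
h += 0.05 * rng.standard_normal(hours.size)       # measurement noise

# Least-squares fit: one cosine and one sine column per constituent, plus a mean level.
cols = [np.ones_like(hours)]
for s in speeds.values():
    cols += [np.cos(rad(s * hours)), np.sin(rad(s * hours))]
coef, *_ = np.linalg.lstsq(np.column_stack(cols), h, rcond=None)

for i, name in enumerate(speeds):
    c, s_ = coef[1 + 2 * i], coef[2 + 2 * i]
    amp, phase = np.hypot(c, s_), np.degrees(np.arctan2(s_, c)) % 360
    print(f"{name}: amplitude {amp:.3f} m, phase {phase:6.1f} deg")   # recovers the inputs
```

Once the amplitudes and phases for a location are known, the same sums evaluated at future times give the predicted tide.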
The main patterns in the tides are
- the twice-daily variation
- the difference between the first and second tide of a day
- the spring–neap cycle
- the annual variation
The Highest Astronomical Tide is the perigean spring tide when both the Sun and Moon are closest to the Earth.
When confronted by a periodically varying function, the standard approach is to employ Fourier series, a form of analysis that uses sinusoidal functions as a basis set, having frequencies that are zero, one, two, three, etc. times the frequency of a particular fundamental cycle. These multiples are called harmonics of the fundamental frequency, and the process is termed harmonic analysis. If the basis set of sinusoidal functions suit the behaviour being modelled, relatively few harmonic terms need to be added. Orbital paths are very nearly circular, so sinusoidal variations are suitable for tides.
For the analysis of tide heights, the Fourier series approach has in practice to be made more elaborate than the use of a single frequency and its harmonics. The tidal patterns are decomposed into many sinusoids having many fundamental frequencies, corresponding (as in the lunar theory) to many different combinations of the motions of the Earth, the Moon, and the angles that define the shape and location of their orbits.
For tides, then, harmonic analysis is not limited to harmonics of a single frequency.[l] In other words, the harmonics are multiples of many fundamental frequencies, not just of the fundamental frequency of the simpler Fourier series approach. Their representation as a Fourier series having only one fundamental frequency and its (integer) multiples would require many terms, and would be severely limited in the time-range for which it would be valid.
The study of tide height by harmonic analysis was begun by Laplace, William Thomson (Lord Kelvin), and George Darwin. A.T. Doodson extended their work, introducing the Doodson Number notation to organise the hundreds of resulting terms. This approach has been the international standard ever since, and the complications arise as follows: the tide-raising force is notionally given by sums of several terms. Each term is of the form
A·cos(ω·t + p),
where A is the amplitude, ω is the angular frequency, usually given in degrees per hour corresponding to t measured in hours, and p is the phase offset with regard to the astronomical state at time t = 0. There is one term for the Moon and a second term for the Sun. The phase p of the first harmonic for the Moon term is called the lunitidal interval or high water interval. The next step is to accommodate the harmonic terms due to the elliptical shape of the orbits. Accordingly, the value of A is not a constant but varies slightly with time about some average figure. Replace it then by A(t), where A(t) is another sinusoid, similar to the cycles and epicycles of Ptolemaic theory. Accordingly,
A(t) = A + Aa·cos(ωa·t + pa),
which is to say an average value A with a sinusoidal variation about it of magnitude Aa, with frequency ωa and phase pa. Thus the simple term is now the product of two cosine factors:
[A + Aa·cos(ωa·t + pa)] · cos(ω·t + p).
Given that for any x and y
cos(x)·cos(y) = ½·cos(x + y) + ½·cos(x − y),
it is clear that a compound term involving the product of two cosine terms each with their own frequency is the same as three simple cosine terms that are to be added at the original frequency and also at frequencies which are the sum and difference of the two frequencies of the product term. (Three, not two, terms, since the whole expression is [A + Aa·cos(ωa·t + pa)]·cos(ω·t + p).) Consider further that the tidal force on a location depends also on whether the Moon (or the Sun) is above or below the plane of the Equator, and that these attributes have their own periods also incommensurable with a day and a month, and it is clear that many combinations result. With a careful choice of the basic astronomical frequencies, the Doodson Number annotates the particular additions and differences to form the frequency of each simple cosine term.
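The splitting of a modulated term into three simple constituents can be verified numerically (all frequencies, amplitudes and phases below are arbitrary illustrative values):

```python
import numpy as np

t = np.linspace(0.0, 1000.0, 20_000)
A, Aa = 1.0, 0.2
w, wa, p, pa = 0.50, 0.03, 0.7, 0.2

compound = (A + Aa * np.cos(wa * t + pa)) * np.cos(w * t + p)
expanded = (A * np.cos(w * t + p)
            + 0.5 * Aa * np.cos((w + wa) * t + (p + pa))
            + 0.5 * Aa * np.cos((w - wa) * t + (p - pa)))
print(np.allclose(compound, expanded))   # True: one modulated term = three simple cosines
```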
Remember that astronomical tides do not include weather effects. Also, changes to local conditions (sandbank movement, dredging harbour mouths, etc.) away from those prevailing at the measurement time affect the tide's actual timing and magnitude. Organisations quoting a "highest astronomical tide" for some location may exaggerate the figure as a safety factor against analytical uncertainties, distance from the nearest measurement point, changes since the last observation time, ground subsidence, etc., to avert liability should an engineering work be overtopped. Special care is needed when assessing the size of a "weather surge" by subtracting the astronomical tide from the observed tide.
Careful Fourier data analysis over a nineteen-year period (the National Tidal Datum Epoch in the U.S.) uses frequencies called the tidal harmonic constituents. Nineteen years is preferred because the Earth, Moon and Sun's relative positions repeat almost exactly in the Metonic cycle of 19 years, which is long enough to include the 18.613 year lunar nodal tidal constituent. This analysis can be done using only the knowledge of the forcing period, but without detailed understanding of the mathematical derivation, which means that useful tidal tables have been constructed for centuries. The resulting amplitudes and phases can then be used to predict the expected tides. These are usually dominated by the constituents near 12 hours (the semi-diurnal constituents), but there are major constituents near 24 hours (diurnal) as well. Longer-term constituents are fortnightly (14-day), monthly, and semiannual. Semi-diurnal tides dominate most of the world's coastlines, but some areas such as the South China Sea and the Gulf of Mexico are primarily diurnal. In the semi-diurnal areas, the primary constituents M2 (lunar) and S2 (solar) periods differ slightly, so that the relative phases, and thus the amplitude of the combined tide, change fortnightly (14 day period).
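The fortnightly beat follows directly from the two periods. A quick check (periods as quoted in this section):

```python
m2_period, s2_period = 12.4206, 12.0                  # hours
beat_hours = 1.0 / abs(1.0 / s2_period - 1.0 / m2_period)
print(f"spring–neap beat ≈ {beat_hours:.0f} h ≈ {beat_hours / 24:.1f} days")
# -> about 354 h, i.e. roughly 14.8 days
```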
In the M2 plot above, each cotidal line differs by one hour from its neighbors, and the thicker lines show tides in phase with equilibrium at Greenwich. The lines rotate around the amphidromic points counterclockwise in the northern hemisphere so that from Baja California Peninsula to Alaska and from France to Ireland the M2 tide propagates northward. In the southern hemisphere this direction is clockwise. On the other hand, M2 tide propagates counterclockwise around New Zealand, but this is because the islands act as a dam and permit the tides to have different heights on the islands' opposite sides. (The tides do propagate northward on the east side and southward on the west coast, as predicted by theory.)
The exception is at Cook Strait where the tidal currents periodically link high to low water. This is because cotidal lines 180° around the amphidromes are in opposite phase, for example high water across from low water at each end of Cook Strait. Each tidal constituent has a different pattern of amplitudes, phases, and amphidromic points, so the M2 patterns cannot be used for other tide components.
Because the Moon is moving in its orbit around the Earth and in the same sense as the Earth's rotation, a point on the Earth must rotate slightly further to catch up so that the time between semidiurnal tides is not twelve but 12.4206 hours—a bit over twenty-five minutes extra. The two peaks are not equal. The two high tides a day alternate in maximum heights: lower high (just under three feet), higher high (just over three feet), and again lower high. Likewise for the low tides.
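The 12.4206-hour figure follows from the Earth's rotation period and the Moon's orbital period. A short check (both periods are standard values, not quoted in the text):

```python
sidereal_day = 23.9345            # Earth's rotation period, hours
lunar_orbit = 27.3217 * 24.0      # sidereal month, hours
lunar_day = sidereal_day / (1.0 - sidereal_day / lunar_orbit)
print(f"lunar (tidal) day ≈ {lunar_day:.3f} h, semidiurnal interval ≈ {lunar_day / 2:.3f} h")
# -> about 24.84 h and 12.42 h, i.e. roughly 25 minutes beyond 12 hours
```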
When the Earth, Moon, and Sun are in line (Sun–Earth–Moon, or Sun–Moon–Earth) the two main influences combine to produce spring tides; when the two forces are opposing each other as when the angle Moon–Earth–Sun is close to ninety degrees, neap tides result. As the Moon moves around its orbit it changes from north of the Equator to south of the Equator. The alternation in high tide heights becomes smaller, until they are the same (at the lunar equinox, the Moon is above the Equator), then redevelop but with the other polarity, waxing to a maximum difference and then waning again.
The tides' influence on current flow is much more difficult to analyse, and data is much more difficult to collect. A tidal height is a simple number which applies to a wide region simultaneously. A flow has both a magnitude and a direction, both of which can vary substantially with depth and over short distances due to local bathymetry. Also, although a water channel's center is the most useful measuring site, mariners object when current-measuring equipment obstructs waterways. A flow proceeding up a curved channel is the same flow, even though its direction varies continuously along the channel. Surprisingly, flood and ebb flows are often not in opposite directions. Flow direction is determined by the upstream channel's shape, not the downstream channel's shape. Likewise, eddies may form in only one flow direction.
Nevertheless, current analysis is similar to tidal analysis: in the simple case, at a given location the flood flow is in mostly one direction, and the ebb flow in another direction. Flood velocities are given positive sign, and ebb velocities negative sign. Analysis proceeds as though these are tide heights.
In more complex situations, the main ebb and flood flows do not dominate. Instead, the flow direction and magnitude trace an ellipse over a tidal cycle (on a polar plot) instead of along the ebb and flood lines. In this case, analysis might proceed along pairs of directions, with the primary and secondary directions at right angles. An alternative is to treat the tidal flows as complex numbers, as each value has both a magnitude and a direction.
Tide flow information is most commonly seen on nautical charts, presented as a table of flow speeds and bearings at hourly intervals, with separate tables for spring and neap tides. The timing is relative to high water at some harbour where the tidal behaviour is similar in pattern, though it may be far away.
As with tide height predictions, tide flow predictions based only on astronomical factors do not incorporate weather conditions, which can completely change the outcome.
The tidal flow through Cook Strait between the two main islands of New Zealand is particularly interesting, as the tides on each side of the strait are almost exactly out of phase, so that one side's high water is simultaneous with the other's low water. Strong currents result, with almost zero tidal height change in the strait's center. Yet, although the tidal surge normally flows in one direction for six hours and in the reverse direction for six hours, a particular surge might last eight or ten hours with the reverse surge enfeebled. In especially boisterous weather conditions, the reverse surge might be entirely overcome so that the flow continues in the same direction through three or more surge periods.
A further complication for Cook Strait's flow pattern is that the tide at the south side (e.g. at Nelson) follows the common bi-weekly spring–neap tide cycle (as found along the west side of the country), but the north side's tidal pattern has only one cycle per month, as on the east side: Wellington, and Napier.
The graph of Cook Strait's tides shows separately the high water and low water height and time, through November 2007; these are not measured values but instead are calculated from tidal parameters derived from years-old measurements. Cook Strait's nautical chart offers tidal current information. For instance the January 1979 edition for 41°13·9’S 174°29·6’E (north west of Cape Terawhiti) refers timings to Westport while the January 2004 issue refers to Wellington. Near Cape Terawhiti in the middle of Cook Strait the tidal height variation is almost nil while the tidal current reaches its maximum, especially near the notorious Karori Rip. Aside from weather effects, the actual currents through Cook Strait are influenced by the tidal height differences between the two ends of the strait and as can be seen, only one of the two spring tides at the north west end of the strait near Nelson has a counterpart spring tide at the south east end (Wellington), so the resulting behaviour follows neither reference harbour.
Tidal energy can be extracted by two means: inserting a water turbine into a tidal current, or building ponds that release/admit water through a turbine. In the first case, the energy amount is entirely determined by the timing and tidal current magnitude. However, the best currents may be unavailable because the turbines would obstruct ships. In the second, the impoundment dams are expensive to construct, natural water cycles are completely disrupted, ship navigation is disrupted. However, with multiple ponds, power can be generated at chosen times. So far, there are few installed systems for tidal power generation (most famously, La Rance at Saint Malo, France) which face many difficulties. Aside from environmental issues, simply withstanding corrosion and biological fouling pose engineering challenges.
Tidal power proponents point out that, unlike wind power systems, generation levels can be reliably predicted, save for weather effects. While some generation is possible for most of the tidal cycle, in practice turbines lose efficiency at lower operating rates. Since the power available from a flow is proportional to the cube of the flow speed, the times during which high power generation is possible are brief.
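The cube law is why output is so concentrated around peak flow. A sketch using the kinetic power through a swept area, P = ½ρAv³ (the density and rotor area are illustrative assumptions):

```python
rho, area = 1025.0, 300.0                # sea-water density (kg/m^3), swept area (m^2)
for v in (0.5, 1.0, 2.0, 3.0):           # flow speed, m/s
    power_kw = 0.5 * rho * area * v ** 3 / 1000.0
    print(f"{v:.1f} m/s -> {power_kw:9.1f} kW")
# halving the flow speed cuts the available power by a factor of eight
```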
Tidal flows are important for navigation, and significant errors in position occur if they are not accommodated. Tidal heights are also important; for example many rivers and harbours have a shallow "bar" at the entrance which prevents boats with significant draft from entering at low tide.
Until the advent of automated navigation, competence in calculating tidal effects was important to naval officers. The certificate of examination for lieutenants in the Royal Navy once declared that the prospective officer was able to "shift his tides".
Tidal flow timings and velocities appear in tide charts or a tidal stream atlas. Tide charts come in sets. Each chart covers a single hour between one high water and another (they ignore the leftover 24 minutes) and show the average tidal flow for that hour. An arrow on the tidal chart indicates the direction and the average flow speed (usually in knots) for spring and neap tides. If a tide chart is not available, most nautical charts have "tidal diamonds" which relate specific points on the chart to a table giving tidal flow direction and speed.
The standard procedure to counteract tidal effects on navigation is to (1) calculate a "dead reckoning" position (or DR) from travel distance and direction, (2) mark the chart (with a vertical cross like a plus sign) and (3) draw a line from the DR in the tide's direction. The distance the tide moves the boat along this line is computed by the tidal speed, and this gives an "estimated position" or EP (traditionally marked with a dot in a triangle).
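The construction amounts to adding the tidal set-and-drift vector to the dead-reckoning displacement. A minimal sketch (the course, speed and tidal stream are invented numbers):

```python
import math

course_deg, speed_kn, hours = 90.0, 6.0, 1.0     # steering due east at 6 knots for one hour
tide_set_deg, tide_drift_kn = 180.0, 2.0         # tidal stream setting due south at 2 knots

def displacement(bearing_deg, distance_nm):
    """North and east displacement (nautical miles) for a bearing measured from north."""
    b = math.radians(bearing_deg)
    return distance_nm * math.cos(b), distance_nm * math.sin(b)

dr_n, dr_e = displacement(course_deg, speed_kn * hours)
tide_n, tide_e = displacement(tide_set_deg, tide_drift_kn * hours)
ep_n, ep_e = dr_n + tide_n, dr_e + tide_e

track = math.degrees(math.atan2(ep_e, ep_n)) % 360
distance = math.hypot(ep_n, ep_e)
print(f"EP lies {distance:.1f} nm from the start on a ground track of {track:.0f} deg")
```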
Nautical charts display the water's "charted depth" at specific locations with "soundings" and the use of bathymetric contour lines to depict the submerged surface's shape. These depths are relative to a "chart datum", which is typically the water level at the lowest possible astronomical tide (although other datums are commonly used, especially historically, and tides may be lower or higher for meteorological reasons) and are therefore the minimum possible water depth during the tidal cycle. "Drying heights" may also be shown on the chart, which are the heights of the exposed seabed at the lowest astronomical tide.
Tide tables list each day's high and low water heights and times. To calculate the actual water depth, add the charted depth to the published tide height. Depth for other times can be derived from tidal curves published for major ports. The rule of twelfths can suffice if an accurate curve is not available. This approximation presumes that the increase in depth in the six hours between low and high water is: first hour — 1/12, second — 2/12, third — 3/12, fourth — 3/12, fifth — 2/12, sixth — 1/12.
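As a worked example of the rule of twelfths (the charted depth and tidal range are illustrative):

```python
charted_depth = 2.0                # water depth at chart datum at the spot, metres
tidal_range = 4.8                  # low water to high water, metres
twelfths = [1, 2, 3, 3, 2, 1]      # share of the range gained in each hour

depth = charted_depth
for hour, share in enumerate(twelfths, start=1):
    depth += tidal_range * share / 12
    print(f"hour {hour}: depth ≈ {depth:.2f} m")
# the 1-2-3-3-2-1 pattern approximates the roughly sinusoidal rise of the tide
```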
Intertidal ecology is the study of ecosystems between the low- and high-water lines along a shore. At low water, the intertidal zone is exposed (or emersed), whereas at high water, it is underwater (or immersed). Intertidal ecologists therefore study the interactions between intertidal organisms and their environment, as well as among the different species. The most important interactions may vary according to the type of intertidal community. The broadest classifications are based on substrates — rocky shore or soft bottom.
Intertidal organisms experience a highly variable and often hostile environment, and have adapted to cope with and even exploit these conditions. One easily visible feature is vertical zonation, in which the community divides into distinct horizontal bands of specific species at each elevation above low water. A species' ability to cope with desiccation determines its upper limit, while competition with other species sets its lower limit.
Humans use intertidal regions for food and recreation. Overexploitation can damage intertidals directly. Other anthropogenic actions such as introducing invasive species and climate change have large negative effects. Marine Protected Areas are one option communities can apply to protect these areas and aid scientific research.
The approximately fortnightly tidal cycle has large effects on intertidal and marine organisms. Hence their biological rhythms tend to occur in rough multiples of this period. Many other animals, such as the vertebrates, display similar rhythms. Examples include gestation and egg hatching. In humans, the menstrual cycle lasts roughly a lunar month, an even multiple of the tidal period. Such parallels at least hint at the common descent of all animals from a marine ancestor.
Shallow areas in otherwise open water can experience rotary tidal currents, flowing in directions that continually change and thus the flow direction (not the flow) completes a full rotation in 12+1⁄2 hours (for example, the Nantucket Shoals).
In addition to oceanic tides, large lakes can experience small tides and even planets can experience atmospheric tides and Earth tides. These are continuum mechanical phenomena. The first two take place in fluids. The third affects the Earth's thin solid crust surrounding its semi-liquid interior (with various modifications).
Large lakes such as Superior and Erie can experience tides of 1 to 4 cm (0.39 to 1.6 in), but these can be masked by meteorologically induced phenomena such as seiche. The tide in Lake Michigan is described as 1.3 to 3.8 cm (0.5 to 1.5 in) or 4.4 cm (1+3⁄4 in). This is so small that other larger effects completely mask any tide, and as such these lakes are considered non-tidal.
Atmospheric tides are negligible at ground level and aviation altitudes, masked by weather's much more important effects. Atmospheric tides are both gravitational and thermal in origin and are the dominant dynamics from about 80 to 120 kilometres (50 to 75 mi), above which the molecular density becomes too low to support fluid behavior.
Earth tides or terrestrial tides affect the entire Earth's mass, which acts similarly to a liquid gyroscope with a very thin crust. The Earth's crust shifts (in/out, east/west, north/south) in response to lunar and solar gravitation, ocean tides, and atmospheric loading. While negligible for most human activities, terrestrial tides' semi-diurnal amplitude can reach about 55 centimetres (22 in) at the Equator—15 centimetres (5.9 in) due to the Sun—which is important in GPS calibration and VLBI measurements. Precise astronomical angular measurements require knowledge of the Earth's rotation rate and polar motion, both of which are influenced by Earth tides. The semi-diurnal M2 Earth tides are nearly in phase with the Moon with a lag of about two hours.
Galactic tides are the tidal forces exerted by galaxies on stars within them and satellite galaxies orbiting them. The galactic tide's effects on the Solar System's Oort cloud are believed to cause 90 percent of long-period comets.
Tsunamis, the large waves that occur after earthquakes, are sometimes called tidal waves, but this name is given by their resemblance to the tide, rather than any causal link to the tide. Other phenomena unrelated to tides but using the word tide are rip tide, storm tide, hurricane tide, and black or red tides. Many of these usages are historic and refer to the earlier meaning of tide as "a portion of time, a season".
- Aquaculture – Farming of aquatic organisms
- Clairaut's theorem
- Coastal erosion – The loss or displacement of land along the coastline due to the action of waves, currents, tides, wind-driven water, waterborne ice, or other impacts of storms
- Establishment of a port
- Head of tide, also known as tidal reach, or tidal limit – The farthest point upstream where a river is affected by tidal fluctuations
- Hough function – The eigenfunctions of Laplace's tidal equations which govern fluid motion on a rotating sphere
- King tide – colloquial term for an especially high spring tide, such as a perigean spring tide.
- Lunar Laser Ranging experiment – Measuring the distance between the Earth and the Moon with laser light
- Lunar phase – the shape of the Moon's directly sunlit portion as viewed from Earth
- Raised beach, also known as Marine terrace – A beach or wave-cut platform raised above the shoreline by a relative fall in the sea level
- Mean high water spring
- Mean low water spring – Average level of the spring low tides over a fairly long period of time
- Orbit of the Moon – The Moon's circuit around the Earth
- Primitive equations – equations to approximate global atmospheric flow
- Tidal island – Island accessible by foot at low tide
- Tidal locking – Situation in which an astronomical object's orbital period matches its rotational period
- Tidal prism – The volume of water in an estuary or inlet between mean high tide and mean low tide
- Tidal resonance – Phenomenon that occurs when the tide excites a resonant mode of a part of an ocean, producing a higher tidal range
- Tidal river – River where flow and level are influenced by tides
- Tidal triggering of earthquakes – The idea that tidal forces may induce seismicity
- Tide pool – A rocky pool on a seashore, separated from the sea at low tide, filled with seawater
- Tideline – Surface border where two currents in the ocean converge. Driftwood, floating seaweed, foam, and other floating debris may accumulate
- Tides in marginal seas – Dynamics of tidal wave deformation in the shallow waters of the marginal seas
- Coastal orientation and geometry affect the phase, direction, and amplitude of amphidromic systems, coastal Kelvin waves, as well as resonant seiches in bays. In estuaries, seasonal river outflows influence tidal flow.
- Tide tables usually list mean lower low water (mllw, the 19 year average of mean lower low waters), mean higher low water (mhlw), mean lower high water (mlhw), mean higher high water (mhhw), as well as perigean tides. These are mean values in the sense that they derive from mean data.
- "The moon, too, as the heavenly body nearest the earth, bestows her effluence most abundantly upon mundane things, for most of them, animate or inanimate, are sympathetic to her and change in company with her; the rivers increase and diminish their streams with her light, the seas turn their own tides with her rising and setting, … "
- "Orbis virtutis tractoriæ, quæ est in Luna, porrigitur utque ad Terras, & prolectat aquas sub Zonam Torridam, … Celeriter vero Luna verticem transvolante, cum aquæ tam celeriter sequi non possint, fluxus quidem fit Oceani sub Torrida in Occidentem, … " (The sphere of the lifting power, which is [centered] in the moon, is extended as far as to the earth and attracts the waters under the torrid zone, … However the moon flies swiftly across the zenith ; because the waters cannot follow so quickly, the tide of the ocean under the torrid [zone] is indeed made to the west, …"
- See for example, in the 'Principia' (Book 1) (1729 translation), Corollaries 19 and 20 to Proposition 66, on pages 251–254, referring back to page 234 et seq.; and in Book 3 Propositions 24, 36 and 37, starting on page 255.
- According to NASA the lunar tidal force is 2.21 times larger than the solar.
- See Tidal force – Mathematical treatment and sources cited there.
- "The ocean does not produce tides as a direct response to the vertical forces at the bulges. The tidal force is only about 1 ten millionth the size of the gravitational force owing to the Earth’s gravity. It is the horizontal component of the tidal force that produces the tidal ellipsoid, causing fluid to converge (and bulge) at the sublunar and antipodal points and move away from the poles, causing a contraction there." (...) "The projection of the tidal force onto the horizontal direction is called the tractive force (see Knauss, Fig. 10.11). This force causes an acceleration of water towards the sublunar and antipodal points, building up water until the pressure gradient force from the bulging sea surface exactly balances the tractive force field."
- "While the solar and lunar envelopes are thought of as representing the actual ocean waters, another very important factor must be recognized. The components of the tide-generating forces acting tangentially along the water surface turn out to be the most important. Just as it is easier to slide a bucket of water across a floor rather than to lift it, the horizontal tractive components move the waters toward the points directly beneath and away from the sun or moon far more effectively than the vertical components can lift them. These tractive forces are most responsible for trying to form the ocean into the symmetrical egg-shaped distensions (the tide potential, the equilibrium tide). They reach their maximums in rings 45° from the points directly beneath and away from the sun or moon."
- "... the gravitational effect that causes the tides is much too weak to lift the oceans 12 inches vertically away from the earth. It is possible, however, to move the oceans horizontally within the earth's gravitational field. This gathers the oceans toward two points where the height of the water becomes elevated by the converging volume of water."
- The day is currently lengthening at a rate of about 0.002 seconds per century.
- To demonstrate this Tides Home Page offers a tidal height pattern converted into an .mp3 sound file, and the rich sound is quite different from a pure tone.
- 150 Years of Tides on the Western Coast: The Longest Series of Tidal Observations in the Americas NOAA (2004).
- Eugene I. Butikov: A dynamical picture of the ocean tides
- Tides and centrifugal force: Why the centrifugal force does not explain the tide's opposite lobe (with nice animations).
- O. Toledano et al. (2008): Tides in asynchronous binary systems
- Gaylord Johnson "How Moon and Sun Generate the Tides" Popular Science, April 1934
- NOAA Tides and Currents information and data
- History of tide prediction
- Department of Oceanography, Texas A&M University
- UK Admiralty Easytide
- UK, South Atlantic, British Overseas Territories and Gibraltar tide times from the UK National Tidal and Sea Level Facility
- Tide Predictions for Australia, South Pacific & Antarctica
- Tide and Current Predictor, for stations around the world | https://wiki-offline.jakearchibald.com/wiki/Tides | 21 |
32 | What is meant by GDP
Gross domestic product and gross national product simply explained
Everyone has probably heard the terms “gross domestic product” and “gross national product”. But what is the difference? Basically, the gross domestic product serves as a measure of production for a country, while the gross national product measures the income earned by a country's residents.
However, gross domestic product (GDP) and gross national product (GNP) are interdependent. When calculating the gross national product, one generally starts from the gross domestic product.
The gross domestic product
The gross domestic product (GDP) is a measure of the economic performance of an economy over a certain period of time. It measures the total value of all goods and services that were produced within the national borders of an economy in a year and that are destined for final consumption.
Goods and services that are used as inputs for the production of other goods are therefore not taken into account in GDP.
GDP is the sum of the value of all goods produced in a country, regardless of whether they were produced by residents or foreigners. The gross domestic product can refer to states, but also to other administrative or geographical units, such as the European Union.
Since the gross domestic product is a measure of the economic growth of an economy, it is the most important parameter in national accounts. Typically, GDP is calculated for years and quarters.
The GDP is also used for the economic performance of a country in international comparison.
The gross national product
The gross national product (GNP) is also called gross national income (GNI). The GNP expresses the total value of all goods and services that were produced by the residents of an economy within a year. To calculate the gross national product, one starts from the gross domestic product.
The GNP differs from the GDP in that the earned income and property income that residents earn abroad are added, and the earned income and property income that foreign residents earn in Germany are subtracted.
Specifically, this means that the gross domestic product records the output of residents and foreigners alike, as long as it is produced within the country (the domestic principle), while the gross national product is based only on the residence principle.
So while the gross domestic product includes all products that were created on the territory of a country, the gross national product includes all goods that were produced by residents, regardless of whether this production takes place abroad or domestically.
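In accounting terms the adjustment is a single addition and subtraction. A small illustration (all figures are invented):

```python
# Gross national product from gross domestic product.
gdp = 3_500.0                          # value of production inside the country, bn €
income_residents_abroad = 120.0        # earned and property income of residents, earned abroad
income_foreigners_domestic = 90.0      # earned and property income of foreigners, earned domestically

gnp = gdp + income_residents_abroad - income_foreigners_domestic
print(f"GNP = {gnp:.1f} bn €")         # 3530.0 bn €
```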
A general distinction is made between nominal and real gross national product. When calculating the nominal gross national product, all goods and services produced are included with the prices of the year of production, i.e. at current prices.
In contrast, the real gross national product is based on the prices of a certain base year, so the inflation rate is factored out. Calculating the real GNP means that any increase in the gross national product that is due purely to price increases is disregarded.
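A toy calculation shows the difference between valuing the same output at current prices and at base-year prices (quantities and prices are invented):

```python
quantities = {"cars": 1_000, "bread": 50_000}
prices_current = {"cars": 22_000.0, "bread": 1.20}   # prices of the year of production
prices_base = {"cars": 20_000.0, "bread": 1.00}      # prices of the chosen base year

nominal = sum(quantities[g] * prices_current[g] for g in quantities)
real = sum(quantities[g] * prices_base[g] for g in quantities)
print(f"nominal: {nominal:,.0f}   real: {real:,.0f}   implied price level: {nominal / real:.3f}")
# growth in the real figure reflects changes in quantities only, not in prices
```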
Since 1992, however, the Federal Statistical Office has no longer used gross national product but rather gross domestic product as a measure of growth. It is now assumed that the actual production within a country is more meaningful in terms of development dynamics than the question of who performed the service.
| https://illez.xyz/?post=348 | 21
26 | File Name: properties of ionic covalent and metallic bonds .zip
A covalent bond is a chemical bond that involves the sharing of electron pairs between atoms. These electron pairs are known as shared pairs or bonding pairs , and the stable balance of attractive and repulsive forces between atoms, when they share electrons , is known as covalent bonding. In organic chemistry, covalent bonds are much more common than ionic bonds. In the molecule H 2 , the hydrogen atoms share the two electrons via covalent bonding.
A covalent bond occurs between two non-metals, a metallic bond between two metals, and an ionic bond between a metal and a non-metal. A covalent bond involves the sharing of electrons, metallic bonds rest on the strong attraction between metal ions and their shared, delocalised electrons, and ionic bonds involve the transfer and acceptance of electrons from the valence shell. Chemical bonding is the tendency of atoms to arrange themselves in the most stable pattern by filling their outermost electron shells; this association of atoms forms molecules, ions or crystals. Chemical bonds fall into two categories according to their strength: primary or strong bonds and secondary or weak bonds. Primary bonds are the covalent, metallic and ionic bonds, whereas secondary bonds are dipole-dipole interactions, hydrogen bonds, etc. The modern idea of the chemical bond was developed during the 20th century, following the discovery of the electron and the introduction of quantum mechanics.
Define ionic bond: a chemical bond in which electrons are transferred from a metal atom (which becomes a cation) to a non-metal atom (which becomes an anion). Students will collaborate with their peers as they look to make bonds. The positive ion, called a cation, is listed first in an ionic compound formula, followed by the negative ion, called an anion. Students are then given ten names of compounds and write their formulas (Ionic Compounds: Naming). Students will understand the various aspects of chemical reactions and stoichiometry. Covalent Bonding Video Lecture.
Crystalline solids fall into one of four categories. All four categories involve packing discrete molecules or atoms into a lattice or repeating array, though network solids are a special case. The categories are distinguished by the nature of the interactions holding the discrete molecules or atoms together. Based on the nature of the forces that hold the component atoms, molecules, or ions together, solids may be formally classified as ionic, molecular, covalent network , or metallic. The variation in the relative strengths of these four types of interactions correlates nicely with their wide variation in properties. In ionic and molecular solids, there are no chemical bonds between the molecules, atoms, or ions. The solid consists of discrete chemical species held together by intermolecular forces that are electrostatic or Coulombic in nature.
Included in every 5E lesson is a homework assignment, an assessment, and a modified assessment. This is the introductory lesson to the topic of structure and bonding, so it is more about establishing prior learning and beginning to understand what a bond is. This is one activity from the Chemical Bonding station lab. Describe the implications of electron pair repulsions on molecular shape. Create a Lewis dot structure for an atom, a covalent compound, and an ionic compound. In this activity students build on this knowledge using a Gizmo (ExploreLearning) that walks them through a series of steps showing electrons being removed by one atom and gained by another.
Ionic bond , also called electrovalent bond , type of linkage formed from the electrostatic attraction between oppositely charged ions in a chemical compound. Such a bond forms when the valence outermost electrons of one atom are transferred permanently to another atom. The atom that loses the electrons becomes a positively charged ion cation , while the one that gains them becomes a negatively charged ion anion. A brief treatment of ionic bonds follows. For full treatment, see chemical bonding: The formation of ionic bonds.
Ionic and covalent bonds are the two main types of chemical bonding. A chemical bond is a link formed between two or more atoms or ions. The main difference between ionic and covalent bonds is how equally the electrons are shared between atoms in the bond.
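A common rule of thumb, added here for illustration rather than taken from the text, classifies a bond by the electronegativity difference of the two atoms (Pauling-scale values):

```python
electronegativity = {"Na": 0.93, "Cl": 3.16, "H": 2.20, "O": 3.44, "C": 2.55}

def bond_type(a, b):
    diff = abs(electronegativity[a] - electronegativity[b])
    if diff > 1.7:
        return "mostly ionic"
    if diff > 0.4:
        return "polar covalent"
    return "nonpolar covalent"

for pair in (("Na", "Cl"), ("H", "O"), ("C", "H")):
    print(pair, bond_type(*pair))
# Na-Cl -> mostly ionic, H-O -> polar covalent, C-H -> nonpolar covalent
```

The 1.7 and 0.4 cut-offs are conventional approximations; real bonds fall on a continuum between the ionic and covalent extremes.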
Ionic bonding is a type of chemical bonding that involves the electrostatic attraction between oppositely charged ions , or between two atoms with sharply different electronegativities , and is the primary interaction occurring in ionic compounds. It is one of the main types of bonding along with covalent bonding and metallic bonding. Ions are atoms or groups of atoms with an electrostatic charge. Atoms that gain electrons make negatively charged ions called anions.
Draw the line bond structures of the following types of hydrocarbons using four carbon atoms: a. … Give the molecular formula, the line bond structural formula, the condensed structural formula, and the skeletal structure for pentane (molecular formula: C5H12; line bond structural formula: …). Covalent bonding worksheet: this worksheet clearly explains how to draw dot and cross diagrams for covalent compounds, using Cl2 as an example. Pupils are then asked … Fusion bonding iv.
Use the concept of potential energy to describe how a covalent bond forms. Bond energy is directly proportional to bond order. Atoms form double or triple covalent bonds if they can attain a noble gas structure by doing so. How is an intermolecular force different from a bond? A neutral particle that is made up of atoms joined together by covalent bonds is called a molecule. Chapter 2: atomic structure and interatomic bonding. Chapter 20 vocabulary: continue to work on the Chapter 20 vocab as you work.
There are three types of strong chemical bond – ionic, covalent and metallic. There are also weak intermolecular bonds which hold molecules close to each other.
The properties of a solid can usually be predicted from the valence and bonding preferences of its constituent atoms. Four main bonding types are discussed here: ionic, covalent, metallic, and molecular. Hydrogen-bonded solids, such as ice , make up another category that is important in a few crystals. There are many examples of solids that have a single bonding type, while other solids have a mixture of types, such as covalent and metallic or covalent and ionic. Sodium chloride exhibits ionic bonding. The sodium atom has a single electron in its outermost shell, while chlorine needs one electron to fill its outer shell. Each ion thus attains a closed outer shell of electrons and takes on a spherical shape.
- Ionic Bonds. ▫ Two neutral atoms close to each other can undergo an ionization process in order to obtain a full valence shell. ▫ Due to ionization, electrons are transferred from one atom to the other.
- There are many types of chemical bonds and forces that bind molecules together.
- Metallic bond – the force that holds atoms together in a metallic substance.