score | text | url | year
---|---|---|---|
11 | NASA Scientists Identify Smallest Known Black Hole
If you want to know the universe’s ultimate tough guys, look no further than black holes. These strange objects gobble up gas from their surroundings, and sometimes swallow entire stars. But a black hole’s gravity is so powerful that nothing, not even light, can escape its grasp.
But just as Olympic boxing teams have their flyweights, somewhere out there in the depths of space exists the lightest black hole in the universe. It’s still a tough guy, but it’s smaller and lighter than all other members of its kind.
Astronomers may never find the universe’s lightest black hole, but in results announced on March 31, they have come close. Nikolai Shaposhnikov and Lev Titarchuk, who work at NASA’s Goddard Space Flight Center in Greenbelt, Md., have identified the smallest known black hole in the universe. This black hole would weigh the same as 3.8 of our Suns if it could be put on a giant scale.
The Sun is a huge object, and could contain more than a million Earths. So an object weighing the same as 3.8 Suns might sound like a lot. But it’s a pipsqueak when compared to all other known black holes. Previously, the smallest known black hole would weigh about 6.3 Suns, and some black holes tip the scales at millions or even billions of times that of our Sun.
The new record holder, known as XTE J1650, formed in the center of a dying star. The star’s core was a giant nuclear reactor, generating energy by turning light elements such as hydrogen into heavier elements such as oxygen. But eventually, the reactor ran out of fuel and shut down. The core collapsed due to its own gravity and formed a black hole.
Astronomers think that this process can form black holes down to about 3 times the weight of our Sun. If a star’s core is even smaller than that when it runs out of fuel, it will form another type of object, called a neutron star. So the XTE J1650 black hole is not only the lightest known black hole, it’s close to the smallest possible size for a black hole.
Amazingly, equations from Albert Einstein predict that a black hole with 3.8 times the mass of our Sun would be only 15 miles across -- the size of a city. “This makes the black hole one of the smallest objects ever discovered outside our solar system,” says Shaposhnikov.
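The quoted size can be checked against the Schwarzschild-radius formula, r = 2GM/c². The short calculation below is an illustrative sketch rather than part of the article; the 3.8-solar-mass figure is the one reported above, and the constants are standard physical values.

```python
# Estimate the diameter of a 3.8-solar-mass black hole from the
# Schwarzschild radius r = 2GM/c^2 (the event-horizon radius).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # mass of the Sun, kg

mass = 3.8 * M_sun                 # mass reported for XTE J1650
radius_m = 2 * G * mass / c**2     # Schwarzschild radius in meters
diameter_miles = 2 * radius_m / 1609.34

print(f"Diameter: {diameter_miles:.1f} miles")   # roughly 14 miles, in line with the article
```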
Shaposhnikov and Titarchuk made their discovery by using NASA’s Rossi X-ray Timing Explorer, a small and low-cost satellite that launched in late 1995. Rossi is able to make extremely precise measurements of gas whirling around black holes. By timing the motion of the gas, the two astronomers were able to measure the strength of the black hole’s gravitational field, which tells them how much it weighs.
Shaposhnikov and Titarchuk are presenting their results on Monday, March 31, at the American Astronomical Society High-Energy Astrophysics Division meeting in Los Angeles, Calif. Titarchuk also works at George Mason University in Fairfax, Va., and the U.S. Naval Research Laboratory in Washington, D.C.
Goddard Space Flight Center | http://www.nasa.gov/topics/universe/features/smallest_blackhole.html | 13 |
27 | OBJECTIVES: 1. To investigate relationships among variables.
2. To collect and analyze data.
3. To construct and analyze tables.
4. To acquire a sense of what acceptable and unacceptable approximations are.
5. To envision the effects on graphs of changing the parameters in the algebraic representation of the function.
6. To determine domain and range for a real world problem.
MATERIALS: Cardstock to make telescope
PREPARATION: 1. Students should work in groups of 3.
2. Form the telescope from 2 pieces of cardstock rolled up. Each rolled up piece should be about 25 cm long. Each part should be about 2 cm in diameter.
3. Insert one of the rolls into the other to make the telescope. The shortest telescope will be 25 cm long, and the longest should be approximately 50 cm long.
ACTIVITY: There are 3 different investigations for this activity. Two of the investigations deal with direct variation (linear functions). One of the investigations deals with inverse variation (hyperbolic functions).
General Instructions for the 3 Investigations:
In all 3 investigations, we are interested in examining how changing different parameters for our telescope changes how much we are able to view through the telescope. In the experiment, a meterstick is secured to a wall. One student stands a given distance from the wall and looks through the telescope. Another student places 2 fingers on the meterstick to mark a range. The student looking through the telescope will tell what range he is able to see. The third student should record the number of centimeters seen.
Investigation 1: For this experiment, the students should keep the diameter and the length of the telescope constant and vary the distance the student is standing from the wall from 0 to 5 meters by increments of one-half meter.
Investigation 2: For this experiment, the students should keep the distance from the wall and the length of the telescope constant and vary the diameter of the telescope from 2 to 6 cm by increments of one-half centimeter.
Investigation 3: For this experiment, the students should keep the distance from the wall and the diameter of the telescope constant and vary the length of the telescope from 25 to 50 cm by increments of 1 cm.
EXPLORATION: After recording the data, the class could be led into the discussion of what a function is and what dependent and independent variables are. Students should be able to identify and classify the variables in the 3 investigations.
Questions: * What is constant?
* What things are changing?
* What is the independent variable?
* What is the dependent variable?
* Does the dependent variable increase, decrease, or remain constant as the independent variable increases?
Have students conjecture based on the data and their discussion of these questions what the graphs of the 3 functions would look like.
To test the conjectures about the graphs, students can plot their data from the 3 investigations on a graphing calculator or preferably using a spreadsheet application. The first 2 investigations should produce a linear graph, while the third graph should look like a hyperbolic curve. Three possible graphs follow:
From the spreadsheet, the teacher can model the investigations with a GSP demonstration. The GSP sketch could be made available for each student to change the parameters on the telescope and take note of the changes in the amount of wall that can be seen. Students could construct this demonstration on their own depending on their familiarity with GSP.
Question: How can we write an expression for the relationship between the variables in each of the investigations?
To help answer this question, students can model the telescope experiment with a GSP sketch.
In the sketch, the eye's view forms similar triangles, ELK and EOP. We then have the proportion LK / OP = EM / EN. Solving for the CM Seen on the Wall (OP) we have,
OP = (LK*EN) / EM. Let LK = d, EN = w, and EM = l.
Questions: * In each investigation, what are the constants?
* If we let x = the independent variable and y = the dependent variable, what will the expressions for the functions look like?
#1: y = (d/l)(x) with d, l constant
#2: y = (w/l)(x) with w, l constant
#3: y = (dw)/x with d, w constant
Once the students have discovered these equations, they can go back to the spreadsheet and enter the equation as a formula to generate points and also graph. With the spreadsheet they can also change the constants to notice the changes in the data and graphs. The following graphs are from Microsoft Excel:
Students can also graph the equations on Algebra Xpresser and investigate for different constant values. The graphs could lead to a discussion of domain and range for the real world problem, and the restrictions that the real world puts on the function. Here are the Xpresser graphs for the 3 investigations:
[Xpresser graphs for investigations #1, #2, and #3]
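For classes without a spreadsheet or Xpresser handy, the three investigation functions can also be tabulated with a short script. The sketch below is illustrative only; the constant values chosen for the diameter d, length l, and wall distance w are made-up placeholders, not measurements from the activity.

```python
# Tabulate the three telescope functions from the activity:
#   #1: y = (d/l)*x  (vary distance to wall x, with d, l constant)
#   #2: y = (w/l)*x  (vary diameter x, with w, l constant)
#   #3: y = (d*w)/x  (vary telescope length x, with d, w constant)

d = 2.0    # telescope diameter in cm (placeholder value)
l = 30.0   # telescope length in cm (placeholder value, mirrors the article's notation)
w = 300.0  # distance from the wall in cm (placeholder value)

investigation_1 = [(x, d / l * x) for x in range(0, 501, 50)]        # x = wall distance in cm, 0.5 m steps
investigation_2 = [(x / 2, w / l * (x / 2)) for x in range(4, 13)]   # x = diameter in cm, 0.5 cm steps
investigation_3 = [(x, d * w / x) for x in range(25, 51)]            # x = telescope length in cm

for label, data in [("#1", investigation_1), ("#2", investigation_2), ("#3", investigation_3)]:
    print(label, data[:3], "...")  # the first two tables are linear, the third is hyperbolic
```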
Cooney, Thomas J. Integrating Mathematics Pedagogy and Content in Teacher Education: Functions. 1993 | http://jwilson.coe.uga.edu/emt669/Student.Folders/Jeon.Kyungsoon/IU/rational2/Telescope.html | 13 |
12 | Thursday, March 2, 2006
Influenza viruses constantly circulate through the human population, and influenza cases occur sporadically throughout the year. Influenza epidemics, in which the number of cases peaks sharply, usually occur in winter months. These seasonal influenza epidemics cause an annual average of about 36,000 deaths in the United States, mostly among people aged 65 years and over and those with chronic health conditions.
As influenza viruses circulate, the genes that determine the structure of their surface proteins undergo small changes. As these mutations accumulate—a process called “antigenic drift”—immunity created by prior exposure to older circulating influenza viruses or by vaccination can no longer reliably and optimally ward off infection. Antigenic drift is thus the basis for the predictable patterns of seasonal influenza seen in most years, and is the reason that we must update influenza vaccines annually.
Influenza viruses also can change more dramatically. For example, viruses sometimes emerge that can infect species other than their natural animal reservoirs, typically migratory birds. These viruses may begin to infect domestic poultry, farm animals such as pigs, or, very rarely, humans. When an animal influenza virus develops the ability to infect humans, the result is usually a “dead-end” infection that cannot readily spread further in the human population. However, the virus could mutate in ways that allow human-to-human transmission of an animal influenza virus to occur more easily. Furthermore, if an animal influenza virus and a human influenza virus were to simultaneously co-infect a person or animal, the two viruses could exchange genes, resulting in a virus that is readily transmissible between humans, and against which the human population may have no pre-existing immunity. When such an “antigenic shift” occurs by either of these mechanisms, a global influenza pandemic can result. After a significant proportion of the global population has been exposed to the new virus and has thereby acquired immunity to it, the death rate would almost certainly fall and drifted variants of the new virus would become the new seasonal viruses.
Historically, pandemic influenza is a proven threat. In the 20th century, influenza pandemics occurred in 1918, 1957, and 1968. The pandemics of 1957 and 1968 were serious infectious disease events that killed approximately two million and 700,000 people worldwide, respectively. The 1918-1919 pandemic, however, was catastrophic: epidemiologists estimate that it killed more than 50 million people worldwide, including more than 500,000 people in the United States, and caused enormous social and economic disruption. In all three of these pandemics, for reasons that remain unclear, a much greater proportion of young adults were killed than is typical of seasonal influenza. Given this history, we can expect that a new influenza virus will emerge and another pandemic will occur at some point in the future. Although the precise timing of the next pandemic remains unknown, when it arises, it is likely to spread rapidly due to the speed of modern air travel. The consequences will be severe throughout the world, in developed nations and especially in underdeveloped regions that do not have adequate public health systems.
Of known influenza viruses, the highly pathogenic H5N1 avian influenza that is currently spreading among domestic and migratory birds in Asia, Africa, the Middle East and Europe is of greatest concern. Although the H5N1 virus is primarily an animal pathogen, it nonetheless has infected more than 170 people; more than half of the people diagnosed with H5N1 avian influenza infection have died. At this time, the virus does not efficiently spread from animals to humans, and it spreads even less efficiently from one person to another. However, if the H5N1 virus mutates further or exchanges genes with a human influenza virus to acquire the ability to spread from person to person as efficiently as the viruses that cause seasonal influenza epidemics, the feared human pandemic could become a reality. The degree of threat from such a virus would depend on the extent to which the virus retains its current virulence and how transmissible it becomes.
On November 1, 2005, the President announced the National Strategy for Pandemic Influenza, and the next day U.S. Department of Health and Human Services (HHS) Secretary Michael O. Leavitt released the HHS Pandemic Influenza Preparedness and Response Plan, an integral component of the National Strategy. These two documents are part of a blueprint for a coordinated national strategy to prepare for and respond to a human influenza pandemic which will include a National Implementation Strategy and preparedness and response plans from other federal agencies. Within HHS, the National Institutes of Health (NIH), and the National Institute of Allergy and Infectious Diseases (NIAID) in particular, were given primary responsibility for the conduct of scientific research and clinical trials to foster product development, particularly vaccines and antiviral drugs, to prepare our nation for a potential human influenza pandemic.
In my testimony today, I will describe some of the ongoing scientific research and development efforts of the NIH, much of which is in collaboration with the private sector to counter the threat of pandemic influenza, focusing on projects and programs that will help to ensure that effective influenza vaccines and antiviral drugs will be available to counter any human influenza virus with pandemic potential that could emerge. I will close with a brief discussion of how our efforts to prepare for pandemic influenza are closely tied to those directed at seasonal influenza.
Advanced development efforts to create an effective H5N1 influenza vaccine are currently based on an H5N1 virus isolated from a Vietnamese patient that was infected from a chicken in 2004. Since there is no pandemic among humans, this vaccine is referred to as a pre-pandemic H5N1 vaccine. Should a pandemic virus emerge that can be easily transmitted among humans, a vaccine based on that specific strain may have to be developed; until that time, we cannot delay vaccine preparedness and development of pre-pandemic H5N1 vaccine candidates is proceeding rapidly. This effort serves two important purposes. As the H5N1 virus mutates, the imperfectly matched prototype vaccines may offer enough protection to prime the immune system and reduce the severity of infection. This could buy precious time while a vaccine that closely matches the pandemic strain is produced and distributed. Producing prototype H5N1 vaccines also provides a trial run in developing the infrastructure and production capacity to manufacture enough vaccine should a worldwide pandemic ensue.
In early 2004, NIAID-supported researchers used a technology called reverse genetics to create a H5N1 reference vaccine strain from the Vietnamese isolate. NIAID then contracted with sanofi pasteur and Chiron Corporation to use this reference strain to manufacture pilot lots of inactivated virus vaccine for use in clinical trials. These vaccine candidates are now undergoing clinical testing in healthy individuals: adults, elderly people, and children.
Preliminary results from clinical trials of the H5N1 pre-pandemic vaccine provide both good and sobering news. The good news is that the vaccine is well tolerated, and induces an immune response that is predictive of being protective against the H5N1 virus. The sobering news is that two large doses were needed to elicit this level of immune response. The requirement for larger than normal doses of vaccine essentially reduces the amount of vaccine we are able to produce in a given timeframe. However, preliminary results from a Phase I clinical trial of a candidate vaccine for H9N2 influenza—another avian virus that has caused human deaths—indicate that addition to the vaccine of a substance called an adjuvant can increase the immune response and thereby reduce the required dose. Clinical trials of H5N1 pre-pandemic candidates employing adjuvants and other dose-saving strategies are now in progress.
When a pandemic virus is identified and isolated, making a sufficient quantity of pandemic vaccine as quickly as possible will be a matter of great urgency. To ensure that the manufacturing techniques, procedures, and conditions used for large-scale production yield a satisfactory product, HHS contracted with sanofi pasteur and Chiron to use standard, egg-based techniques to produce inactivated H5N1 vaccine for the Strategic National Stockpile. Moving to large-scale production of the candidate vaccine in parallel with clinical testing of pilot lots is unusual, and an indication of the urgency with which we are addressing H5N1 vaccine development. The doses of H5N1 pre-pandemic vaccine being produced could be used to vaccinate certain at risk populations in affected areas before a pandemic vaccine becomes available.
Although egg-based manufacturing methods have served us well for more than 40 years, they are logistically complex, can fail if the vaccine strain will not grow efficiently, and cannot be rapidly expanded in response to increased demand for vaccine. The best hope for building a more reliable domestic manufacturing capacity that could be rapidly mobilized in response to the emergence of a pandemic virus lies in expanding and accelerating the development of manufacturing methods that grow the vaccine strain in cell culture. It is important to note, however, that while the technology for producing influenza vaccine in cell cultures is promising, successful development of the production methods and licensure of the product are years in the future and by no means guaranteed. Moreover, how quickly we reach the production goal of 300 million doses of pandemic vaccine within a six-month time frame will depend to some extent on the success of efforts to develop adjuvants and other dose-sparing techniques that reduce the amount of vaccine needed to protect the U.S. population.
In addition to inactivated virus vaccines, NIAID is collaborating with industry to pursue several other vaccine strategies. From the mid-1970s to the early 1990s, for example, NIAID intramural and extramural researchers developed a cold-adapted, live attenuated influenza vaccine strain that later became the influenza vaccine now marketed by MedImmune, Inc. as FluMist®. NIAID intramural researchers are now working with colleagues from MedImmune under a Cooperative Research and Development Agreement to produce and test a library of similar live vaccine candidates against all known influenza strains with pandemic potential, allowing a head start and faster response should any of these strains actually appear. Other strategies under development include recombinant subunit vaccines, in which cultured cells are induced to make various influenza virus proteins that are then purified and used in a vaccine; DNA vaccines, in which influenza genetic sequences are injected directly into a person to stimulate an immune response; and vector approaches, in which the genes of influenza virus are inserted into another, harmless virus (the vector) that is injected as a carrier to present the influenza proteins to the vaccine recipient.
The goal of a particularly important ongoing effort is to develop a vaccine that raises immunity to parts of the influenza virus that vary very little from season to season and from strain to strain. Although this is a difficult task, such a “universal” influenza vaccine would not only provide continued protection over multiple seasons, it might also offer considerable protection against a newly emerged pandemic influenza virus and thus substantially increase the immunity of the population—making the country far less vulnerable to a new influenza virus.
NIAID also supports research to identify new anti-influenza drugs through the screening of existing drug candidates in cell-culture systems and in animal models; in the past year, seven promising candidates have been identified. Efforts are also underway to design new drugs that precisely target viral proteins and inhibit their functions. NIAID is also developing novel, broad-spectrum therapeutics that might work against many influenza virus strains. Some of these target viral entry into human cells, while others specifically attack and degrade the influenza virus genome. Studies are also in progress to evaluate long-acting next-generation neuraminidase inhibitors, as are development and testing in animals of combination antiviral regimens against H5N1 and other potential pandemic influenza strains.
NIAID International Influenza Research
Because a pandemic influenza virus could emerge anywhere in the world, NIAID helps to conduct global surveillance and molecular analysis of circulating animal and human influenza viruses. For example, NIAID funds a long-standing program to detect the emergence of influenza viruses with pandemic potential, in which researchers in Hong Kong and at St. Jude Children’s Research Hospital collect and analyze influenza viruses from wild birds and other animals and generate candidate vaccines against them. A recent genetic sequence analysis of some of the viruses collected through this program yielded important clues that may explain why H5N1 avian viruses cause such severe disease in humans, which may in turn yield new avenues for the creation of effective vaccines and treatments. In 2004, NIAID launched a broader effort to determine the complete genetic sequences of thousands of influenza virus isolates from throughout the world and to rapidly provide these sequence data to the scientific community. This program, called the Influenza Genome Sequencing Project, is a collaboration between NIAID, the Centers for Disease Control and Prevention (CDC), and several other organizations; it will enable scientists to better understand the emergence of influenza epidemics and pandemics by observing how influenza viruses evolve as they spread through the population and by matching viral genetic characteristics with virulence, ease of transmissibility, and other clinical properties. To date, the complete genetic sequences of 831 influenza viruses have been published.
Fortunately, much of the research on influenza vaccines and antivirals that has been undertaken in response to the emergence of H5N1 avian influenza is directly applicable to both seasonal and pandemic preparedness, and efforts to improve our response to one will invariably improve our ability to manage the other.
Thank you for the opportunity to testify before you today. I would be pleased to answer any questions that you may have.
Last Revised: March 3, 2006 | http://www.hhs.gov/asl/testify/t060302a.html | 13 |
24 | A vector product x is a binary operation with the following properties:
a x b is linear in a and b
a x b is orthogonal to both a and b
|a x b|² + |a · b|² = |a|²|b|²
From these properties we can in addition deduce
|a x a|² = |a|⁴ - |a · a|² = 0, so a x a = 0
0 = (a + b) x (a + b) = a x b + b x a, so x is antisymmetric
Vector products only exist in R3 and R7. In 3 dimensions it is uniquely defined, up to sign: if x is a vector product then * is a vector product iff a * b = ±a x b. By convention the sign is chosen so that
(a1, a2, a3) x (b1, b2, b3) = (a2b3-a3b2, -a1b3+a3b1, a1b2-a2b1)
In R3 the vector product (often called cross product because of the symbol used, even though sometimes a wedge is used rather than a cross) is actually quite useful, since the 2-dimensional subspace spanned by a and b is defined by a x b (in English: the vector product gives the normal to the plane containing the two vectors). Also a x b = 0 is a useful necessary and sufficient condition for a and b to be proportional.
A peculiar property of the vector product is that the result is in fact not a vector. If we make a coordinate transformation that is orientation reversing (i.e. we change to a new coordinate frame using a reflection) then there will be an extra sign change in a x b (since a x b depends on the orientation of a and b) that would not be there if it was a vector. Therefore the vector product is in fact a vector density. The distinction is not a very important one though.
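These properties are easy to spot-check numerically for the three-dimensional product. The following is a small illustrative sketch, not part of the original write-up, using NumPy:

```python
import numpy as np

# Spot-check the defining properties of the R^3 vector (cross) product
# for a pair of arbitrary vectors.
a = np.array([1.0, -2.0, 3.0])
b = np.array([4.0, 0.5, -1.0])

ab = np.cross(a, b)

# Orthogonality: a x b is perpendicular to both a and b.
print(np.dot(ab, a), np.dot(ab, b))          # both ~0

# Lagrange identity: |a x b|^2 + |a . b|^2 = |a|^2 |b|^2.
lhs = np.dot(ab, ab) + np.dot(a, b) ** 2
rhs = np.dot(a, a) * np.dot(b, b)
print(np.isclose(lhs, rhs))                  # True

# Antisymmetry and a x a = 0, as deduced in the text.
print(np.allclose(np.cross(b, a), -ab))      # True
print(np.allclose(np.cross(a, a), 0.0))      # True
```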
We can define a vector product on R7 by
(a1, a2, a3, a4, a5, a6, a7) x (b1, b2, b3, b4, b5, b6, b7) =
The vector product in R7 is less obviously applicable. It should be emphasized that, unlike in three dimensions, it is not unique: there are many different vector products compatible with the standard inner product. The symmetry group of the vector product is a subgroup of SO(7) called G2.
There is a connection between the vector product and normed division algebras. The vector product on R3 can be regarded as the imaginary part of the multiplication of the four-dimensional algebra of quaternions, while the vector product on R7 is related to the octonions. | http://everything2.com/title/vector+product | 13 |
24 | Watch this video for a step-by-step demonstration of how to express a fraction as a decimal by doing long division.
Related Games and Subjects
Use and Find the Range of a Data Set (Learn Zillion)
In this interactive lesson, use the range to describe the distribution of data.
Describe the Spread of Data by Finding Range, Interquartile Range, and Mean Absolute Deviation (Learn Zillion)
Find the range, interquartile range, and mean absolute deviation to describe the spread of data.
Summarize the Spread of Data Using Range and Mean Absolute Deviation (Learn Zillion)
Calculate and examine the range and mean absolute deviation to summarize the spread of data.
Describe the Distribution of Data Using the Mean Absolute Deviation (Learn Zillion)
In this interactive lesson, use the mean absolute deviation to describe the distribution of data.
Find the Mean of a Data Set (Learn Zillion)
In this interactive lesson, find the mean when you describe the distribution of data.
Which 2 Dimensional Shape is Being Described? (IXL)
Practice identifying the correct 2-dimensional shape using the clues given.
Summarize the Center of Data with a Single Number Using Mean, Median, and Mode (Learn Zillion)
Examine mean, median, and mode to learn how to summarize the center of data with a single number.
Describe Attributes of a Data Set by Analyzing Line Plots, Histograms, and Box Plots (Learn Zillion)
Analyze line plots, histograms, and box plots to describe attributes being measured.
Analyze the Shape of a Graph to Describe the Distribution of Data (Learn Zillion)
In this interactive lesson, analyze the shape of a graph to describe distribution of data.
Asking Statistical Questions (Learn Zillion)
In this lesson, predict variability of responses when you create a statistical question.
Determine the Number of Observations in a Set of Data by Looking at Histograms and Line Plots (LearnZillion)
Look at histograms and line plots to determine the number of observations in a data set. | http://powermylearning.org/zh-hans/content/advanced-ratio-problems-khan | 13 |
13 | Beyond "Getting the Answer": Calculators Help Learning Disabled Students Get the Concepts
By: Center for Implementing Technology in Education (CITEd) (2007)
Research has much to tell us about using calculators for instruction. In a review of studies that examined the use of calculators in K-12 classrooms, Ellington (2003) found that calculator use was associated with better operational and problem-solving skills. In addition, students who had access to calculators had better attitudes toward math. But when exactly should calculators be used and for what purpose? This Info Brief summarizes Thompson and Sproule's (2000) "Calculator Decision-Making Flow Chart" and uses the principles of Universal Design for Learning (UDL) to clarify how calculator usage helps students with learning disabilities understand math concepts.
Calculators: To use or not to use
Don't calculators just give students the answer? Unfortunately, this stance on calculators prompts many teachers, even those in early grades, to opt out of using calculators in the classroom. When the goal of instruction is to help students practice computation skills, then this decision to not use a calculator makes sense. When teachers want students to engage in higher-order thinking such as solving problems, exploring patterns, conducting investigations, and working with real-world data, the use of calculators can benefit all students, especially those with learning disabilities who might otherwise be unable to participate in these engaging activities. This decision making process is illustrated in Thompson and Sproule's (2000) flow chart (see Figure 1*).
Figure 1. Calculator Decision-Making Flow Chart*
While the performance of students with disabilities or math difficulties is rarely explicitly discussed in the studies on calculator use, CITEd has extrapolated from research to provide the following ways in which calculators can support learning math for students who struggle. (See more research at www.cited.org.) These skills are presented through the lens of UDL to clarify how this approach can help students with learning disabilities understand math concepts.
Universal Design for Learning (UDL)
Universal Design for Learning is an educational framework that optimizes opportunities for all individuals to gain knowledge, skills, and enthusiasm for learning (Meyer & Rose, 2002; Rose & Meyer, 2006; Rose, Meyer, & Hitchcock, 2005). The "universal" in Universal Design for Learning (UDL) does not imply one optimal solution for everyone, but instead underscores the need for inherently flexible, customizable content, assignments and activities, and assessments characterized by:
- Multiple representations of information — as there is no single method for the presentation of information that will provide equal access for all learners (Recognition Principle);
- Multiple methods of action and expression — as there is no single method of expression that will provide equal opportunity for all students (Strategic Principle); and
- Multiple means of engagement — as there is no single way to ensure that all children are engaged and motivated in a learning environment (Affective Principle).
The term "universal design" is borrowed from the architectural concept of the same name, which called for curb cuts, automatic doors and other architectural features to be built into the design to avoid costly after-the-fact adaptations for individuals with disabilities. UDL applies the same concept to learning — creating a curriculum with numerous built-in features to meet the learning needs of a wide range of students, including those with disabilities and special talents.
As a way of providing multiple means of representation, calculators can be an alternative to traditional paper and pencil computations for students with learning disabilities to explore principles and procedures in mathematics.
Calculators can help students with disabilities explore math concepts. In the early grades, students can use them to test number concepts such as counting (using the automatic constant to add 1 repeatedly), number relationships (more than, less than), and magnitudes (100 is much larger than 10). For many students these basic concepts are difficult to grasp, and the visual nature of the calculator display, along with the speed and accuracy with which they work, can help students develop their own mental "pictures" of number concepts. Similarly, graphing calculators can support older students with disabilities to confirm hypotheses, understand geometric principles, and connect visual, numeric, and symbolic representations of algebraic equations.
Remembering the steps in a math procedure is another common problem for students with disabilities. Multiple line displays, such as those on graphing calculators or two-line calculators used with younger children, appear to be particularly beneficial in understanding math concepts and procedures because students can see the steps of the solution, rather than just the answer (St. John & Lapp 2000).
Calculators provide students an alternative to paper and pencil methods for building calculations and fluency strategies.
The typical special education curriculum in mathematics focuses on basic computation skills, not the problem-solving skills that are essential for success in the general education math curriculum. Yet many students who struggle to learn basic facts have the potential to learn higher-level skills and concepts. By learning to use calculators effectively, these students can focus their efforts on the math concepts, rather than struggling to perform computations. Calculators can give students with disabilities access to exploring higher-level math concepts.
Recall of math facts is difficult for many students with disabilities; the errors they make in recalling math facts can result in inaccuracies at all levels of computation. Calculators can help students become more fluent and accurate with math facts by providing feedback (Graham & Thomas, 2000).
Underlying the scaffolds calculators provide for students' recognition and strategic learning is the support they provide for affective learning. Calculators can help learning disabled students participate in rigorous problem-solving activities that might otherwise be too frustrating for these learners. Learners who function best when doing something physical often enjoy the tactile aspect of working with calculators.
The use of calculators in the classroom should be limited to the teacher's professional judgment, but when used, their potential is infinite. When teaching or assessing computation skills, calculators might not be an appropriate support for students to use. But when teachers balance this curriculum with real-world problem-solving activities, calculators can provide an appropriate tool to balance the challenge of these endeavors with the support students with learning disabilities need.
Click the "References" link above to hide these references.
Ellington, A. J. (2003). A meta-analysis of the effects of calculators on students' achievement and attitude levels in precollege mathematics classes. Journal for Research in Mathematics Education, 34(5), 433-463.
Graham, A.T. & Thomas, M.O.J. (2000). Building a versatile understanding of algebraic variables with a graphic calculator. Educational Studies in Mathematics, 41(3), 265-282.
Rose, D. H., & Meyer, A. (2006). A practical reader in Universal Design for Learning. Cambridge, MA: Harvard Education Press.
St. John, D. & Lapp, D.A. (2000). Developing numbers and operations with affordable handheld technology. Teaching Children Mathematics, 7(3), 162-164.
Thompson, A., & Sproule, S. (2000). Deciding when to use calculators. Mathematics Teaching in the Middle School, 6, 126-29.
A "Tech Works" brief from the National Center for Technology Innovation (NCTI) and the Center for Implementing Technology in Education (CITEd). * Figure 1: Thompson, T., & Sproule, S. (2005). Calculators for students with special needs. Teaching Children Mathematics, 11(7), 391-395. | http://www.ldonline.org/article/Beyond_%22Getting_the_Answer%22%3A_Calculators_Help_Learning_Disabled_Students_Get_the_Concepts | 13 |
10 | Amino acids release water molecules as they combine chemically to form proteins. According to this behavior, known as Le Chatelier's principle, it is not possible for a reaction that gives off water (a so-called condensation reaction) to take place in an environment containing water. (See Le Chatelier's Principle, the.) Therefore, the oceans—where evolutionists say that life began—are definitely unsuitable places for amino acids to combine and produce proteins.
Given this “water problem” that so demolished all their theories, evolutionists began to construct new scenarios. Sydney Fox, the best-known of these researchers, came up with an interesting theory to resolve the difficulty. He theorized that immediately after the first amino acids had formed in the primitive ocean, they must have been splashed onto the rocks by the side of a volcano. The water in the mixture containing the amino acids must then have evaporated due to the high temperature in the rocks. In this way, amino acids could have distilled and combined—to give rise to proteins.
But his complicated account pleased nobody. Amino acids could not have exhibited a resistance to heat of the kind that Fox proposed. Research clearly showed that amino acids were destroyed at higher temperatures. Even so, Fox refused to abandon his claim.
Under the influence of Miller's scenario, Sydney Fox combined certain amino acids to form the molecules above, which he called proteinoids. However, these useless amino acid chains had nothing to do with the real proteins that make up living organisms. In fact, all his endeavors confirmed that life cannot be created in the laboratory, let alone come into being by chance.
He combined purified amino acids by heating them in a dry environment in the laboratory under very special conditions. The amino acids were duly combined, but he still obtained no proteins. What he did obtain were simple, disordered amino-acid sequences, bound to one another in a random manner, that were far from resembling the proteins of any living thing. Moreover, had Fox kept the amino acids at the same temperature, the useless links that did emerge would have immediately broken down again.183
Another point that makes his experiment meaningless is that Fox used pure amino acids from living organisms, rather than those obtained in the Miller Experiment. In fact, however, the experiment, claimed to be an extension of the Miller Experiment, should have continued from the conclusion of that experiment. Yet neither Fox nor any other researcher used the useless amino acids that Miller produced.184
This experiment of Fox’s was not received all that positively by evolutionist circles because it was obvious that the amino acid chains (proteinoids) he obtained were not only meaningless, but could not have emerged under natural conditions. In addition, proteins—the building blocks of life—had still not been obtained. The problem of proteins had still not been solved.
Sydney Fox and the other researchers managed to unite the amino acids in the shape of “proteinoids” by using very special heating techniques under conditions which in fact did not exist at all in the primordial stages of Earth. Also, they are not at all similar to the very regular proteins present in living things. They are nothing but useless, irregular chemical stains. It was explained that even if such molecules had formed in the early ages, they would definitely be destroyed.185
The proteinoids that Fox obtained were certainly far from being true proteins in terms of structure and function. They were as different from proteins as a complex technological device is from a heap of scrap metal.
Furthermore, these irregular collections of amino acids had no chance of surviving in the primitive atmosphere. Under the conditions of that time, destructive chemical and physical effects produced by the intense ultraviolet rays reaching the Earth and by uncontrolled natural conditions would have broken down these proteinoids and made it impossible for them to survive. Because of Le Chatelier's principle, there can be no question of these amino acids being underwater where ultraviolet rays could not reach them. In the light of all these facts, the idea that proteinoid molecules represented the beginning of life increasingly lost all credibility among scientists.
183 Richard B. Bliss and Gary E. Parker, Origin of Life, California, 1979, p. 25.
185 S. W. Fox, K. Harada, G. Kramptiz, G. Mueller, “Chemical Origin of Cells,” Chemical Engineering News, June 22, 1970, p. 80. | http://harunyahya.com/en/works/16386/Fox-Experiment-the | 13 |
18 | The arithmetic mean is what is commonly called the average: When the word "mean" is used without a modifier, it can be assumed that it refers to the arithmetic mean. The mean is the sum of all the scores divided by the number of scores. The formula in summation notation is: μ = ΣX/N where μ is the population mean and N is the number of scores. If the scores are from a sample, then the symbol M refers to the mean and N refers to the sample size. The formula for M is the same as the formula for μ.
The mean is a good measure of central tendency for roughly symmetric distributions but can be misleading in skewed distributions since it can be greatly influenced by extreme scores. Therefore, other statistics such as the median may be more informative for distributions such as reaction time or family income that are frequently very skewed.
The sum of squared deviations of scores from their mean is lower than their squared deviations from any other number.
For normal distributions, the mean is the most efficient and therefore the least subject to sample fluctuations of all measures of central tendency.
The formal definition of the arithmetic mean is µ = E[X] where μ is the population mean of the variable X and E[X] is the expected value of X.
For a data set, the mean is just the sum of all the observations divided by the number of observations. Once we have chosen this method of describing the communality of a data set, we usually use the standard deviation to describe how the observations differ. The standard deviation is the square root of the average of squared deviations from the mean.
The mean is the unique value about which the sum of squared deviations is a minimum. If you calculate the sum of squared deviations from any other measure of central tendency, it will be larger than for the mean. This explains why the standard deviation and the mean are usually cited together in statistical reports.
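As a quick illustration of that minimizing property, the short sketch below (an illustrative addition, using a made-up, income-like data set) compares the sum of squared deviations taken about the mean with the sum taken about the median and about a nearby value:

```python
# Show numerically that the sum of squared deviations (SSD) is
# minimized at the mean, using a small skewed data set.
data = [22, 25, 27, 30, 31, 35, 40, 48, 60, 250]

mean = sum(data) / len(data)
sorted_data = sorted(data)
mid = len(sorted_data) // 2
median = (sorted_data[mid - 1] + sorted_data[mid]) / 2  # even-length data set

def ssd(values, center):
    """Sum of squared deviations of the values about a chosen center."""
    return sum((x - center) ** 2 for x in values)

print("mean:", mean, "median:", median)
print("SSD about mean:  ", ssd(data, mean))
print("SSD about median:", ssd(data, median))    # always >= SSD about the mean
print("SSD about mean+1:", ssd(data, mean + 1))  # also larger
```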
The mean may often be confused with the median or mode. The mean is the arithmetic average of a set of values, or distribution; however, for skewed distributions, the mean is not necessarily the same as the middle value (median), or most likely (mode). For example, mean income is skewed upwards by a small number of people with very large incomes, so that the majority have an income lower than the mean. By contrast, the median income is the level at which half the population is below and half is above. The mode income is the most likely income, and favors the larger number of people with lower incomes. The median or mode are often more intuitive measures of such data.
That said, many skewed distributions are best described by their mean - such as the Exponential and Poisson distributions.
An amusing example
Most people have an above average number of legs - think about it. The mean number of legs is going to be less than 2 (because there are people with one leg and people with no legs). The mean is probably 1.999997 or some such figure. So since most people have two legs, they have an above average number! | http://www.drpeterjdadamo.com/wiki/wiki.pl/Arithmetric_Mean | 13 |
20 | Many different classes of celestial bodies are orbiting the Sun. Some have unique color combinations that might provide a clue as to their origin.
Oct 06, 2009
In the deepest regions of the solar system, billions of kilometers from the Sun, are several asteroid-sized icy rocks that have been difficult for astronomers to classify. In a previous Picture of the Day article about Kuiper Belt Objects, it was noted that the largest of the planetoids, including Pluto and Charon, are described by conventional theories as nebular condensates left over after the major planets formed.
Scientists have detected other chunks of "debris" like Chiron, a centaur-class planetoid 170 kilometers in diameter, by using larger, more sensitive telescopes. Centaurs take their group name from Chiron, the tutor of Achilles, a mythical half-man, half-horse. Something that makes the centaurs so interesting to researchers is the colors that have been deduced from luminosity measurements. Most are dull gray, but there are some blue-green centaurs and 5145 Pholus is rust-red. Nothing in the current theoretical lexicon prepared the astronomers for the color variations.
As conventional theories propose, centaurs could originate in the Kuiper Belt. Neptune's gravity might be strong enough to perturb the orbits of some KBOs, pulling them out of the Kuiper Belt's main region about 500 billion kilometers from the Sun and sending them into proximity with the other gas giants, where they are slung into eccentric orbits.
Because of their orbital instability, they are thought to stay near the outer planets for only a few tens of millions of years. Theories based on their elliptical orbits indicate that some centaurs could eventually be ejected from the solar system entirely, whereas the gas giant planets might consume others. Other centaurs are speculated to fall into the inner solar system, where they transform into short-period or Jupiter-family comets.
The Jupiter-family comets move at high velocities, revolving every 20 years or less, with most solar orbits taking about 8 years. Some astronomers have suggested that the short-period comets might also be accelerated back into the outer solar system if they catch a "gravity boost" from Jupiter, once again becoming centaurs. Chiron itself manifests a coma of gas and dust whenever it reaches its closest approach to the Sun, although it does not grow a tail. Various centaurs exhibit this anomalous behavior, so they are sometimes referred to as asteroid/comets.
Centaurs are very faint even with a 10-meter optical telescope, so spectrographic analysis is impossible. However, by passing the gathered light through three different filters, a ratio of brightness in the three bands reveals the spectral energy distribution, which is interpreted as color. Why do the centaurs have such color variations? No one is sure at this point. Surface composition is one theory, and deposition of external material from "meteor polishing" is another.
The Electric Universe suggests a reason for the different colored centaurs as well as for the different chemical compositions that make up rocky planets and moons. In a plasma cosmogony hypothesis, the stars are formed when cosmic Birkeland currents twist around one another, creating z-pinch regions that compress the plasma into a solid. Laboratory experiments have shown that such compression zones are the most likely candidates for star formation and not collapsing nebulae, which is the 18th century theory to which astrophysicists still cling.
When the stars are born, they are most likely under extreme electrical stress. If such is the case, they will split into one or more daughter stars, thereby equalizing their electrical stress.
Electric Universe theorist Wal Thornhill wrote:
"The process is repeated in further electrical disturbances by flaring red dwarfs and gas giant planets ejecting rocky and icy planets, moons, comets, asteroids and meteorites. Planetary systems may also be acquired over time by electrical capture of independent interstellar bodies such as dim brown dwarf stars. That seems the best explanation for our ‘fruit salad’ of a solar system. Capture of a brown dwarf requires that the dim star accommodate to a new electrical environment within the plasma sheath of the Sun. The brown dwarf flares and ejects matter, which becomes planets, moons and smaller debris. The ‘dead’ dwarf star becomes a gas giant planet.
"This is not the 4.5 billion year evolutionary story of the
clockwork solar system taught to us in Astronomy I. There is
no primordial nebular ‘stuff’ of which all objects in the
solar system were formed at the one time. The ‘stuff’ of
which stars are made has been differentiated and altered by
plasma discharge processes. All stars produce heavy elements
in their photospheric discharges, which alters their
internal composition with time. And the ‘stuff’ expelled
electrically from inside stars and gas giants is further
modified elementally, chemically and isotopically."
The reason that there is so much variability in the solar system is that z-pinch compression is so powerful and plasma discharges are so energetic. Centaurs are colorful because they might have been ejected out of different gas giant planets. Optical instruments show that Neptune has a green color, Uranus a blue, Saturn a pale yellow and Jupiter a rusty red. Could the centaurs be exemplifying their origins?
By Stephen Smith
| http://www.thunderbolts.info/tpod/2009/arch09/091006centaurs.htm | 13 |
12 | The Coronagraph in Space: Forecasting Solar Activity to Protect Transportation, Communication, and Power Supply on Earth
- The Corona and Eclipse
- Coronal Mass Ejections
- Playing Havoc with Electronics
- Studying the Corona
- A Forecasting Breakthrough
These images of the Sun's corona are from solar eclipses in 1980 (top left), 1988 (top right), 1991 (bottom left), and 1994 (bottom right) as observed on Earth. The Sun's changing magnetic fields determine the unique shapes of the corona.
As early as 1307 B.C., ancients observed the Sun’s corona. Scientists did not begin to understand this phenomenon, however, until the mid-19th century while watching the solar eclipse. The beauty of a solar eclipse comes from the Sun's corona, which becomes visible when the moon passes between the Sun and Earth. The corona appears as streamers of light shining from around the Sun during an eclipse. But, it is actually the outermost layer of the solar atmosphere composed of very sparse but extraordinarily hot (800,000 to 3,000,000 degrees Kelvin or 1.4 million to 5.4 million degrees Fahrenheit) gas particles that are orders of magnitude hotter than the surface of the Sun! This outer atmosphere extends far beyond the Sun and even envelops the Earth.
Occasionally, the Sun's atmosphere "explodes" causing material from the corona to be "ejected" into space. These coronal mass ejections (CMEs) can contain several billion tons of matter that can accelerate to several million miles per hour in a spectacular outburst of energy. Solar material streaks through space, interacting with any planets or spacecraft in its path. If this immense cloud of energized solar plasma encounters Earth's magnetic field, significant disruptions can result. CMEs are often, but not always, associated with solar flares, another type of solar activity. In addition, CMEs always stream outward from the Sun, but not always toward Earth.
Coronal mass ejections sometimes disrupt communication signals on earth. Radio and television are not significantly affected, but shortwave, ground-to-air, ship-to-shore, military detection and early warning system communications can be.
Geomagnetic field disturbances on earth associated with these solar activities create the spectacular aurora (northern and southern lights). But, more ominously, they can play havoc with electronics on Earth by inducing electrical currents that damage power systems, disrupting communications, degrading high-tech navigation systems, and even accelerating pipeline corrosion if scientists and technicians are unable to take preventive action. NOAA’s Space Environment Center in Boulder, Colorado warns of solar-terrestrial events and continues to monitor solar activities. This service started during World War II when the Interservice Radio Propagation Laboratory in the National Bureau of Standards predicted the effects of solar activity on radio communications.
The coronagraph aboard this Solar and Heliospheric Observatory satellite is a dramatic breakthrough in NOAA space weather observers' ability to forecast the impact of coronal mass ejections on Earth.
To better study the corona, a French scientist, Bernard Lyot, invented the first coronagraph in 1930. The coronagraph is a special type of telescope that uses a solid disk (known as an occulter disk) to block direct sunlight and create an artifical eclipse for the observer peering through the instrument. Inventing the coronagraph marked the beginning of systematic study of the Sun’s corona. Today, satellites have improved NOAA’s ability to observe the corona and other solar phenomena and to forecast their impacts on electrical power distribution, communications, and electronics systems that result from geomagnetic field disturbances.
Key features of an image from the LASCO coronagraph.
In 1995, NOAA scientists achieved a pivotal breakthrough in solar forecasting when NASA launched the Solar and Heliospheric Observatory (SOHO) spacecraft. The payload included a coronal imager called LASCO (Large Angle and Spectrometric Coronagraph Experiment). This instrument was the first to provide near real-time imagery of the solar corona from space over an extended period of time. Although NASA designed and operates SOHO for research purposes, NOAA quickly recognized the benefits of near real-time coronal imagery and developed procedures to analyze the images for space environment forecasting. With these new techniques, forecasters can determine the occurrence, direction and estimated speed of coronal mass ejections. Knowing these significantly improves forecasters’ ability to predict the beginning, strength, and length of a geomagnetic storm and possible disruptions of electronic equipment on earth.
These images from the LASCO coronal imager aboard the Solar and Heliospheric Observatory spacecraft dramatically help forecasters predict how space weather will affect the Earth. Bright streaks to the left and right of center show a large coronal mass ejection (CME) coming from a solar flare.
An example of improved solar forecasting happened in October and November 2003, when NOAA's Space Environment Center (SEC) observed several major solar flares during an elevated period of solar activity. When a powerful X17 solar flare occurred on October 28, 2003, SEC forecasters used LASCO coronagraph imagery to determine that an associated coronal mass ejection (CME) would impact Earth, and that it would arrive within 24 hours. The CME caused significant geomagnetic storming within the forecast period, 19 hours after the initial observation.
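At its simplest, the arrival estimate described above amounts to distance divided by speed: once a CME's outward speed has been measured from the coronagraph images, dividing the Sun-Earth distance by that speed gives a rough transit time. The sketch below is illustrative only; the 2,000 km/s figure is an assumed speed, not the measured speed of the October 2003 event, and real forecasts also account for acceleration and deceleration en route.

```python
# Rough CME transit-time estimate: Sun-Earth distance divided by an
# assumed CME speed.
AU_KM = 1.496e8            # Sun-Earth distance in kilometers
cme_speed_km_s = 2000.0    # assumed (illustrative) CME speed in km/s

transit_hours = AU_KM / cme_speed_km_s / 3600.0
print(f"Estimated transit time: {transit_hours:.1f} hours")  # ~21 hours at this speed
```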
The improvements enabled by the LASCO imager allow SEC to make more accurate forecasts so that power companies, major airlines, and satellite operators can take preventive measures to decrease risk to their systems. The techniques developed by SEC to take advantage of this real-time coronal imaging technology have had a profound and momentous impact on space weather forecasting. | http://celebrating200years.noaa.gov/breakthroughs/coronagraph/welcome.html | 13 |
14 | Characteristics of our Moon
by Ron Kurtus (revised 18 December 2005)
A moon is a large body or mass of material that orbits around a planet. It is usually much smaller than the planet. The Earth has only one moon, while Mars has two moons and Jupiter has dozens of known moons.
Our Moon is only about 1/4 the diameter of the Earth. Its gravity affects the Earth's tides. The Moon looks bright at night because of sunlight that is reflected off its surface. It has some distinct surface features that can be seen with the naked eye. Astronauts examined the surface more closely during Moon landings.
Questions you may have include:
- What are some of the features and characteristics of the Moon?
- What is the gravity on the Moon?
- What are the major features of the Moon's surface?
This lesson will answer those questions.
Characteristics of the Moon include its distance from the Earth, size, mass, density, and temperature.
The Moon is approximately 384,400 km (239,000 miles) from the Earth.
A radio signal sent from the Earth and bounced off the Moon's surface back to the Earth would take approximately 2.5 seconds. Communication with an astronaut on the Moon would thus have a pause of a few seconds between a question and an answer.
The diameter of the Moon is 3479 kilometers (2162 miles). This is about 1/4 the diameter of the Earth (12,756 kilometers or 7,926 miles).
The mass of the Moon is 7.35 × 10²² kilograms, which is about 1/80 of the mass of the Earth. (10²² is a 1 followed by 22 zeros.)
The density of the Moon is 3340 kg/m³.
Can you verify the density of the Moon?
Density = mass divided by volume, d = m/V.
The volume of a sphere is 4/3 times pi times its radius cubed,
V = 4πr³/3.
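As a quick check on the figures above (a sketch, not part of the original lesson), the density can be recomputed from the quoted mass and diameter:

```python
import math

# Figures quoted in this lesson (approximate)
mass_kg = 7.35e22                    # mass of the Moon
radius_m = 3479 / 2 * 1000           # half the quoted diameter, converted to metres

# V = 4/3 * pi * r^3, then d = m / V
volume_m3 = 4.0 / 3.0 * math.pi * radius_m**3
density = mass_kg / volume_m3

print(f"Density of the Moon: about {density:.0f} kg/m^3")   # roughly 3300 kg/m^3
```

The result comes out close to the 3340 kg/m³ quoted above.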
The average temperature on the surface of the Moon during the day is 107°C. That is hot enough to boil water on the Earth. During the night, the average temperature drops to −153°C.
The Moon revolves around the Earth in an elliptical orbit every 27.3 days. The same side of the Moon always faces the Earth. Due to the angle of the Sun on the Moon, we see different portions of the Moon illuminated. These are called the phases of the Moon.
Because of its smaller size and mass, the gravity of the Moon is about 1/6 the gravity on the Earth. That means that a person who weighs 180 pounds on Earth would weigh only 30 pounds on the Moon. That is why the astronauts on the Moon were able to jump so high, even while wearing their heavy space suits.
Moon causes tides
The force of gravity from the Moon affects the Earth. Its gravity reaches the Earth and pulls the oceans toward the Moon, causing the tides. The gravity from the Sun also affects the tides. The highest tides will always occur when the Moon and Sun are aligned. That is when there is a New Moon or a Full Moon.
Moon influences lunatics
There are even people who seem to be affected by the gravity of the moon. They are called "lunatics" from the Latin word luna, meaning moon. There are stories about people who are so affected by the moon that they turn into werewolves. Of course, that is fiction (I think).
The Moon shines at night, due to sunlight that is reflected off its surface.
The major features we can see on the Moon are its craters. These have apparently been caused by the impact from meteors over millions of years. Exploding volcanoes on the Moon also caused some craters.
You can see the craters on the Moon's surface
You can see the outlines of major craters on the Moon with your naked eye. The configuration almost looks like a face. Thus, they call it "the man in the Moon."
Much information about the surface of the Moon came from experiments United States astronauts made when they landed on the Moon in 1969. The United States landed men on the Moon six times between 1969 and 1972. Since then, no one else has landed on the Moon.
Our Moon is only about 1/4 the diameter of the Earth, has less gravity and has craters on its surface that can be seen with the naked eye. The Moon looks bright at night because of sunlight that is reflected off its surface. American astronauts landed on the Moon between 1969 and 1972.
Resources and references
Facts About the Moon - From NASA
The Moon - Good details from Nine Planets site
The Moon - Information and statistics from Russian version of American website
From Blue Moons To Black Holes: A Basic Guide To Astronomy, Outer Space, And Space Exploration by Melanie Melton Knocke; Prometheus Books (2005) $19.00
Observing the Moon by Peter T. Wlasuk; Springer (2000) $39.95 - Reference book for anyone seriously interested in the Moon and its geology
Welcome to the Moon: Twelve Lunar Expeditions for Small Telescopes by Robert Bruce Kelsey; Naturegraph Publishers (1997) $11.95 - Well written "how to" for novice astronomers
Characteristics of our Moon | http://school-for-champions.com/astronomy/moon.htm | 13 |
15 | One of the most extraordinary features of the solar system is that it contains adequate abundances of all the elements essential for advanced life. What makes it so exceptional is that the elements must come from different sources: asymptotic giant branch stars, a Type I supernova, Type II supernovae of at least two different types, white dwarf binary stars, and now, according to a new study, also from a “faint supernova with mixing fallback.”[1]
A team of six Japanese astronomers, plus an American astronomer, carefully recorded the amounts of decay products from the following short-lived radionuclides (SLRs): beryllium-10, aluminum-26, chlorine-36, calcium-41, manganese-53, iron-60, palladium-107, iodine-129, and hafnium-182. In their calculations the team demonstrated that ejection of heavy-element material into the primordial solar system’s protoplanetary disk came from all but the last source mentioned above. However, none of these astrophysical sources can account for the early solar system’s abundances of SLRs with half-lives less than five million years, namely aluminum-26, calcium-41, manganese-53, and iron-60.
The astronomers’ calculations revealed that a rare kind of supernova could explain the solar system’s abundances of these particular SLRs. This supernova type is a low-luminosity (that is, faint) supernova where, during the star’s explosion, the inner region of the star experiences mixing. A small fraction of the mixed material is ejected into the interstellar medium and the remainder falls back into the core. In the words of the research team, “The modeled SLR abundances agree well with their solar system abundances.”
They also calculated the time interval between the explosion of the faint supernova and the formation of solar system’s oldest solid materials. That interval is approximately equal to one million years. The faint supernova eruption would need to be quite near the solar system forming region but not so close as to disturb its formation. Likewise, the timing and the proximity for the other sources (asymptotic giant branch stars, Type I supernova, Type II supernovae of at least two different types, white dwarf binary stars) of the heavy elements would need to be similarly fine-tuned.
SLRs make two important contributions to the solar system. One, they are heat sources for primordial asteroidal metamorphism and/or differentiation. Primordial asteroids are the building blocks for the solar system’s rocky planets (Mars, Earth, Venus, and Mercury). Thus, Earth’s exceptional interior differentiation (a crucial factor for establishing its strong, long-lasting magnetic field) is due, in part, to the primordial solar nebula’s exceptional abundances of SLRs.
Two, they provide high-resolution chronometers for events that took place during the first few million years of the solar system’s formation. Continuing studies could potentially yield a detailed history for early solar system events with a timing precision of better than a hundred thousand years for the different occurrences. Such historical accuracy could deliver much more evidence for the supernatural design of the solar system for life’s, and humanity’s, benefit.
[1] A. Takigawa et al., “Injection of Short-Lived Radionuclides into the Early Solar System from a Faint Supernova with Mixing Fallback,” Astrophysical Journal 688 (December 1, 2008): 1382–87. | http://www.reasons.org/articles/solar-system%E2%80%99s-extraordinary-birth-environment | 13
14 | Copyright © University of Cambridge. All rights reserved.
'Approximating Pi' printed from http://nrich.maths.org/
Why do this problem?
This is a classic, the historical reference to Archimedes is educational, and the problem should be part of the education of every student of mathematics. To do this problem requires only very simple geometry, and it introduces the idea of approximation by finding upper and lower bounds, then refining the approximation by taking a series of values where, in this case, we use smaller and smaller edges, or more and more sides for the polygons. In addition, this problem is a valuable pre-calculus experience as it uses the idea of a limiting process involving smaller and smaller 'bits'.
First ask everyone to work out the perimeters of the two squares in the diagram. Then have a class discussion about what this tells us about how large the length of the circumference of a circle can be and how small. Discuss the history of this method with reference to Archimedes and introduce the idea that it is a method for finding the value of $\pi$. Pose the problem: "How would you find the value of $\pi$ if it was not already known?"
Introduce the idea of an upper bound and a lower bound for pi and raise the question of how we might improve these bounds to get closer to the value of pi. Then ask the class to repeat the exercise using circumscribed and inscribed hexagons. Suggest your students research Archimedes' method for finding $\pi$ and other methods of approximating $\pi$ on the internet for themselves. Discuss the difficulties of calculation, in particular finding square roots, without modern calculating aids, and refer to the problem Archimedes and
Can you find the perimeter of the square (or other regular polygon) circumscribing the circle?
Can you find the perimeter of the square (or other regular polygon) inscribed inside the circle?
What can you say about the lengths of the perimeters of these two polygons and the length of the circumference of the circle?
Knowing the circumference is $2\pi r$, how does this help you find a lower and an upper bound for $\pi$?
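Once the class has derived the bounds by hand, a short script like the sketch below can tabulate them for regular n-gons. Note that it leans on the library's sine and tangent (and hence on pi itself) purely for illustration; Archimedes instead used repeated doubling of the number of sides and square-root estimates.

```python
import math

def pi_bounds(n):
    """Bounds on pi from regular n-gons drawn inside and outside a unit circle."""
    inscribed = n * math.sin(math.pi / n)       # half the inscribed perimeter
    circumscribed = n * math.tan(math.pi / n)   # half the circumscribed perimeter
    return inscribed, circumscribed

for n in (4, 6, 12, 24, 48, 96):                # Archimedes went up to 96 sides
    low, high = pi_bounds(n)
    print(f"{n:3d} sides: {low:.6f} < pi < {high:.6f}")
```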
See Archimedes and Numerical | http://nrich.maths.org/841/note?nomenu=1 | 13 |
11 | A Finite State Machine (FSM) is a model of behavior using states and state transitions.
A transition is a state change triggered by an input event, i.e.
transitions map some state-event pairs to other states.
As indicated in the name, the set of states should be finite.
Also, it is assumed that there is a finite set of
distinct input events or their categories (types, classes).
Subsequently, the set of transitions is finite as well.
FSMs are one of the most widely used models in computer programming in general.
In particular, FSMs are ubiquitous in programming embedded systems
and for describing digital circuits. Modeling by means of FSMs has been so successful that
they have been adopted as a part of the Unified Modeling Language standard by the Object Management Group.
The origin of FSMs is finite automata.
A Finite Automaton is a more formal notion than a FSM.
It is defined as a quintuple ( A, S, s, T, F ), where:
- A is a finite non-empty set of symbols (input alphabet)
- S is a finite non-empty set of states
- s is an initial state, an element of S
- T is the state transition function: T: S x A -> S
- F is the set of final states, which is a subset of S
Finite automata are primarily used in parsing for recognizing languages. Input strings belonging to a given language should turn an automaton into final states, and all other input strings should turn this automaton into states that are not final.
Finite automata that additionally generate output are called transducers. In order to define a transducer, an output alphabet and an output function should be specified in addition to the five components outlined earlier. The output function can be a function of the state alone or a function of both the state and the input symbol. If the output function depends on the state and the input symbol, then the automaton is called a Mealy automaton. If the output function depends only on the state, then it is called a Moore automaton.
One of the most important facts about finite automata is that instances of any regular expression (that is, any regular language) can be recognized by a finite automaton. It is well known how to build the recognizing automaton for a given regular expression. Inversely, it is known how to construct a regular expression defining the language recognized by a given finite automaton.
Automata theory studies the properties of automata. For instance, it investigates how to optimize automata. In addition to the finite automata defined above, which are also called deterministic finite automata, there are non-deterministic finite automata. Automata theory also studies so-called pushdown automata. All these topics are beyond the scope of this tutorial.
The notion of finite automata is mathematically rigorous. The notion of FSMs was introduced as a less rigorous notion, more suitable for computer science practice. A FSM is defined by the following:
- a finite non-empty set of states
- an initial state
- a finite non-empty set of distinct input events or their categories
- state transitions

As opposed to input symbols for finite automata, any sequence of events can be a FSM input. In other words, any object can be an input entity. The only restriction on input is that all possible inputs should be classified as belonging to one of a finite number of distinct input categories (types, classes). In many simpler cases, only a finite number of distinct input objects are allowed. It is assumed that input events are processed synchronously, that is, the next event is processed only after the current event is fully consumed and a transition is executed if necessary. This may require queuing events before the time comes to process them.
One other important difference between finite automata and FSMs is that actions may be associated with FSMs. The role of actions is to generate output. FSMs may also communicate with other processes by means of actions. Presumably, actions do not generate input events; in other words, FSMs consume external events only. Also, actions are stateless, i.e. they cannot carry any information from one invocation to another. Actions may be associated with transitions, and if so, such a FSM is called a Mealy machine. FSMs in which actions are associated with states are called Moore machines. Since actions can be represented by virtually any program (code) and action input can be any object, the functionality of FSMs may be quite rich, going far beyond the limits of finite automata.
FSMs are most commonly represented by state diagrams, which are also called
state transition diagrams. The state diagram is basically a directed graph where
each vertex represents a state and each edge represents a transition between two states.
Another common representation of FSMs is state transition tables. In these tables,
every column corresponds to a state, every row corresponds to an event category.
Values in the table cells give the states resulting from the respective transitions.
Table cells also can be used for specifying actions related to transitions.
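To make the table representation concrete, here is a minimal sketch of a Mealy-style machine driven by a (state, event) lookup table; the turnstile states, events, and actions are invented for the example.

```python
# Hypothetical example: a turnstile with states 'locked' and 'unlocked'.
# Transition table: (state, event) -> (next state, action)
TRANSITIONS = {
    ("locked",   "coin"): ("unlocked", "release the latch"),
    ("locked",   "push"): ("locked",   "stay locked"),
    ("unlocked", "coin"): ("unlocked", "return the coin"),
    ("unlocked", "push"): ("locked",   "let one person through"),
}

def run(events, state="locked"):
    """Consume events one at a time, printing the Mealy output of each transition."""
    for event in events:
        state, action = TRANSITIONS[(state, event)]
        print(f"{event:>4} -> {state:8} ({action})")
    return state

run(["coin", "push", "push", "coin", "coin"])
```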
FSM specifications are pretty simple and 'flat'. Harel introduced hierarchical FSMs (statecharts), in which any state can itself be another FSM. The aim of this extension is to reduce the size of FSM specifications. In practice, the number of FSM transitions grows rapidly as the number of states grows, because existing transitions have to be copied for newly introduced states. Hierarchical FSMs solve this problem by making it possible to encapsulate newly introduced states within a FSM that corresponds to a single state at the next level up, so that the next-level transitions become applicable to the entire lower-level FSM. Yet another extension of FSMs allows sequences of events to define transitions - see this paper.
A. Aho, R. Sethi, J.Ullman Compilers: Principles, Techniques, and Tools. Addison-Wesley, 1985.
Unified Modeling Language (UML), version 1.5
R. Wieringa. A Survey of Structured and Object-Oriented Software
Specification Methods and Techniques
Copyright © 2005 Alexander Sakharov
Need to relax? Try brain teasers. I would recommend those marked 'cool'. | http://sakharov.net/fsmtutorial.html | 13 |
33 | Slide 1 : 1 / 30 : Surface Modeling
Slide 2 : 2 / 30 : Surface Modeling : Introduction
Slide 3 : 3 / 30 : Implicit Functions
Slide 4 : 4 / 30 : Polygon Surfaces
Slide 5 : 5 / 30 : Polygon Tables
Objects : set of vertices and associated attributes
Geometry : stored as three tables : vertex table, edge table, polygon table
Edge table ?
Tables also make it possible to store additional information
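As a rough sketch of the three-table idea (the exact fields are not given on the slide, so the attributes here are assumptions):

```python
# Vertex table: index -> (x, y, z)
vertices = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]

# Edge table: index -> (vertex index, vertex index)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

# Polygon table: each polygon lists the edges that bound it,
# plus any extra attributes we choose to keep with it.
polygons = [
    {"edges": [0, 1, 4], "color": "red"},
    {"edges": [4, 2, 3], "color": "blue"},
]

print(len(vertices), "vertices,", len(edges), "edges,", len(polygons), "polygons")
```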
Slide 6 : 6 / 30 : Example: Plane Equations
Often in the graphics pipeline, we need to know the orientation of an object. It would be useful to store the plane equation with the polygons so that this information doesn't have to be computed each time.
The plane equation takes the form:
P(M) = Ax + By + Cz + D = 0
Using any three points from a polygon, we can solve for the coefficients. Then we can use the equation to determine whether a point is on the inside or outside of the plane formed by this polygon:
Ax + By + Cz + D < 0 ==> inside
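One common way to obtain the coefficients, sketched below rather than taken from the slides, is the cross product of two edge vectors of the polygon; which sign counts as "inside" depends on the winding order of the vertices.

```python
def plane_from_points(p0, p1, p2):
    """Return (A, B, C, D) for the plane through three non-collinear points."""
    u = [p1[i] - p0[i] for i in range(3)]
    v = [p2[i] - p0[i] for i in range(3)]
    # The normal (A, B, C) is the cross product u x v
    A = u[1] * v[2] - u[2] * v[1]
    B = u[2] * v[0] - u[0] * v[2]
    C = u[0] * v[1] - u[1] * v[0]
    D = -(A * p0[0] + B * p0[1] + C * p0[2])
    return A, B, C, D

def evaluate(plane, p):
    """Sign of Ax + By + Cz + D for point p; the sign convention follows
    the winding order of the three points used to build the plane."""
    A, B, C, D = plane
    return A * p[0] + B * p[1] + C * p[2] + D

plane = plane_from_points((0, 0, 0), (1, 0, 0), (0, 1, 0))   # the z = 0 plane
print(evaluate(plane, (0.5, 0.5, -2.0)))                     # -2.0: below the plane
```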
Slide 7 : 7 / 30 : Polygon Meshes
The polygons we can represent can be arbitrarily large, both in terms of the number of vertices and the area. It is generally convenient and more appropriate to use a polygon mesh rather than a single mammoth polygon.
For example, you can simplify the process of rendering polygons by breaking all polygons into triangles. Then your triangle renderer from project two would be powerful enough to render every polygon. Triangle renderers can also be implemented in hardware, making it advantageous to break the world down into triangles.
Another example where smaller polygons are better is the Gouraud lighting model. Gouraud computes lighting at vertices and interpolates the values in the interiors of the polygons. By breaking larger surfaces into meshes of smaller polygons, the lighting approximation is improved.
Whenever you can, use Triangle strip, Triangle Fan, Quad Strip
Triangle mesh produces n-2 triangles from a polygon of n vertices.
A triangle list will produce only n/3 triangles from n vertices.
Quadrilateral mesh produces (n-1) by (m-1) quadrilaterals from an n x m array of vertices.
Non-coplanar polygons
Specifying polygons with more than three vertices could result in sets of points which are not co-planar! There are two ways to solve this problem:
Slide 8 : 8 / 30 : From Curves to Surfaces
Slide 9 : 9 / 30 : Beziér Patches
If one parameter is held at a constant value, then the expression above will represent a curve. Thus P(u,a) is a curve on the surface with the parameter v held at the constant value a.
In a bicubic surface patch, cubic polynomials are used to represent the edge curves P(u,0), P(u,1), P(0,v) and P(1,v) as shown below. The surface is then generated by sweeping all points on the boundary curve P(u,0) (say) through cubic trajectories, defined using the parameter v, to the boundary curve P(u,1). In this process the roles of the parameters u and v can be reversed.
Slide 10 : 10 / 30 : Beziér Patches
The representation of the bicubic surface patch can be illustrated by considering the Bezier surface patch.
The edge P(0,v) of a Bezier patch is defined by giving four control points P00, P01, P02 and P03. Similarly the opposite edge P(1,v) can be represented by a Bezier curve with four control points. The surface patch is generated by sweeping the curve P(0,v) through a cubic trajectory in the parameter u to P(1,v). To define this trajectory we need four control points, hence the Bezier surface patch requires a mesh of 4*4 control points as illustrated above.
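As a hedged illustration of evaluating such a patch (assuming the 4*4 control points are stored as a nested list, which the slides do not specify), the tensor-product Bernstein form can be computed directly:

```python
from math import comb

def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t)."""
    return comb(3, i) * t**i * (1 - t)**(3 - i)

def bezier_patch_point(ctrl, u, v):
    """Evaluate a bicubic Bezier patch at (u, v); ctrl is a 4x4 grid of (x, y, z)."""
    point = [0.0, 0.0, 0.0]
    for i in range(4):
        for j in range(4):
            w = bernstein3(i, u) * bernstein3(j, v)   # tensor-product weight
            for k in range(3):
                point[k] += w * ctrl[i][j][k]
    return tuple(point)

# A flat 4x4 control grid in the z = 0 plane: the patch reproduces the plane
flat = [[(i, j, 0.0) for j in range(4)] for i in range(4)]
print(bezier_patch_point(flat, 0.5, 0.5))   # (1.5, 1.5, 0.0)
```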
Slide 11 : 11 / 30 : Example: The Utah Teapot
Single shaded patch - Wireframe of the control points - Patch edges
Slide 12 : 12 / 30 : Subdivision of Beziér Surfaces
For more, See : Rendering
Cubic Bezier Patches
by Chris Bentley
Slide 13 : 13 / 30 : Deforming a Patch
The net of control points forms a polyhedron in cartesian space, and the positions of the points in this space control the shape of the surface.
The effect of lifting one of the control points is shown on the right.
Slide 14 : 14 / 30 : Patch Representation vs. Polygon Mesh
Slide 15 : 15 / 30 : Constructive Solid-Geometry Methods (CSG)
The method of Constructive Solid Geometry arose from the observation that many industrial components derive from combinations of various simple geometric shapes such as spheres, cones, cylinders and rectangular solids. In fact the whole design process often started with a simple block which might have simple shapes cut out of it, perhaps other shapes added on etc. in producing the final design. For example consider the simple solid below:
This simple component could be produced by gluing two rectangular blocks together and then drilling the hole. Or, in CSG terms, the union of two blocks would be taken and then the difference of the resultant solid and a cylinder would be taken. In carrying out these operations the basic primitive objects, the blocks and the cylinder, would have to be scaled to the correct size, possibly oriented and then placed in the correct relative positions to each other before carrying out the logical operations.
The Boolean Set Operators used are union, intersection, and difference.
Note that the above definitions are not rigorous and have to be refined to define the Regularised Boolean Set Operations to avoid impossible solids being generated.
A CSG model is then held as a tree structure whose terminal nodes are primitive objects together with an appropriate transformation and whose other nodes are Boolean Set Operations. This is illustrated below for the object above which is constructed using cube and cylinder primitives.
CSG methods are useful both as a method of representation and as a user interface technique. A user can be supplied with a set of primitive solids and can combine them interactively using the boolean set operators to produce more complex objects. Editing a CSG representation is also easy, for example changing the diameter of the hole in the example above is merely a case of changing the diameter of the cylinder.
However it is slow to produce a rendered image of a model from a CSG tree. This is because most rendering pipelines work on B-reps and the CSG representation has to be converted to this form before rendering. Hence some solid modellers use a B-rep but the user interface is based on the CSG representation.
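As an illustrative sketch of the boolean set operations by point-membership classification (this is not the modeller described above): a point is in the union if it is inside either child, in the intersection if it is inside both, and in the difference if it is inside the first but not the second.

```python
# Primitive solids are represented here by "is the point inside?" predicates.
def sphere(center, radius):
    cx, cy, cz = center
    return lambda p: (p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 <= radius**2

def box(lo, hi):
    return lambda p: all(lo[k] <= p[k] <= hi[k] for k in range(3))

# Boolean set operations combine the predicates into a CSG tree of closures.
def union(a, b):        return lambda p: a(p) or b(p)
def intersection(a, b): return lambda p: a(p) and b(p)
def difference(a, b):   return lambda p: a(p) and not b(p)

# A block with a spherical hole drilled out of it
solid = difference(box((0, 0, 0), (4, 2, 2)), sphere((2, 1, 1), 0.8))
print(solid((0.1, 0.1, 0.1)))   # True  - in the block, outside the hole
print(solid((2.0, 1.0, 1.0)))   # False - inside the drilled hole
```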
Slide 16 : 16 / 30 : Patch Representation vs. Polygon Mesh
Example of torus designed using a rotational sweep. The periodic spline cross section is rotated about an axis of rotation specified in the plane of the cross section.
We perform a sweep by moving the shape along a path. At intervals along this path, we replicate the shape and draw a set of connecting lines in the direction of the sweep to obtain the wireframe representation.
Slide 17 : 17 / 30 : A CSG Tree Representation
Slide 18 : 18 / 30 : Implementation with ray casting
Difference (Obj2 - Obj1)
Slide 19 : 19 / 30 : A CSG Tree Representation
Slide 20 : 20 / 30 : Example Modeling Package: Alias Studio
Slide 21 : 21 / 30 : Volume Modeling
Slide 22 : 22 / 30 : Marching Cubes Algorithm
Extracting a surface from voxel data:
Slide 23 : 23 / 30 : Marching Cube Cases
Slide 24 : 24 / 30 : Extracted Polygonal Mesh
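As a rough, assumption-laden sketch of how the "cases" above are selected (not the implementation behind these slides): each cell of the voxel grid is given an 8-bit index recording which of its eight corners lie at or above the iso-value, and that index picks one of 256 precomputed triangulations.

```python
def cube_case_index(corner_values, iso):
    """8-bit marching-cubes case index: bit i is set when corner i
    of the cell is at or above the iso-value (256 possible cases)."""
    index = 0
    for i, value in enumerate(corner_values):   # the 8 corner samples of one cell
        if value >= iso:
            index |= 1 << i
    return index

print(cube_case_index([0, 0, 0, 0, 1, 1, 1, 1], iso=0.5))   # 240 = 0b11110000
```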
Slide 25 : 25 / 30 : Metaballs
An Overview of Metaballs/Blobby Objects
Slide 26 : 26 / 30 : Procedural Techniques: Fractals
fractal subdivision algorithm for generating mountains
Slide 27 : 27 / 30 : Procedural Modeling...
And have a look again to the 2002 first CG assignment
or "Simulating plant growth" by Marco Grubert
Slide 28 : 28 / 30 : Physically Based Modelling Methods
Physical modelling is a way of describing the behavior of an object in terms of the interactions of external and internal forces.
Simple methods for describing motion usually resort to having the object follow a pre-determined trajectory.
Physical modelling, on the other hand, is about dynamics.
Physically based modelling methods will show us how a table-cloth will drape over a table or how a curtain will fall from a window.
A common method for approximating such nonrigid objects is as a network of points with flexible connections between them.
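A minimal sketch of such a point-and-spring network in one dimension, using a damped semi-implicit Euler step (the constants and setup are arbitrary choices for illustration):

```python
def spring_step(pos, vel, rest, k=50.0, mass=1.0, damping=2.0, dt=0.01):
    """One time step for a 1-D chain of point masses joined by springs."""
    n = len(pos)
    forces = [0.0] * n
    for i in range(n - 1):                       # a spring between each neighbouring pair
        stretch = (pos[i + 1] - pos[i]) - rest   # positive when the spring is stretched
        f = k * stretch                          # Hooke's law
        forces[i] += f                           # acts on both endpoints, opposite signs
        forces[i + 1] -= f
    vel = [(v + dt * f / mass) * (1.0 - damping * dt) for v, f in zip(vel, forces)]
    pos = [x + dt * v for x, v in zip(pos, vel)]
    return pos, vel

pos, vel = [0.0, 1.5, 2.0], [0.0, 0.0, 0.0]      # first spring stretched, second compressed
for _ in range(500):
    pos, vel = spring_step(pos, vel, rest=1.0)
print(round(pos[1] - pos[0], 2), round(pos[2] - pos[1], 2))   # both spacings settle near 1.0
```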
Slide 29 : 29 / 30 : Spring Networks
Slide 30 : 30 / 30 : Particle Systems
Collection of particles - A particle system is composed of one or more individual particles.
Stochastically defined attributes : some type of random element
Position, Velocity (speed and direction), Color, Lifetime, Age, Shape, Size, Transparency
Particle Life Cycle
Generation - Particles in the system are generated randomly within a predetermined location of the fuzzy object
Particle Dynamics - The attributes of each of the particles may vary over time. (particle position is going to be dependent on previous particle position and velocity as well as time
Extinction - Each particle has two attributes dealing with length of existence: age and lifetime.
Extract from : Particle Systems by Allen Martin
The term particle system is loosely defined in computer graphics. It has been used to describe modeling techniques, rendering techniques, and even types of animation. In fact, the definition of a particle system seems to depend on the application that it is being used for. The criteria that hold true for all particle systems are the following:
Collection of particles - A particle system is composed of one or more individual particles. Each of these particles has attributes that directly or indirectly effect the behavior of the particle or ultimately how and where the particle is rendered. Often, particles are graphical primitives such as points or lines, but they are not limited to this. Particle systems have also been used to represent complex group dynamics such as flocking birds.
Stochastically defined attributes - The other common characteristic of all particle systems is the introduction of some type of random element. This random element can be used to control the particle attributes such as position, velocity and color. Usually the random element is controlled by some type of predefined stochastic limits, such as bounds, variance, or type of distribution.
Each object in Reeves' particle system had the following attributes:
Velocity (speed and direction)
Particle Life Cycle
Each particle goes through three distinct phases in the particle system: generation, dynamics, and death. These phases are described in more detail here:
Generation - Particles in the system are generated randomly within a predetermined location of the fuzzy object. This space is termed the generation shape of the fuzzy object, and this generation shape may change over time. Each of the above mentioned attribute is given an initial value. These initial values may be fixed or may be determined by a stochastic process.
Particle Dynamics - The attributes of each of the particles may vary over time. For example, the color of a particle in an explosion may get darker as it gets further from the center of the explosion, indicating that it is cooling off. In general, each of the particle attributes can be specified by a parametric equation with time as the parameter. Particle attributes can be functions of both time and other particle attributes. For example, particle position is going to be dependent on previous particle position and velocity as well as time.
Extinction - Each particle has two attributes dealing with length of existence: age and lifetime. Age is the time that the particle has been alive (measured in frames), this value is always initialized to 0 when the particle is created. Lifetime is the maximum amount of time that the particle can live (measured in frames). When the particle age matches it's lifetime it is destroyed. In addition there may be other criteria for terminating a particle prematurely:
Running out of bounds - If a particle moves out of the viewing area and will not reenter it, then there is no reason to keep the particle active.
Hitting the ground - It may be assumed that particles that run into the ground burn out and can no longer be seen.
Some attribute reaches a threshold - For example, if the particle color is so close to black that it will not contribute any color to the final image, then it can be safely destroyed.
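Tying the generation, dynamics, and extinction phases described above together, a toy particle system might look like the following sketch; the attributes and constants are invented for illustration, not taken from Reeves' system.

```python
import random

def make_particle():
    """Generation: random initial attributes inside a small emitter region."""
    return {
        "pos": [random.uniform(-1.0, 1.0), 0.0],
        "vel": [random.uniform(-0.5, 0.5), random.uniform(1.0, 2.0)],
        "age": 0,
        "lifetime": random.randint(30, 60),      # measured in frames
    }

def update(particles, gravity=-0.05):
    """Dynamics plus extinction checks for one frame."""
    survivors = []
    for p in particles:
        p["vel"][1] += gravity                   # dynamics: constant downward pull
        p["pos"][0] += p["vel"][0]
        p["pos"][1] += p["vel"][1]
        p["age"] += 1
        # extinction: the particle is too old or has hit the ground
        if p["age"] < p["lifetime"] and p["pos"][1] >= 0.0:
            survivors.append(p)
    return survivors

particles = [make_particle() for _ in range(200)]    # generation
for frame in range(40):
    particles = update(particles)
print(len(particles), "particles still alive after 40 frames")
```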
When rendering this system of thousands of particles, some assumptions have to be made to simplify the process. First, each particle is rendered to a small graphical primitive (blob). Particles that map to the same pixels in the image are additive - the color of a pixel is simply the sum of the color values of all the particles that map to it. Because of this assumption, no hidden surface algorithms are needed to render the image; the particles are simply rendered in order. Effects like temporal anti-aliasing (motion blur) are made simple by the particle system process. The position and velocity are known for each particle. By rendering a particle as a streak, motion blur can be achieved. | http://escience.anu.edu.au/lecture/cg/surfaceModeling/printNotes.en.html | 13
11 | PLATES ON THE MOVE
In this lesson the children will gain a better understanding
of the Earth, inside and out. Through this lesson and extended activities,
children will come to realize that at this very moment, according to scientists,
Africa appears to be tearing apart from Asia, a new mountain range is forming
by the Earth's crust being shoved up in the Mediterranean, the Red Sea is
getting larger and larger and may some day be called an ocean, while at
the same time the Pacific Ocean appears to be shrinking. This is all happening
so slowly that most of us, unless we become geologists, will not even notice
that it is happening. In fact, geologists only a hundred years ago would
not have believed that continents were moving and oceans were changing size.
However, today most accept the theory that continents, as well as the entire
crust of the Earth, are "on the move." We will examine the structure
of the Earth and how forces inside the Earth help supply the source of energy
needed to create the changes we see outside on the crust of the Earth.
Bill Nye the Science Guy: Crust of the Earth (KCTS/Seattle)
Bill Nye the Science Guy: Earthquakes (KCTS/Seattle)
Students will be able to:
- name the three main parts of the Earth;
- describe what the plates of the Earth are and the different ways that they move;
- identify three of the four major things that happen when the plates of the Earth move;
- practice mapping skills;
- be able to plot locations using longitude and latitude.
- diagram of the three layers of the Earth for the overhead or one for each child
- a map of the world with longitude and latitude on it
- a map of the world showing the plates on the go (either one for each child or one for the overhead)
- shoe box
- core - the word scientists give for the center of the Earth, which is made of hot, heavy metals (mostly iron and nickel) that sank due to gravity after the Earth formed. It is believed to have two layers, one molten and the other solid rock.
- earthquake - a shaking or sliding of a portion of the Earth's crust. It is caused by the sudden movement of masses of rock along a fault or by changes in the size and shape of masses of rock far beneath the Earth's surface.
- geyser - a hot spring from within the Earth that erupts intermittently in a column of steam and hot water.
- magma - hot, molten rock from deep in the Earth.
- lava - magma that has reached the Earth's surface.
- mantle - made of the same material as the Earth's crust, only a lot denser. The mantle has three zones: the uppermost zone (lithosphere), the middle zone (asthenosphere), and the deep mantle.
- molten rock - rock that is so hot it is a very thick, slow-moving liquid.
- outer core - the first layer of the core, which is molten; its churning motion is responsible for the Earth's magnetic field.
- plates - large pieces of the Earth's outer shell, made up of the crust and the uppermost layer of the mantle, that float on the softer rock below. Scientists believe that there are about 12 of them.
- stalagmite - a formation of lime, shaped like a cone, built up on the floor of a cave. It is formed by water dripping from a stalactite.
- stalactite - a formation of lime, shaped like an icicle, hanging from the roof of a cave. It is formed by dripping water that contains lime.
- subduct - where one plate goes under another, forcing molten rock up and out and causing earthquakes.
- plate tectonics - the theory scientists use to explain the moving of the plates.
- volcano - a place where the Earth's crust opens up and spews lava.
- seismograph - an instrument used to measure the shock waves of an earthquake.
- epicenter - the point on the Earth's surface directly above the center of an earthquake.
Have pictures mounted on cardboard or colored paper, or locate books with
pictures to depict each of the vocabulary words. Make sure they are well
labeled. Elicit discussion with each picture, soliciting from the children
any experiences they have had with things around each picture. Display the
pictures around the room and say, "These things all exist presently
or have at one time. Some may even be frightening if you are caught in the
middle of them, but they are all a part of science and we will spend time
from a safe distance looking at why these things exist."
Tell the children that we are about to explore an area of science called geology. We will go over the vocabulary words first so that when the children come across them in the video or other activities, they will have a better understanding of the material.
To give students a specific responsibility while viewing, say,
"You are now going to see parts of two videos that tell about the different
parts of the Earth and how scientists think the land has formed over millions
of years. We will also consider the idea that the Earth is still changing
at this very minute. Watch the video to find out what scientists think is
inside the Earth."
BEGIN tape Bill Nye the Science Guy: The Earth's Crust
immediately following the opening credits. The video is Bill Nye outside
squatting down by a rock saying, "See these little holes in this rock?"
Watch the video long enough to allow the children to validate what is inside,
under the Earth's crust. STOP this tape with the visual of Bill Nye
giving a brief explanation of geysers. There are bubbles from the geysers
being shown and Bill Nye is standing in the front right corner of the screen.
Discuss two things that can happen to the Earth's crust because of the heated
energy from within. (earthquakes and geysers)
Note to the teacher: At this point you may skip the experiment that comes
next and demonstrate how a geyser works. Show the diagram of the side view
of the Earth where geysers are located or go back to the tape and show the
experiment that comes immediately following Bill Nye standing by the geyser.
You may also have children experiment with baking soda and vinegar by doing
a demonstration of a volcano using one already made by you or by the children.
Try to promote conversation using the vocabulary words relating to earthquakes
and geysers to make sure that the children have a good handle on the subject.
Bring back into play the pictures used in the beginning of this activity
to demonstrate vocabulary words that go with earthquakes and geysers.
To give students a specific responsibility for viewing, say to the children,
"You are now going to watch another Bill Nye video. Before I tell you
its title to give it away, I want you to watch the first part of the film.
Tell me what the third change is that can happen to the Earth's crust and
how it happens."
FAST FORWARD the video Bill Nye The Science Guy: Earthquakes to start
after the credits and after the beginning vocal logo for Bill Nye and the
quake shake commercial. The scene is Bill Nye sitting at a desk with a model
of the Earth and moon saying, "Do you realize that every year there
are thousands of earthquakes all around the world?" The room starts
shaking. Continue watching and then pause after Bill Nye has explained about
the seismograph and seismometer by saying, "These are very delicate
and accurate instruments and scientists can tell just exactly how the Earth's
surface is moving at any time. They're fabulous!" Discuss what earthquakes
are and how they work.
Note to the teacher: In this discussion, you may either show the "Try
This" experiment using sand in a shoe box to demonstrate how faults
in earthquakes are made as well as volcanoes, or you may rely strictly on
the demonstration in the tape. Personally, the hands-on approach, especially at this
age, will give the children a better grasp of exactly how this works.
Sum up this segment of the lesson with the children by saying, "Now
that you've learned about what causes volcanoes, geysers and earthquakes,
along with a few instruments that are used to measure them, we will finish the tape. In this last section we will learn how to determine where earthquakes are happening and how scientists compare one earthquake to another."
RESUME video after the "Try This Experiment" using the
sand in the shoe box to demonstrate what happens when plates of the Earth
move and the word "Try It!" appears on a black and blue screen.
Play the video to the end of the clip. STOP.
Begin this activity by reviewing what an earthquake is. Ask
questions such as:
"How long do they last? Can earthquakes happen anywhere? Are they more
likely to happen in one place than another?"
You want to make sure that everyone understands what an earthquake is, how
long earthquakes normally last, what the magnitude of an earthquake is,
and how scientists measure an earthquake's magnitude.
Tell the children that they are going to plot some earthquakes on a world
map. The earthquakes are actual earthquakes of magnitude 6.1 or greater
that occurred during 1983, 1984 and 1985.
Divide the data you have copied among the children, giving each child some
earthquakes to plot. You might want to review latitude and longitude with
the children so that they know how to find the correct points on the map.
Then pass out markers and maps and give them time to plot the quakes.
When the children are finished, ask them if they see any pattern as to where the quakes have occurred. (After plotting the points, they should be able to see that most of the earthquakes occurred along plate boundaries.)
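If a teacher wants a quick digital version of the same plotting exercise, a short script along these lines will do it; this is not part of the original lesson, and the sample earthquakes below are invented values of magnitude 6.1 or greater rather than the 1983-1985 data set.

```python
import matplotlib.pyplot as plt

# Hypothetical sample quakes: (latitude, longitude, magnitude)
quakes = [
    (38.2, 142.4, 7.1),    # off the coast of Japan (illustrative values)
    (-33.1, -71.9, 6.4),   # near the Chilean coast
    (61.0, -147.5, 6.8),   # southern Alaska
]

lats = [q[0] for q in quakes]
lons = [q[1] for q in quakes]
sizes = [10 * q[2] ** 2 for q in quakes]   # bigger marker = bigger magnitude

plt.scatter(lons, lats, s=sizes)
plt.xlabel("Longitude (degrees)")
plt.ylabel("Latitude (degrees)")
plt.xlim(-180, 180)
plt.ylim(-90, 90)
plt.title("Earthquakes of magnitude 6.1 or greater")
plt.show()
```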
Review the parts of the inner Earth by using a picture of the diagram of
the layers on a piece of acetate on the overhead. Break the mantle and core
into further parts pointing out the names given to each part.
Sum up the lesson by reviewing with the children that almost all earthquakes
are caused by the movement of rock along fractures in the Earth called faults.
Then explain that the Earth's surface (the crust plus the top portion of
the mantle) is believed to be divided into several large plates that slowly
move. As the plates move, they pull apart, collide or slide past each other
and this movement creates the stress that forces rocks to break along faults.
Have a geologist from the area or a geology teacher from a local
college or high school come to visit the class to talk about rocks and minerals
found in your area.
Visit a site where an earthquake, volcano or other significant history "Earth
moving" event has happened in your area if there is one.
Art and Science: Make a model-size volcano to demonstrate what
happens when a volcano erupts.
Art and Science: Make a model of the cross section of the Earth and label its layers.
The People Earth
(This activity is best done before viewing the tape and is to be done in
a large area inside or out.)
Cut pieces of paper with the words:
inner core (1)
outer core (3)
deep mantle (6)
(The numbers beside the words represent how many of each word would be given
out for a group of 30. Adjust the numbers to fit the size of your group.)
First have each of the children pick a part to play by drawing a slip from
the hat. (The names of what goes on which piece of paper are found in the
materials.) Then, using the information provided, explain what each "part"
does. Let the children practice any sounds or movements and then build the
Earth from the inside out.
1. Have the child playing the part of the inner core flex his or her muscles
(or pretend to lift weights) and stand in the center of the open area. Tell
the kids that this represents the very dense and solid metal inner core.
2. Next have the outer core kids form a circle around the inner core. They
should face in, toward the inner core. Then have them walk counterclockwise
around the inner core while holding their arms out to the side and waving
them up and down. Tell them this represents the fact that the outer core
is liquid and is moving.
3. Have the children playing the deep mantle join hands to form a circle
around the outer core. Have them chant, "hot rock, hot rock, hot rock."
4. Have the asthenosphere children surround the deep mantle. Have them slowly
sway their bodies back and forth to represent the movement that occurs in the asthenosphere.
5. Finally, have the lithosphere children form a circle around the entire
rest of the Earth. Have them face outward and slowly walk around the rest
of the Earth. Have them chant, "moving plates, moving plates."
Master Teacher: Gail Roberts
Lesson Plan Database
Thirteen Ed Online | http://www.thirteen.org/edonline/nttidb/lessons/ma/platema.html | 13 |
11 | 3.1 Introducing Plasma
It is known that space is filled with plasma. In fact, plasma is the most common type of matter in the universe. It is found in a wide range of places from fire, neon lights, and lightning on Earth to galactic and intergalactic space. The only reason that we are not more accustomed to plasma is that mankind lives in a thin biosphere largely made up of solids, liquids, and gases to which our senses are tuned. For example, we don’t experience fire as a plasma; we see a bright flame and feel heat. Only scientific experiments can show us that plasma is actually present in the flame.
“Plasma is a collection of charged particles that responds collectively to electromagnetic forces” (from the first paragraph in Physics of the Plasma Universe, Anthony Peratt, Springer-Verlag, 1992). A plasma region may also contain a proportion of neutral atoms and molecules, as well as both charged and neutral impurities such as dust, grains and larger bodies from small rocky bodies to large planets and, of course, stars.
The defining characteristic is the presence of the free charges, that is, the ions and electrons and any charged dust particles. Their strong response to electromagnetic fields causes behavior of the plasma which is very different to the behavior of an un-ionized gas. Of course, all particles – charged and neutral – respond to a gravity field, in proportion to its local intensity. As most of the Universe consists of plasma, locations where gravitational force dominates that of electromagnetism are relatively sparse.
Because of its unique properties, plasma is usually considered to be a phase of matter distinct from solids, liquids, and gases. It is often called the “fourth state of matter” although, as its state is universally the most common, it could be thought of as the “first” state of matter.
The chart below is commonly used to indicate how states change from a thermal point of view. The higher the temperature, the higher up the energy ladder with transitions upward and downward as indicated. However, it takes a very high thermal energy to ionize matter. There are other means as well, and an ionized state with charge imbalance can be induced and maintained at almost any temperature.
A solid such as a metal electrical cable, once it is connected in an electrical circuit with a sufficiently high electrical voltage source (battery; powerplant) will have its electrons separated from the metal nuclei, to be moved freely along the wire as a current of charged particles.
A beaker of water with a bit of metallic salt, such as sodium chloride, is readily ionized. If an electric voltage is applied via a positive and a negative wire, the hydrogen and oxygen atoms can be driven to the oppositely charged wires and evolve as the gaseous atoms they are at room temperature. Such stable, neutral states are a part of an electric universe, but this Guide will focus more on investigating the state of plasma and electric currents at larger scales, in space.
A molecular cloud of very cold gas and dust can be ionized by nearby radiating stars or cosmic rays, with the resulting ions and electrons taking on organized plasma characteristics, able to maintain charge and double layers creating charge separation and electrical fields with very large voltage differentials. Such plasma will accelerate charges and conduct them better than metals. Plasma currents can result in sheets and filamentary forms, two of the many morphologies by which the presence of plasma can be identified.
The proportion of ions is quantified by the degree of ionization. The degree of ionization of a plasma can vary from less than 0.01% up to 100%, but plasma behavior will occur across this entire range due to the presence of the charged particles and the charge separation typical of plasma behavior.
Plasma is sometimes referred to merely as an “ionized gas”. While technically correct, this terminology is incomplete and outdated. It is used to disguise the fact that plasma seldom behaves like a gas at all. In space it does not simply diffuse, but organizes itself into complex forms, and will not respond significantly to gravity unless local electromagnetic forces are much weaker than local gravity. Plasma is not matter in a gas state; it is matter in a plasma state.
The Sun’s ejection of huge masses of “ionized gas” (plasma) as prominences and coronal mass ejections against its own powerful gravity serves to illustrate this succinctly. The solar ‘wind’ is plasma, and consists of moving charged particles, also known as electric current. It is not a fluid, or a ‘wind’, or a ‘hot gas’, to put it in plain terms. Use of other words from fluid dynamics serves to obfuscate the reality of electric currents and plasma phenomena more powerful than gravity, around us in space, as far away as we can observe.
We know that space is filled with fields, a variety of particles, many of which are charged, and collections of particles in size from atoms to planets to stars and galaxies. Neutral particles — that is, atoms and molecules having the same number of protons as electrons, and neglecting anti-matter in this discussion — can be formed from oppositely charged particles. Conversely, charged particles may be formed from atoms and molecules by a process known as ionization.
If an electron – one negative charge – is separated from an atom, then the remaining part of the atom is left with a positive charge. The separated electron and the remainder of the atom become free of each other. This process is called ionization. The positively charged remainder of the atom is called an ion. The simplest atom, hydrogen, consists of one proton (its nucleus) and one electron. If hydrogen is ionized, then the result is one free electron and one free proton. A single proton is the simplest type of ion.
If an atom heavier than hydrogen is ionized, then it can lose one or more electrons. The positive charge on the ion will be equal to the number of electrons that have been lost. Ionization can also occur with molecules. It can also arise from adding an electron to a neutral atom or molecule, resulting in a negative ion. Dust particles in space are often charged, and the study of the physics of dusty plasmas is a subject of research in many universities today. Energy is required to separate atoms into electrons and ions — see the chart below.
Notice the repetitive pattern of the chart: an alkali metal has a relatively low ionization energy or temperature (easy to ionize). As you move to the right, increasing the atomic number – the number of protons in the nucleus of the atom – the energy required to ionize each ‘heavier’ atom increases. It peaks at the next “noble gas” atom, followed by a drop at the next higher atomic number, which will be a metal again. Then the pattern repeats.
It is interesting to note that hydrogen, the lightest element, is considered a ‘metal’ in this electric and chemical context, because it has a single electron which it readily “gives up” in its outer (and only) electron orbital. Common terminology in astronomy, in the context of the component elements in stars, is that hydrogen and helium are the ‘gases’ and all the other elements present are collectively termed ‘metals’.
3.3 Initiating and Maintaining Ionization
The energy to initiate and maintain ionization can be kinetic energy from collisions between energetic particles (sufficiently high temperature), or from sufficiently intense radiation. Average random kinetic energy of particles is routinely expressed as temperature, and in some very high velocity applications as electron-volts (eV). To convert temperature in kelvins (K) to eV, divide K by 11604.5. Conversely, multiply a value in eV by that number to get the thermal equivalent temperature in K.
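As a small worked example of that conversion (a sketch; the only inputs are the 11,604.5 K-per-eV figure quoted above and hydrogen's 13.6 eV first ionization energy):

```python
K_PER_EV = 11604.5   # kelvins per electron-volt, as quoted above

def kelvin_to_ev(temp_k):
    return temp_k / K_PER_EV

def ev_to_kelvin(energy_ev):
    return energy_ev * K_PER_EV

print(kelvin_to_ev(1.16e7))    # ~1000 eV: an 11.6-million-kelvin plasma
print(ev_to_kelvin(13.6))      # ~158,000 K: thermal equivalent of hydrogen's ionization energy
```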
The chart above represents the ionization energy required to strip the first, outermost electron from an atom or molecule. Subsequent electrons are more tightly bound to the nucleus and their ionization requires even higher energies. Several levels of electrons may be stripped from atoms in extremely energetic environments like those found in and near stars and galactic jets. Importance: These energetic plasmas are important sources of electrons and ions which can be accelerated to extremely high velocities, sources of cosmic rays and synchrotron radiation at many wavelengths. Cosmic ray links to cloud cover patterns affecting our global climate are reported in Henrik Svensmark’s book, The Chilling Stars.
Temperature is a measure of how much random kinetic energy the particles have, which is related to the rate of particle collisions and how fast they are moving. The temperature affects the degree of plasma ionization. Electric fields aligned (parallel) with local magnetic fields (“force-free” condition) can form in plasma. Particles accelerated in field-aligned conditions tend to move in parallel, not randomly, and consequently undergo relatively few collisions. The conversion of particle trajectories from random to parallel is called “dethermalization”. They are said to have a lower “temperature” as a result. Analogy: think of the vehicular motion in a “destruction derby” as “hot”, collision-prone random traffic, and freeway vehicular movement in lanes as “cool”, low-collision, parallel aligned traffic.
In a collision between an electron and an atom, ionization will occur if the energy of the electron (the electron temperature) is greater than the ionization energy of the atom. Equally, if an electron collides with an ion, it will not recombine if the electron has enough energy. One can visualize this as the electron’s having a velocity greater than the escape velocity of the ion, so it is not captured in an orbit around the ion.
Electron temperatures in space plasmas can be in the range of hundreds to millions of kelvins. Plasmas can therefore be effective at maintaining their ionized state. A charge-separated state is normal in space plasmas.
Other sources of ionization energy include high-energy cosmic rays arriving from other regions, high-energy or “ionizing” radiation such as intense ultraviolet light incident upon gas or weakly ionized plasma from nearby stars, an encounter between a plasma region and a neutral gas region in which the relative velocity of the encounter exceeds the Critical Ionization Velocity (CIV) (Hannes Alfvén, Collision between a nonionized gas and a magnetized plasma, Rev. Mod. Phys., vol. 32, p. 710, 1960) or energetic radiative processes created within the plasma itself.
In Big Bang cosmology, it is thought that there is not enough energy in the Universe to have created and maintained significant numbers of “loose” ions and electrons through ionization, and therefore they cannot exist. On the other hand, whenever ions and electrons combine into atoms, energy is given off. In the Big Bang Model, protons and electrons are thought to have been created before atoms, so an enormous amount of energy must have been released during the formation of the atoms in the Universe. It seems possible that if the Big Bang Model is correct, then this energy would still be available to re-ionize large numbers of atoms. Alternatively, it seems possible that not all protons and electrons combined into atoms after the Big Bang.
Note that the Electric Model does not rely on the Big Bang Model. The Electric Model simply says that we detect ions and electrons everywhere we have looked; so they do exist, probably in large numbers. Telescopes which “see” in high energy photons, such as Chandra (X-ray) and EIT, Extreme Ultraviolet Imaging Telescope on the SOHO solar observation spacecraft, attest to the presence of ionizing energy sources in the Universe, near and far. To suggest that mobile ions and electrons can’t exist in large numbers because, theoretically, there isn’t enough energy to have created them is as erroneous as arguing that the Universe can’t exist for the same reason.
3.4 Plasma Research
Although plasma may not be common in Earth’s biosphere, it is seen in lightning in its many forms, the northern and southern auroras, sparks of static electricity, spark plug igniters, flames of all sorts (see Chapter 2, ¶2.6), in vacuum tubes (valves), in electric arc welding, electric arc furnaces, electric discharge machining, plasma torches for toxic waste disposal, and neon and other fluorescent lighting tubes and bulbs.
Plasma behavior has been studied extensively in laboratory experiments for over 100 years. There is a large body of published research on plasma behavior by various laboratories and professional organizations, including the Institute of Electrical and Electronics Engineers (IEEE), which is the largest technical professional organization in the world today. The IEEE publishes a journal, Transactions on Plasma Science.
We will be relying on much of this research when explaining plasma behavior in the rest of this Guide. One point to bear in mind is that plasma behavior has been shown to be scalable over many orders of magnitude. That is, we can test small-scale examples of plasma in the laboratory and know that the observable results can be scaled up to the dimensions necessary to explain plasma behavior in space.
3.5 Plasma and Gases
Due to the presence of its charged particles, that is, ions, electrons, and charged dust particles, cosmic plasma behaves in a fundamentally different way from a neutral gas in the presence of electromagnetic fields.
Electromagnetic forces will cause charged particles to move differently from neutral atoms. Complex behavior of the plasma can result from collective movements of this kind.
A significant behavioral characteristic is plasma’s ability to form large-scale cells and filaments. In fact, that is why plasma is so named, due to its almost life-like behavior and similarities to cell-containing blood plasma.
The cellularization of plasma makes it difficult to model accurately. The use of the term ‘ionized gas’ is misleading because it suggests that plasma behavior can be modeled in terms of gas behavior, or fluid dynamics. It cannot except in certain simple conditions.
Alfvén and Arrhenius in 1973 wrote in Evolution of the Solar System:
“The basic difference [of approaches to modeling] is to some extent illustrated by the terms ionized gas and plasma which, although in reality synonymous, convey different general notions. The first term gives an impression of a medium that is basically similar to a gas, especially the atmospheric gas we are most familiar with. In contrast to this, a plasma, particularly a fully ionized magnetized plasma, is a medium with basically different properties.”
3.6 Conduction of electricity
Plasma contains dissociated charged particles which can move freely. Remembering that, by definition, moving charges constitute a current, we can see that plasma can conduct electricity. In fact, as plasma contains both free ions and free electrons, electricity can be conducted by either or both types of charge.
By comparison, conduction in a metal is entirely due to the movement of free electrons because the ions are bound into the crystal lattice. This means plasma is an even more efficient conductor than metals, as both the electrons and their corresponding ions are considered free to move under applied forces.
3.7 Electrical Resistance of Plasmas
In the Gravity Model, plasma is often assumed for simplicity to be a perfect conductor with zero resistance. However, all plasmas have a small but nonzero resistance. This is fundamental to a complete understanding of electricity in space. Because plasma has a small nonzero resistance, it is able to support weak electric fields without short-circuiting.
The electrical conductivity of a material is determined by two factors: the density of the population of available charge carriers (the ions and electrons) in the material and the mobility (freedom of movement) of these carriers.
In space plasma, the mobility of the charge carriers is extremely high because, due to the very low overall particle density and generally low ion temperatures, they experience very few collisions with other particles. On the other hand, the density of available charge carriers is also very low, which limits the capacity of the plasma to carry the current.
Electrical resistance in plasma, which depends on the inverse of the product of the charge mobility and the charge density, therefore has a small but nonzero value.
Because a magnetic field forces charged particles moving across the field to change direction, the resistance across a magnetic field is effectively much higher than the resistance in the direction of the magnetic field. This becomes important when looking at the behavior of electric currents in plasma.
Although plasma is a very good conductor, it is not a perfect conductor, or superconductor.
3.8 Creation of Charge Differences
Over a large enough volume, plasma tends to have the same number of positive and negative charges because any charge imbalance is readily neutralized by the movement of the high-energy electrons. So the question arises, how can differently charged regions exist, if plasma is such a good conductor and tends to neutralize itself quickly?
On a small scale, of the order of tens of meters in a space plasma, natural variations will occur as a result of random variations in electron movements, and these will produce small adjacent regions where neutrality is temporarily violated.
On a larger scale, positive and negative charges moving in a magnetic field will automatically be separated to some degree by the field because the field forces positive and negative charges in opposite directions. This causes differently charged regions to appear and to be maintained as long as the particles continue to move in the magnetic field.
Separated charge results in an electric field, and this causes more acceleration of ions and electrons, again in opposite directions. In other words, as soon as some small inhomogeneities are created, this rapidly leads to the start of more complex plasma behavior.
Over all scales, the signature filamentation and cellularization behavior of plasma creates thin layers where the charges are separated. Although the layers themselves are thin, they can extend over vast areas in space.
Important Things to Remember About Plasma Behavior
The essential point to bear in mind when considering space plasma is that it often behaves entirely unlike a gas. The charged particles which are the defining feature of a plasma are affected by electromagnetic fields, which the particles themselves can generate and modify.
In particular, plasma forms cells and filaments within itself, which is why it came to be called plasma, and these change the behavior of the plasma, like a feedback loop.
Plasma behavior is a little like fractal behavior. Both are complex systems arising from comparatively simple rules of behavior. Unlike fractals, though, plasma is also affected by instabilities, which add further layers of complexity.
Any theoretical or mathematical model of the Universe that does not take into account that complexity, is going to miss important aspects of the system’s behavior and fail to model it accurately.
end of Chapter 3 | http://www.thunderbolts.info/wp/2011/10/25/essential-guide-to-the-eu-chapter-3/ | 13 |
21 | Persuasion falls under the umbrella term of influence. In other words, persuasion is influence, but it requires communication, whereas influence doesn't necessarily. Persuasion can attempt to influence a person's beliefs, attitudes, intentions, motivations, or behaviors. Persuasion is a process aimed at changing a person's (or a group's) attitude or behavior toward some event, idea, object, or other person(s), by using written or spoken words to convey information, feelings, or reasoning, or a combination thereof. Persuasion is also an often-used tool in the pursuit of personal gain, such as election campaigning, giving a sales pitch, or in trial advocacy. Persuasion can also be interpreted as using one's personal or positional resources to change people's behaviors or attitudes. Systematic persuasion is the process through which attitudes or beliefs are changed by appeals to logic and reason. Heuristic persuasion, on the other hand, is the process through which attitudes or beliefs are changed because of appeals to habit or emotion.
Brief History
Persuasion began with the Greeks, who emphasized rhetoric and elocution as the highest standard for a successful politician. All trials were held in front of the Assembly, and both the prosecution and the defense rested, as they often do today, on the persuasiveness of the speaker. Rhetoric was the ability to find the available means of persuasion in any instance. The Greek philosopher Aristotle listed four reasons why one should learn the art of persuasion: 1) truth and justice are perfect; thus if a case loses, it is the fault of the speaker; 2) it is an excellent tool for teaching; 3) a good rhetorician needs to know how to argue both sides to understand the whole problem and all the options; and 4) there is no better way to defend one’s self.
In the fifteenth century, Niccolo Machiavelli wrote The Prince, which explored how a ruler must be concerned not only with reputation, but might also need to be willing to act immorally at the right times. As a political scientist, Machiavelli emphasized the occasional need for the methodical exercise of brute force or deceit. His moral and ethical beliefs led to the formation of Machiavellianism, which is characterized as “the employment of cunning and duplicity in statecraft or in general conduct,” a very different form of persuasion.
Theories of Persuasion
Functional theories
Functional theorists attempt to understand the divergent attitudes individuals have towards people, objects or issues in different situations. There are four main functional attitudes:
- Adjustment function: A main motivation for individuals is to increase positive external rewards and minimize the costs. Attitudes serve to direct behavior directed towards the rewards and away from punishment.
- Ego Defensive function: The process by which an individual protects their ego from being threatened by their own negative impulses or threatening thoughts.
- Value-expressive: When an individual derives pleasure from presenting an image of themselves which is in line with their self-concept and the beliefs that they want to be associated with.
- Knowledge function: The need to attain a sense of understanding and control over one’s life. An individual’s attitudes therefore serve to help set standards and rules which govern their sense of being.
When communication is targeted at an underlying function, its degree of persuasiveness will influence whether the individual will change their attitude, after determining that another attitude will be more effective in fulfilling that function.
Elaboration Likelihood Model (ELM) of Persuasion
Persuasion has traditionally been associated with two routes.
- Central route: Whereby an individual evaluates information presented to them based on its pros and cons and how well it supports their values.
- Peripheral route: Change is mediated by how attractive the source of communication is, bypassing the deliberation process.
The ELM forms a new facet of the route theory. It holds that the probability of effective persuasion depends on how successful the communication is at bringing to mind a relevant mental representation, which is the elaboration likelihood. Thus if the target of the communication is personally relevant, this increases the elaboration likelihood of the intended outcome and would be more persuasive if it were through the central route. Communication which does not require careful thought would be better suited to the peripheral route.
Conditioning Theories
Conditioning plays a huge part in the concept of persuasion. It is more often about leading someone into taking certain actions of their own, rather than giving direct commands. In advertisements, for example, this is done by attempting to connect a positive emotion to a brand/product logo. This is often done by creating commercials that make people laugh, using a sexual undertone, inserting uplifting images and/or music etc., and then ending the commercial with a brand/product logo. Great examples of this are professional athletes. They are paid to connect themselves to things that can be directly related to their roles: sport shoes, tennis rackets, golf balls, or completely irrelevant things like soft drinks, popcorn poppers and panty hose. The important thing for the advertiser is to establish a connection to the consumer.
The thought is that it will affect how people view certain products, knowing that most purchases are made on the basis of emotion. Just like you sometimes recall a memory from a certain smell or sound, the objective of some ads is solely to bring back certain emotions when you see their logo in your local store. The hope is that by repeating the message several times it will cause the consumer to be more likely to purchase the product because he/she already connects it with a good emotion and a positive experience. Stefano DellaVigna and Matthew Gentzkow did a comprehensive study on the effects of persuasion in different domains. They discovered that persuasion has little or no effect on advertisement; however, there was a substantial effect of persuasion on voting if there was face-to-face contact.
Cognitive Dissonance Theory
Leon Festinger originally proposed the Theory of Cognitive Dissonance in 1956. He theorized that human beings constantly strive for mental consistency. Our cognition (thoughts, beliefs, or attitudes) can be in agreement, unrelated, or in disagreement with each other. Our cognition can also be in agreement or disagreement with our behaviors. When we detect conflicting cognition, or dissonance, it gives us a sense of incompleteness and discomfort. For example, a person who is addicted to smoking cigarettes but also suspects it could be detrimental to his health suffers from cognitive dissonance.
Festinger suggests that we are motivated to reduce this dissonance until our cognition is in harmony with itself. We strive for mental consistency. There are four main ways we go about reducing or eliminating our dissonance: (1) Changing our minds about one of the facets of cognition, (2) reducing the importance of a cognition, (3) increasing the overlap between the two, and (4) re-evaluating the cost/reward ratio. Revisiting the example of the smoker, he can either quit smoking, reduce the importance of his health, convince himself he is not at risk, or evaluate the reward of his smoking to be worth the cost of his health.
Cognitive Dissonance is powerful when it relates to competition and self-concept. The most famous example of how Cognitive Dissonance can be used for persuasion comes from Festinger and Carlsmith’s 1959 experiment in which participants were asked to complete a very dull task for an hour. Some were paid $20, while others were paid $1, and afterwards they were instructed to tell the next waiting participants that the experiment was fun and exciting. Those who were paid $1 were much more likely to convince the next participants that the experiment really was enjoyable than those who received $20. This is because $20 is enough reason to participate in a dull task for an hour, so there is no dissonance. Those who received $1 experienced great dissonance, so they had to truly convince themselves that the task actually was enjoyable in order to avoid feeling like they were taken advantage of, and therefore reduce their dissonance.
Inoculation Theory
A vaccine introduces a weak form of a virus that can easily be defeated to prepare the immune system should it need to fight off a stronger form of the same virus. In much the same way, the Theory of Inoculation suggests a certain party can introduce a weak form of an argument that can easily be thwarted in order to prepare the audience to disregard a stronger, full-fledged form of the argument from an opposing party.
This is often practiced in negative advertisements and comparative advertisements, both for products and political causes. An example would be a manufacturer of a product displaying an ad that refutes one particular claim made about a rival’s product, so that when the audience sees an ad for said rival product, they will refute all the claims of the product without a second thought.
Attribution Theory
Humans attempt to explain the actions of others through either Dispositional Attribution or Situational Attribution. Dispositional Attribution, also referred to as Internal Attribution, attempts to point to a person’s traits, abilities, motives, or dispositions as a cause or explanation for their actions. A citizen criticizing a president by saying the nation is lacking economic progress and health because the president is either lazy or lacking in economic intuition is utilizing a dispositional attribution.
Situational Attribution, also referred to as External Attribution, attempts to point to the context around the person and factors of his surroundings, particularly things that are completely out of his control. A citizen claiming that a lack of economic progress is not a fault of the president but rather the fact that he inherited a poor economy from the previous president is situational attribution.
Fundamental Attribution Error occurs when people wrongly attribute either a shortcoming or accomplishment to internal or external factors, when in fact the inverse is true. In general, people tend to make dispositional attributions more often than situational attributions when trying to explain or understand a person’s behavior. This happens when we are much more focused on the individual because we do not know much about their situation or context. When trying to persuade others to like us or another person, we tend to explain positive behaviors and accomplishments with dispositional attribution, and negative behaviors and shortcomings with situational attributions.
Social Judgement Theory
Social Judgement Theory suggests that when people are presented with an idea or any kind of persuasive proposal, their natural reaction is to immediately seek a way to sort the information subconsciously and react to it. We evaluate the information and compare it with the attitude we already have, which is called the initial attitude or anchor point. When attempting to sort the incoming persuasive information, an audience will evaluate whether it lands in their latitude of acceptance, latitude of non-commitment or indifference, or the latitude of rejection. The size of these latitudes will vary from topic to topic.

Our “ego-involvement” generally plays one of the largest roles in determining the size of these latitudes. When a topic is closely connected to how we define and perceive ourselves, or deals with anything we care passionately about, our latitudes of acceptance and non-commitment are likely to be much smaller and our latitude of rejection much larger. A person’s anchor point is considered to be the center of his latitude of acceptance, the position that is most acceptable to him. An audience is likely to distort incoming information to fit into their unique latitudes. If something falls within the latitude of acceptance, the subject tends to assimilate the information and consider it closer to his anchor point than it really is. Inversely, if something falls within the latitude of rejection, the subject tends to contrast the information and convince himself the information is farther away from his anchor point than it really is.

When trying to persuade an individual target or an entire audience, it is vital to first learn the average latitudes of acceptance, non-commitment, and rejection of your audience. It is ideal to use persuasive information that lands near the boundary of the latitude of acceptance if the goal is to change the audience’s anchor point. Repeatedly suggesting ideas on the fringe of the acceptance latitude will cause people to gradually adjust their anchor points, while suggesting ideas in the rejection latitude or even the non-commitment latitude will not result in any change to the audience’s anchor point.
Persuasion methods are also sometimes referred to as persuasion tactics or persuasion strategies.
Weapons of influence
The principle of reciprocity states that when a person provides us with something, we attempt to repay him or her in kind. Reciprocation produces a sense of obligation, which can be a powerful tool in persuasion. The reciprocity rule is effective because it can be overpowering and instill in us a sense of obligation. Generally, we have a dislike for individuals who neglect to return a favor or provide payment when offered a free service or gift. As a result, reciprocation is a widely held principle. This societal standard makes reciprocity an extremely powerful persuasive technique, as it can result in unequal exchanges and can even apply to an uninvited first favor.
Commitment and Consistency
Consistency is an important aspect of persuasion because it 1) is highly valued by society, 2) results in a beneficial approach to daily life, and 3) provides a valuable shortcut through the complicated nature of modern existence. Consistency allows us to more effectively make decisions and process information. The concept of commitment states that if a person commits, either orally or in writing, he or she is more likely to honor that particular commitment. This is especially true for written commitments, as they appear psychologically more concrete and can be backed up with hard proof. Once a person commits to a stance, he or she has a tendency to behave according to that commitment. Commitment is an effective persuasive technique because once you get someone to make a commitment, they are more likely to engage in self-persuasion, providing themselves and others with reasons and justifications to support their commitment in order to avoid dissonance.
We are influenced by others around us; we want to be doing what everyone else is doing. People often base their actions and beliefs on what others around them are doing, how others act or what others believe. “The power of the crowd” is very effective. We all want to know what others are doing around us. We are so obsessed with what others do and how others act, that we then try to be just like other people. Cialdini gives an example that is somewhat like this: in a phone-a-thon, the host will say something along the lines of, “Operators are waiting, please call now.” The only context that you have from that statement is that the operators are waiting and they are not busy. Rather, the host may say: “If operators are busy, please call again.” This illustrates the technique of social proof. Just by changing three words, it sounds like the lines are busy and other people are calling; so it must be a good, legitimate organization.
Social proof is most effective when people are uncertain or when there are similarities in a situation. In uncertain or ambiguous situations, when there are multiple possibilities or choices that need to be made, people are likely to conform to what others do/are doing. We become more influenced by the people around us, in situations that cause us to make a decision. The other effective situation for social proofing is when there are similarities. We are more prone to change/conform around people who are similar to us. If someone who is similar to you is being controlling and a leader, you are more likely to listen and follow what it is they are saying.
This principle is simple and concise. People say “yes” to people that they like. Two major factors contribute to overall liking. The first is physical attractiveness. People who are more physically attractive seem to be more persuasive; they get what they want and they can easily change others' attitudes. This attractiveness is proven to send favorable messages/impressions of other traits that a person may have, such as talent, kindness, and intelligence. The second factor is similarity. This is the simpler aspect of "liking." The idea of similarity states that if people are like you, they are more likely to say “yes” to what you ask them. When we do this, we usually don’t think about it, it just comes naturally.
We have the tendency to believe that if an expert says something, then it must be true. People like to listen to those who are knowledgeable and trustworthy, so if you can be those two things, then you are already on your way to getting people to believe and listen to you.
The Milgram study, done in 1974, consisted of a teacher and a learner who are both in different rooms. The teacher was told to ask questions of the learner and, if the learner got one wrong, the teacher was to give him an electric shock. The catch to this experiment is that the teacher does not know that the learner does not actually get a shock; the experiment was being done to see “When it is their job, how much suffering will ordinary people be willing to inflict on an entirely innocent other person” (Cialdini 176). In this study the results show that most teachers were willing to give as much pain as was available to them. People are willing to bring pain upon others when they are directed to do so by some other authority figure.
Scarcity is a principle that people underestimate. When something has limited availability, people assign it more value. According to Cialdini, “people want more of what they cannot have.” When scarcity is an issue, the context matters. This means that within certain contexts, scarcity “works” better. To get people to believe that something is scarcer, you need to explain what about that certain product will give them what no other product will. You have to work the audience in the correct way. Something else that you can do to get people to believe that something is scarce is to tell them what they will lose, not what they will gain: saying “you will lose $5” rather than “you could save $5” makes something sound more scarce.
There are two major reasons why the scarcity principle works: 1) when things are difficult to get, they are usually more valuable, so scarcity can serve as a cue for quality; and 2) when things become less available, we lose the chance to acquire them. When this happens, we assign the scarce item or service more value simply because it is harder to acquire.
The whole of this principle is that we all want things that are out of our reach. If we see something that is popular we do not want it as much as something that is very rare.
Machiavellian Persuasion
Machiavellianism employs the tools of manipulation and deceit to gain wealth and power. Robert Greene wrote The 48 Laws of Power, a distillation of 3,000 years of the history of power, drawing on the lives of strategists and historical figures using the philosophies of Machiavelli to show people how to gain power, preserve it, and defend themselves against power manipulators.
In the preface of his book, Greene explains the dilemma of the courtier, embodied in most of the rules in his book: “While appearing the very paragon of elegance, they had to outwit and thwart their opponents in the subtlest of ways. The successful courtier learned over time to make all of his moves indirect; if he stabbed an opponent in the back, it was with a velvet glove and the sweetest of smiles on his face. Instead of coercion or outright treachery (except in the most rare of occasions), the perfect courtier got his way through seduction, charm, deception, and subtle strategy, always planning several moves ahead. Life in the court was a never-ending game that required constant vigilance and tactical thinking. It was civilized war.”
Some of the 48 Laws include:
- 1: Never Outshine the Master - Draws on the issues of Authority and Need for Ego Gratification
- 5: So Much Depends on Reputation - Emphasizes the power and necessity for Credibility and Likeability
- 16: Use Absence to Increase Respect and Honor - Draws upon the influential power of Authority and Scarcity
- 37: Create Compelling Spectacles - Emphasizes the power of Vividness
- 40: Despise the Free Lunch - Recognizes the danger of Reciprocity
Relationship based persuasion
In their book The Art of Woo, G. Richard Shell and Mario Moussa present a four-step approach to strategic persuasion. They explain that persuasion means to win others over, not to defeat them. Thus it is important to be able to see the topic from different angles in order to anticipate the reaction others have to a proposal.
Step 1: Survey your situation
This step includes an analysis of the persuader's situation, goals, and challenges that the persuader faces in his or her organization.
Step 2: Confront the five barriers
Five obstacles pose the greatest risks to a successful influence encounter: relationships, credibility, communication mismatches, belief systems, and interests and needs.
Step 3: Make your pitch
People need a solid reason to justify a decision, yet at the same time many decisions are made on the basis of intuition. This step also deals with presentation skills.
Step 4: Secure your commitments
In order to safeguard the longtime success of a persuasive decision, it is vital to deal with politics at both the individual and organizational level.
List of methods
By appeal to reason:
By appeal to emotion:
Aids to persuasion:
- Body language
- Communication skill or Rhetoric
- Personality tests and conflict style inventory help devise strategy based on an individual's preferred style of interaction
- Sales techniques
Coercive techniques, some of which are highly controversial and/or not scientifically proven to be effective:
Persuasion in Culture
It is through a basic cultural, personal definition of persuasion that everyday people understand how others are attempting to influence them and then how they influence others. The dialogue surrounding persuasion is constantly evolving because of the necessity to use persuasion in everyday life. Persuasion tactics traded in society have influences from researchers, which may sometimes be misinterpreted. It is evolutionarily advantageous, in the sense of wealth and survival, to persuade and not be persuaded. In order to understand persuasion, members of a culture will gather knowledge from domains such as “buying, selling, advertising, and shopping, as well as parenting and courting.”
Persuasion Knowledge Model (PKM)
The Persuasion Knowledge Model (PKM) was created by Friestad and Wright in 1994. This framework allows the researchers to analyze the process of gaining and using everyday persuasion knowledge. The researchers suggest the necessity of including “the relationship and interplay between everyday folk knowledge and scientific knowledge on persuasion, advertising, selling, and marketing in general.”
In order to educate the general population about research findings and new knowledge about persuasion, a teacher must draw on people's pre-existing beliefs from folk persuasion in order to make the research relevant and informative to lay people, which creates a “mingling of their scientific insights and commonsense beliefs.”
As a result of this constant mingling, the issue of persuasion expertise becomes messy. Expertise status can be interpreted from a variety of sources like job titles, celebrity, or published scholarship.
It is through this multimodal process that we create concepts like "stay away from car salesmen, they will try to trick you.” The kind of persuasion techniques blatantly employed by car salesmen creates an innate distrust of them in popular culture. According to Psychology Today, they employ tactics ranging from making personal life ties with the customer to altering reality by handing the customer the new car keys before the purchase.
Neurobiology of persuasion
Attitudes and persuasion are among the central issues of social behavior. One of the classic questions is when are attitudes a predictor of behavior. Previous research suggested that selective activation of left prefrontal cortex might increase the likelihood that an attitude would predict a relevant behavior. Using lateral attentional manipulation, this was supported.
An earlier article showed that EEG measures of anterior prefrontal asymmetry might be a predictor of persuasion. Research participants were presented with arguments that favored and arguments that opposed the attitudes they already held. Those whose brain was more active in left prefrontal areas said that they paid the most attention to statements with which they agreed while those with a more active right prefrontal area said that they paid attention to statements that disagreed. This is an example of defensive repression, the avoidance or forgetting of unpleasant information. Research has shown that the trait of defensive repression is related to relative left prefrontal activation. In addition, when pleasant or unpleasant words, probably analogous to agreement or disagreement, were seen incidental to the main task, an fMRI scan showed preferential left prefrontal activation to the pleasant words.
One way therefore to increase persuasion would seem to be to selectively activate the right prefrontal cortex. This is easily done by monaural stimulation to the contralateral ear. The effect apparently depends on selective attention rather than merely the source of stimulation. This manipulation had the expected outcome: more persuasion for messages coming from the left.
See also
- Captatio benevolentiae
- Compliance gaining
- Crowd manipulation
- Elaboration likelihood model
- Judge–advisor system
- Inoculation theory
- Regulatory Focus Theory
- Social psychology
- The North Wind and the Sun
- Seiter, Robert H. Gass, John S. (2010). Persuasion, social influence, and compliance gaining (4th ed.). Boston: Allyn & Bacon. p. 33. ISBN 0-205-69818-2.
- "Persuasion". Business Dictionary. Retrieved 9 May 2012.
- Fautsch, Leo (January 2007). "Persuasion". The American Salesman 52 (1): 13–16. Retrieved 9 December 2012.
- Schacter, Daniel L., Daniel T. Gilbert, and Daniel M. Wegner. "The Accuracy Motive: right is better than wrong-Persuasion." Psychology. ; Second Edition. New York: Worth, Incorporated, 2011. 532. Print.
- Ancient Greece
- Katz, D. (1960). "The functional approach to the study of attitudes". Public Opinion Quarterly 24 (2): 163–204. doi:10.1086/266945.
- DeBono, K.G. (1987). "Investigating the social-adjustive and value-expressive functions of attitudes: Implications for persuasion processes". Journal of Personality and Social Psychology 52 (2): 279–287. doi:10.1037/0022-35188.8.131.529.
- Petty; Cacioppo (1986). "The elaboration likelihood model of persuasion". Advances in Experimental Social Psychology 19 (1): 123–205. doi:10.1016/S0065-2601(08)60214-2.
- Petty; Cacioppo & Schumann (1983). "Central and peripheral routes to advertising effectiveness: The moderating role of involvement". Journal of Consumer Research 10 (2): 135–146. doi:10.1086/208954.
- Cialdini, R.B. (2007). "Influence: The Psychology of Persuasion" New York: HarperCollins Publishers.
- DellaVigna, S., & Gentzkow, M. (2010). Persuasion: Empirical evidence. The Annual Review of Economics, 2, 643-69. doi: 10.1146/annurev.economics.102308.12430
- Cialdini, R. B. (2001). Influence: Science and practice (4th ed.). Boston: Allyn & Bacon.
- The art of Woo by G. Richard Shell and Mario Moussa, New York 2007, ISBN 978-1-59184-176-0
- Friestad, Marian; Wright, Peter. Everyday persuasion knowledge. Psychology & Marketing16. 2 (Mar 1999)
- Lawson, Willow. Persuasion:Battle on the Car Lot, Psychology Today published on September 1, 2005 - last reviewed on July 31, 2009
- Drake, R. A., & Sobrero, A. P. (1987). Lateral orientation effects upon trait behavior and attitude behavior consistency. Journal of Social Psychology, 127, 639-651.
- Cacioppo, J. T., Petty, R. E., & Quintanar, L. R. (1982). Individual differences in relative hemispheric alpha abundance and cognitive responses to persuasive communications. Journal of Personality and Social Psychology, 43, 623-636.
- Tomarken, A. J., & Davidson, R. J. (1994). Frontal brain activity in repressors and nonrepressors. Journal of Abnormal Psychology, 103, 339-349.
- Herrington, J. D., Mohanty, A., Koven, N. S., Fisher, J. E., Stewart, J. L., Banich, M. T., et al. (2005). Emotion-modulated performance and activity in left dorsolateral prefrontal cortex. Emotion, 5, 200-207.
- Drake, R. A., & Bingham, B. R. (1985). Induced lateral orientation and persuasibility. Brain and Cognition, 4, 156-164.
- Herbert I. Abelson, Ph D. Persuasion "How opinions and attitudes are changed" Copyright© 1959 | http://en.wikipedia.org/wiki/Persuasion | 13 |
22 | Most substances expand when heated and contract when cooled. The exception is water. The maximum density of water occurs at 4°C. This explains why a lake freezes at the surface, and not from the bottom up. If water at 0°C is heated, its volume decreases until it reaches 4°C. Above 4°C, water behaves normally and expands in volume as it is heated. Water expands as it is cooled from 4°C to 0°C and expands even more as it freezes. That is why ice cubes float in water and pipes break when the water inside of them freezes.
The change in length in almost all solids when heated is directly proportional to the change in temperature and to its original length. A solid expands when heated and contracts when cooled: The length of a material decreases as the temperature decreases; its length increases as the temperature increases. So a rod that is 2 m long expands twice as much as a rod which is 1 m long for the same ten degree increase in temperature. Holes in materials also expand or contract with the material: if a material gets larger, the hole also gets larger.
ΔL = α L ΔT
where L is the original length of the material
α is the coefficient of linear expansion
ΔT is the temperature change in °C
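As a quick numerical illustration of the relation above, here is a minimal Python sketch. The coefficient used for steel (about 12 × 10⁻⁶ per °C) is a typical textbook value assumed for the example, not one quoted in these notes.

```python
# Linear thermal expansion: delta_L = alpha * L * delta_T
def linear_expansion(alpha, length, delta_T):
    """Change in length for a given expansion coefficient, original length, and temperature change."""
    return alpha * length * delta_T

alpha_steel = 12e-6   # 1/°C, assumed typical value for steel
L0 = 2.0              # original length, m
dT = 10.0             # temperature rise, °C

dL = linear_expansion(alpha_steel, L0, dT)
print(f"A {L0} m steel rod expands by {dL*1000:.2f} mm for a {dT} °C rise")
# Doubling the original length doubles the expansion, as noted above:
print(f"A {2*L0} m rod expands by {linear_expansion(alpha_steel, 2*L0, dT)*1000:.2f} mm")
```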
Physical quantities such as pressure, temperature, volume, and the amount of a substance describe the conditions in which a particular material exists. They describe the state of the material and are referred to as state variables. These state variables are interrelated; one cannot be changed without changing the others. For a fixed amount of gas they are related by P0V0/T0 = PV/T, where V0, P0, and T0 represent the initial state of the material and V, P, and T represent the final state of the material.
In physics, we use an ideal gas to represent the material, thus simplifying the equation of state.
Ideal Gas Law The volume of a gas is proportional to the number of moles of the gas, n. The volume varies inversely with the pressure. The pressure is proportional to the absolute temperature of the gas. Combining these relationships yields the following equation of state for an ideal gas,
PV = nRT
Where T is measured in Kelvin and R is the ideal gas constant
Ideal Gas Constant In SI units, R = 8.314 J/ mol K
Ideal Gas Real gases do not follow the ideal gas law exactly. An ideal gas is one for which the ideal gas law holds precisely for all pressures and temperatures. Gas behavior approximates the ideal gas model at very low pressures when the gas molecules are far apart and at temperatures close to that at which the gas liquefies.
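A minimal sketch of using PV = nRT, here solved for pressure; the quantity of gas and the volume are assumed numbers chosen for illustration (1 mol in about 22.4 L at 273.15 K), not values from these notes.

```python
R = 8.314  # ideal gas constant, J/(mol K)

def pressure(n, T, V):
    """Pressure of an ideal gas (Pa) from moles n, temperature T (K), and volume V (m^3)."""
    return n * R * T / V

# Assumed example: 1 mol of gas at 273.15 K in 0.0224 m^3 (about 22.4 L)
P = pressure(n=1.0, T=273.15, V=0.0224)
print(f"P = {P:.0f} Pa  (about 1 atm = 101325 Pa)")
```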
Kinetic theory of gases
Particles in a hot body have more kinetic energy than those in a cold body; as temperature increases, kinetic energy increases. If the temperature of a gas rises, the gas molecules move at greater speeds. If the volume remains the same, the hotter molecules would be expected to hit the walls of the container more frequently than the cooler ones, resulting in a rise in pressure.
An advanced look at the kinetic theory: The assumptions describing an ideal gas make up the postulates of the kinetic theory:
1. An ideal gas is made up of a large number of gas molecules N each with mass m moving in random directions with a variety of speeds.
2. The gas molecules are separated from each other by an average distance that is much greater than the molecule's diameter.
3. The molecules obey laws of mechanics, interacting only when they collide.
4. Collisions with the walls of the container or with other gas molecules are assumed to be perfectly elastic.
Entropy disorder; the higher the temperature, the more disorder (or entropy) a substance has
Temperature (symbol is T; SI unit is the Kelvin, K)
measure of the average kinetic energy of an object’s molecules; temperature measures how hot or how cold an object is with respect to a standard
The most common scale is the Celsius (or Centigrade) scale. In physics, the absolute (Kelvin) scale is used; its zero is the lowest possible temperature, absolute zero (-273.15 °C), or 0 K.
Triple Point The triple point of water serves as a point of reference. It is only at this point (273.16 K) that the three phases of water (gas, liquid, and solid) exist together at a unique value of temperature and pressure.
Temperature is a property of a system that determines whether the system will be in thermal equilibrium with other systems.
Molecular Interpretation of Temperature The concept that matter is made up of atoms in continual random motion is called the kinetic theory. We assume that we are dealing with an ideal gas. In an ideal gas, there are a large number of molecules moving in random directions at different speeds, the gas molecules are far apart, the molecules interact with one another only when they collide, and collisions between gas molecules and the wall of the container are assumed to be perfectly elastic. The average translational kinetic energy of molecules in a gas is directly proportional to the absolute temperature. If the average translational kinetic energy is doubled, the absolute temperature is doubled.
KEav = 1/2 m(vav)² = 3/2 kT
where T is the temperature in Kelvin and k is Boltzmann's constant
k = 1.38 x 10-23 J/K
The relationship between Boltzmann's constant (k), Avogadro's number (N), and the gas constant (R) is given by:
k = R/N
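The sketch below evaluates the average translational kinetic energy at room temperature and checks that k = R/N reproduces Boltzmann's constant; the room-temperature value of 300 K is an assumed illustration, not a value from these notes.

```python
R = 8.314          # ideal gas constant, J/(mol K)
N_A = 6.022e23     # Avogadro's number, 1/mol
k = R / N_A        # Boltzmann's constant from k = R/N

T = 300.0          # assumed room temperature, K
KE_avg = 1.5 * k * T   # KE_av = 3/2 k T

print(f"k = {k:.3e} J/K (compare 1.38e-23 J/K)")
print(f"Average translational KE per molecule at {T} K: {KE_avg:.2e} J")
```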
An advanced look at the relationship between pressure and the kinetic theory: The pressure exerted by an ideal gas on its container is due to the force exerted on the walls of the container by the collisions of the molecules with the walls of area A. The collisions cause a change in momentum of the gas molecules. These assumptions can be used to derive an expression between pressure and the average kinetic energy of the gas molecules. The pressure is directly proportional to the square of the average velocity. Since the average kinetic energy is directly proportional to the temperature, pressure is also directly proportional to the temperature (for a fixed volume).
PV = 2/3 N (1/2 m(vav)²)
The higher the temperature, according to kinetic theory, the faster the molecules are moving, on average.
rms speed The square root of the average of the square of the speed in the kinetic energy expression is called the rms (root-mean-square) speed.
vrms = (3RT/M)^(1/2)
where R is the ideal gas constant, T is temperature in Kelvin, and M is the molecular mass in units of kg/mol
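A short sketch of the rms-speed formula, evaluated for nitrogen and oxygen; the molar masses and the temperature are standard values assumed for the example, not given in these notes.

```python
from math import sqrt

R = 8.314  # ideal gas constant, J/(mol K)

def v_rms(T, M):
    """rms speed in m/s for temperature T (K) and molar mass M (kg/mol)."""
    return sqrt(3 * R * T / M)

T = 300.0  # assumed temperature, K
for name, M in [("N2", 0.028), ("O2", 0.032)]:
    print(f"v_rms of {name} at {T} K: {v_rms(T, M):.0f} m/s")
```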
Heat(symbol is Q; SI unit is Joule)
amount of thermal energy transferred from one object to another due to temperature differences (we will learn in thermodynamics why heat flows from a hot to a cold body).
Temp and Internal Energy
T = Temperature – related to the average KE of the molecules in a substance
U = Internal Energy (Thermal Energy) – the total KE of the molecules.
= # molecules * avg KE
= N * KEavg
which simplifies to U = 3/2 n R T
Mechanical Equivalent of Heat
James Joule described the reversible conversion of heat energy and work. The calorie is defined as the amount of energy needed to raise the temperature of one gram of water at 14.5 °C by one degree Celsius. The SI unit for work and energy is the Joule.
1 calorie = 4.186 J
1000 calories equal 1 food Calorie (kilocalorie)
There are three basic ways in which heat is transferred. In fluids, heat is often transferred by convection, in which the motion of the fluid itself carries heat from one place to another. Another way to transfer heat is by conduction, which does not involve any motion of a substance, but rather is a transfer of energy within a substance (or between substances in contact). The third way to transfer energy is by radiation, which involves absorbing or giving off electromagnetic waves.
Heat transfer in fluids generally takes place via convection. Convection currents are set up in the fluid because the hotter part of the fluid is not as dense as the cooler part, so there is an upward buoyant force on the hotter fluid, making it rise while the cooler, denser, fluid sinks. Birds and gliders make use of upward convection currents to rise, and we also rely on convection to remove ground-level pollution.
Forced convection, where the fluid does not flow of its own accord but is pushed, is often used for heating (e.g., forced-air furnaces) or cooling (e.g., fans, automobile cooling systems).
When heat is transferred via conduction, the substance itself does not flow; rather, heat is transferred internally, by vibrations of atoms and molecules. Electrons can also carry heat, which is the reason metals are generally very good conductors of heat. Metals have many free electrons, which move around randomly; these can transfer heat from one part of the metal to another.
The equation governing heat conduction along something of length (or thickness) L and cross-sectional area A, in a time t is:
Q/t = k A ∆T / L
(Q/t) = H = rate of heat loss or gain
k is the thermal conductivity, a constant depending only on the material, and having units of J / (s m °C).
Copper, a good thermal conductor, which is why some pots and pans have copper bases, has a thermal conductivity of 390 J / (s m °C). Styrofoam, on the other hand, a good insulator, has a thermal conductivity of 0.01 J / (s m °C).
Consider what happens when a layer of ice builds up in a freezer. When this happens, the freezer is much less efficient at keeping food frozen. Under normal operation, a freezer keeps food frozen by transferring heat through the aluminum walls of the freezer. The inside of the freezer is kept at -10 °C; this temperature is maintained by having the other side of the aluminum at a temperature of -25 °C.
The aluminum is 1.5 mm thick. Let's take the thermal conductivity of aluminum to be 240 J / (s m °C). With a temperature difference of 15°, the amount of heat conducted through the aluminum per second per square meter can be calculated from the conductivity equation:
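The calculation the paragraph leads into can be sketched as follows; it is a minimal Python version of (Q/t)/A = k ΔT / L using only the values quoted above (the ice conductivity in the comment, about 2 J/(s m °C), is a standard textbook value assumed here, not one from these notes).

```python
def heat_flow_per_area(k, dT, L):
    """Rate of heat conduction per unit area, (Q/t)/A = k * dT / L, in W/m^2."""
    return k * dT / L

k_al = 240.0   # thermal conductivity of aluminum, J/(s m °C)
dT = 15.0      # temperature difference across the wall, °C
L = 1.5e-3     # wall thickness, m

H = heat_flow_per_area(k_al, dT, L)
print(f"Heat conducted per second per square meter: {H:.2e} W/m^2")
# A layer of ice (k about 2 J/(s m °C), an assumed textbook value) in series with
# the aluminum drastically reduces this rate, which is why an iced-up freezer
# keeps food frozen less efficiently.
```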
The third way to transfer heat, in addition to convection and conduction, is by radiation, in which energy is transferred in the form of electromagnetic waves. We'll talk about electromagnetic waves in a lot more detail later in the year; an electromagnetic wave is basically an oscillating electric and magnetic field traveling through space at the speed of light. Don't worry if that definition goes over your head, because you're already familiar with many kinds of electromagnetic waves, such as radio waves, microwaves, the light we see, X-rays, and ultraviolet rays. The only difference between the different kinds is the frequency and wavelength of the wave.
Note that the radiation we're talking about here, in regard to heat transfer, is not the same thing as the dangerous radiation associated with nuclear bombs, etc. That radiation comes in the form of very high energy electromagnetic waves, as well as nuclear particles. The radiation associated with heat transfer is entirely electromagnetic waves, with a relatively low (and therefore relatively safe) energy.
Everything around us takes in energy from radiation, and gives it off in the form of radiation. When everything is at the same temperature, the amount of energy received is equal to the amount given off. Because there is no net change in energy, no temperature changes occur. When things are at different temperatures, however, the hotter objects give off more energy in the form of radiation than they take in; the reverse is true for the colder objects.
We've looked at the three types of heat transfer. Conduction and convection rely on temperature differences; radiation does, too, but with radiation the absolute temperature is important. In some cases one method of heat transfer may dominate over the other two, but often heat transfer occurs via two, or even all three, processes simultaneously.
A stove and oven are perfect examples of the different kinds of heat transfer. If you boil water in a pot on the stove, heat is conducted from the hot burner through the base of the pot to the water. Heat can also be conducted along the handle of the pot, which is why you need to be careful picking the pot up, and why most pots don't have metal handles. In the water in the pot, convection currents are set up, helping to heat the water uniformly. If you cook something in the oven, on the other hand, heat is transferred from the glowing elements in the oven to the food via radiation.
Thermodynamics study of the properties of thermal energy
Each of the laws of thermodynamics is associated with a variable. The zeroth law is associated with temperature, T; the first law is associated with internal energy, U; and the second law is associated with entropy, S.
System any object or set of objects we are considering. A closed system is one in which mass is constant. An open system does not have constant mass. A closed system into or out of which no energy flows is said to be isolated.
Environment everything else
If two objects at different temperatures are placed in thermal contact (so that the heat energy can transfer from one to the other), the two objects will reach the same temperature, or become in thermal equilibrium.
Zeroth Law of Thermodynamics If two systems are in thermal equilibrium with a third system, they are in thermal equilibrium with each other.
Internal or Thermal Energy(symbol is U; unit is J)
sum of all the energy an object possesses; it cannot be measured; only changes in internal energy can be determined
The kinetic theory can be used to clearly distinguish between temperature and thermal energy. Temperature is a measure of the average kinetic energy of individual molecules. Thermal energy refers to the total energy of all the molecules in an object.
Internal Energy of an Ideal Gas The internal energy of an ideal gas only depends upon temperature and the number of moles of the gas (n).
U = 3/2 nRT
where R is the ideal gas constant, R = 8.315 J/mol K
Characteristics of an Ideal Gas:
1. An ideal gas consists of a large number of gas molecules occupying a negligible volume.
2. Ideal gas molecules have random motion.
3. Ideal gas molecules undergo elastic collisions with the walls of the container and with other gas molecules.
4. The temperature of an ideal gas is proportional to the kinetic energy of the gas molecules.
1st law of thermodynamics The total increase in the internal energy of a system is equal to the sum of the work done on the system or by the system and the heat added to or removed from the system. It is a restatement of the law of conservation of energy. Changes in the internal energy of a system are caused by heat and work.
ΔU = Q + W
where Q is the heat added to the system and W is the net work done on the system. In other words, heat added is positive; heat lost is negative. Work done on the system (an example would be compression of a gas) is positive; work done by the system (an example would be expansion of a gas) is negative.
The best way to remember the sign convention for work: if a gas is compressed (volume decreases), work is positive; if a gas expands (volume increases), work is negative. It is just like mechanics, if you (the environment) do work on the system, you would compress it. The work you do is considered to be positive.
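A small helper that applies the sign convention just described may make it concrete; the numerical values are invented for illustration.

```python
def delta_U(Q, W):
    """First law: change in internal energy = heat added to the system + work done ON the system.
    Q > 0: heat added; Q < 0: heat removed.
    W > 0: gas compressed (work done on the system); W < 0: gas expands (work done by the system)."""
    return Q + W

# Assumed example: 500 J of heat added while the gas expands, doing 200 J of work
# on the environment (so W = -200 J from the system's point of view).
print(f"delta U = {delta_U(Q=500.0, W=-200.0)} J")   # internal energy rises by 300 J
```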
A graph of pressure vs volume for a particular temperature for an ideal gas. Each curve, representing a specific constant temperature, is called an isotherm. The area under the isotherm represents the work done by the system during a volume change.
When a system undergoes a change of state from an initial state to a final state, the system passes through a series of intermediate states. This series of states is called a path. Points 1 and 2 represent an initial state (1) with pressure P1 and volume V1 and a final state (2) with pressure P2 and volume V2. If the pressure is kept constant at P1, the system expands to volume V2 (point 3 on the diagram). The pressure is then reduced to P2 (probably by decreasing the temperature) and the volume is kept constant at V2 to reach point 2 on the diagram. The work done by the system during this process is the area under the line from state 1 to state 3. There is no work done during the constant volume process from state 3 to state 2. Or, the system might traverse the path state 1 to state 4 to state 2, in which case the work done is the area under the line from state 4 to state 2. Or, the system might traverse the path represented by the curved line from state 1 to state 2, in which case, the work is represented by the area underneath the curve from state 1 to state 2. The work is different for each path.
The work done by the system depends not only upon the initial and final states, but also upon the path taken.
1. Isothermal Process temperature (T) is constant. If there is no temperature change, there is no internal energy change.
ΔU = 0
Q = -W
The curve shown represents an isotherm.
Since the temperature is constant, no change in internal energy occurs. Internal energy changes only occur when there are temperature changes. At constant temperature, the pressure and volume of the system decrease as along the path state 1 to state 2.
Example of an isothermal process: An ideal gas (the system) is contained in a cylinder with a moveable piston. Since the system is an ideal gas, the ideal gas law is valid. For constant temperature, PV=nRT becomes PV=constant. At point 1, the gas is at pressure P1, volume V1, and temperature T. A very slow expansion occurs, so that the gas stays at the same constant temperature. If heat Q is added, the gas must expand. As the gas expands, it pushes on the moveable piston, thus doing work on the environment (or negative work). At point 2, the gas now has volume V2 which is greater than V1, pressure P2 which is less than P1, and temperature T. The amount of work done by the system on the environment during its expansion has the same magnitude as the amount of heat added to the system. The amount of work done is equal to the area under the curve.
How to know if heat was added or removed in an isothermal process: if heat is added, the volume increases and the pressure decreases. Remember, pressure is determined by the number of collisions the gas molecules make with the walls of the container. If the volume increases at constant temperature, the gas molecules make fewer collisions with the walls of the container, and pressure decreases.
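For an ideal gas, the area under an isotherm can be written in closed form as W_by_gas = nRT ln(V2/V1), a standard result not derived in these notes. The sketch below uses it with assumed numbers and confirms that the heat added equals the work done by the gas, since ΔU = 0.

```python
from math import log

R = 8.314  # ideal gas constant, J/(mol K)

def work_by_gas_isothermal(n, T, V1, V2):
    """Work done BY an ideal gas expanding isothermally from V1 to V2 (standard result)."""
    return n * R * T * log(V2 / V1)

# Assumed example: 1 mol at 300 K doubling its volume
W_by = work_by_gas_isothermal(n=1.0, T=300.0, V1=0.01, V2=0.02)
W_on = -W_by    # work done ON the gas is negative when the gas expands
Q = -W_on       # since delta U = 0, Q = -W: heat added equals work done by the gas
print(f"Work done by the gas: {W_by:.0f} J, heat added: {Q:.0f} J")
```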
2. Isobaric Process pressure (P) is constant. If pressure is kept constant, the work done during the process is given by
W = - P ΔV
ΔU = Q + W
P is held constant, so the amount of work done is represented by the area underneath the path from 1 to 2. Typically, lab experiments are isobaric processes.
Example of isobaric process: An ideal gas is contained in a cylinder with a moveable piston. The pressure experienced by the gas is always the same, and is equal to the external atmospheric pressure plus the weight of the piston. The cylinder is heated, allowing the gas to expand. Heat was added to the system at constant pressure, thus increasing the volume. The change in internal energy U is equal to the sum of the work done by the system on the environment during the volume expansion (negative work) and the amount of heat added to the system. The amount of work done is equal to the area under the curve.
How to determine if heat was added or removed: in an isobaric process, heat is added if the gas expands and removed if the gas is compressed.
How to tell if the temperature is increasing or decreasing: in an isobaric process, adding heat results in an increase in internal energy. If the internal energy increases, the temperature increases. Typically, volume expansions are small and all the heat added serves to increase the internal energy. In our graph, point 2 was at a higher temperature than point 1.
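A short sketch of an isobaric step, using the relations quoted in these notes (W = −PΔV, the ideal gas law, and U = 3/2 nRT for the monatomic ideal gas assumed here); the pressure, amount of gas, and temperatures are invented for illustration.

```python
R = 8.314  # ideal gas constant, J/(mol K)

# Assumed example: 1 mol of monatomic ideal gas heated at constant pressure P = 1.0e5 Pa
n, P = 1.0, 1.0e5
T1, T2 = 300.0, 350.0
V1, V2 = n * R * T1 / P, n * R * T2 / P   # ideal gas law at each state

W = -P * (V2 - V1)               # work done ON the gas (negative: the gas expands)
dU = 1.5 * n * R * (T2 - T1)     # change in internal energy, from U = 3/2 nRT
Q = dU - W                       # first law rearranged: Q = delta U - W
print(f"W = {W:.0f} J, delta U = {dU:.0f} J, heat added Q = {Q:.0f} J")
```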
3. Isochoric Process Volume (V) is constant. Since there is no change in volume, no work is done.
W = 0
ΔU = Q
Since V is constant, no work is done. If heat is added to the system, the internal energy U increases; if heat is removed from the system, the internal energy U decreases. In the pV diagram shown, heat is removed along the path 1 to 2, thus decreasing the pressure at constant volume.
Example of an isochoric process: An ideal gas is contained in a rigid cylinder (one whose volume cannot change). If the cylinder is heated, no work can be done even though enormous forces are generated within the cylinder. No work is done because there is no displacement (the system does not move). The heat added only increases the internal energy of the system.
How to tell if heat is added or removed: in an isochoric process, heat is added when the pressure increases.
How to tell if the temperature increases or decreases: since U=3/2 nRT, if the internal energy is increasing, then the temperature is increasing. In our diagram, point 1 is at a higher temperature than point 2.
4. Adiabatic Process No heat (Q) is allowed to flow into or out of the system. This can occur if the system is well-insulated or the process happens quickly. (in other words, Q=0)
ΔU = W
The internal energy and the temperature decrease if the gas expands.
In this well-insulated process shown, heat cannot transfer to the environment. The amount of work done is represented by the area under the path from state 1 to state 2. In this example, the volume increases along the path from state 1 to state 2, so work is done on the environment by the system (negative work). There is a decrease in internal energy, and therefore in temperature.
Example of an adiabatic process: An ideal gas is contained in a cylinder with a moveable piston. Insulating material surrounds the cylinder, preventing heat flow. The ideal gas is compressed adiabatically by pushing against the moveable piston. Work is done on the gas (positive work). Remember, Q=0. The amount of work done in the adiabatic compression results in an increase in the internal energy of the system.
How to tell if the temperature increases or decreases: since U=3/2 nRT, if the internal energy increases, the temperature increases. In our example, the final temperature would be greater than the initial temperature. In our pV diagram, the temperature at point 1 is greater than the temperature at point 2.
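Since Q = 0 in an adiabatic compression, all of the work done on the gas shows up as internal energy, and for the monatomic ideal gas used in these notes the temperature rise follows from ΔU = 3/2 nRΔT. A sketch with assumed numbers:

```python
R = 8.314  # ideal gas constant, J/(mol K)

# Assumed example: 200 J of work done ON 1 mol of monatomic ideal gas, adiabatically (Q = 0)
n = 1.0
W_on = 200.0
dU = W_on                   # first law with Q = 0: delta U = W
dT = dU / (1.5 * n * R)     # from U = 3/2 n R T
print(f"Temperature rise: {dT:.1f} K")   # about 16 K
```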
2nd law of thermodynamics This law is a statement about which processes can occur in nature and which cannot.
The second law of thermodynamics explains things that don't happen:
It is not possible to reach absolute zero (0 K). Since heat can only flow from a hot to a cold substance, in order to decrease the temperature of a substance, heat must be removed and transferred to a "heat sink" (something that is colder). Since there is no temperature less than absolute zero, there is no heat sink to use to remove heat to reach that temperature.
DS = Q / T
where T is the Kelvin temperature
Determining how entropy changes: When dealing with entropy, it is the change in entropy which is important.
· In a reversible process (one in which there is no friction), if heat is added to a system, the entropy of the system increases, and vice versa. If entropy increases for the system, it must decrease for the environment by the same amount, and vice versa. For reversible processes, the total entropy (the entropy of the system plus the environment) is constant.
· In an irreversible process (those in the real world), the total entropy either is unchanged or increases.
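As a quick numeric illustration of DS = Q / T (with assumed values, not taken from the text): when heat leaks directly from a hot reservoir into a cold one, the cold side gains more entropy than the hot side loses, so the total entropy increases, just as the second point above says for real-world processes.

    Q = 1000.0                      # joules of heat transferred
    T_hot, T_cold = 500.0, 300.0    # reservoir temperatures in kelvin

    dS_hot = -Q / T_hot             # the hot reservoir loses entropy
    dS_cold = +Q / T_cold           # the cold reservoir gains entropy
    print(dS_hot + dS_cold)         # about +1.33 J/K > 0: total entropy increases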
1. automobile engines-thermal energy from a high heat source is converted into mechanical energy (work) and exhaust is expelled
2. refrigerator-thermal energy is removed from a cold body (work is required) and transferred to a hot body (the room). Another example is a heat pump.
Drawing of a real engine showing transfer of heat from a high to a low temperature reservoir, performing work. The figure below shows the overall operation of a heat engine. During every cycle, heat QH is extracted from a reservoir at temperature TH; useful work is done and the rest is discharged as heat QL to a reservoir at a cooler temperature TL. Since an engine operates in a cycle, there is no change in internal energy and the net work done per cycle equals the net heat transferred per cycle.
The purpose of an engine is to transform as much QH into work as possible. The second law also explains why coffee can't spontaneously start swirling around: that would require heat to be withdrawn from the coffee and totally transformed into work. A heat engine converts thermal energy into mechanical energy.
Drawing of a refrigerator showing transfer of heat from a low to a high temperature reservoir, requiring work. The purpose of a refrigerator is to transfer heat from the low-temperature to the high-temperature reservoir, using as little work as possible.
There is no perfect refrigerator because it is not possible for heat to flow from one body to another body at a higher temperature with no other change taking place. A heat pump or a refrigerator does the reverse of a heat engine: it uses mechanical energy (work) to move thermal energy from a colder body to a hotter one.
Efficiency of a heat engine The efficiency e of any heat engine is defined as the ratio of the work the engine does (W) to the heat input at the high temperature (QH).
e = W / QH
or, e = (QH - QL) / QH
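A hedged example with made-up numbers, just to show that the two expressions for e above agree:

    QH = 2000.0   # joules taken from the hot reservoir each cycle
    QL = 1400.0   # joules discharged to the cold reservoir each cycle

    W = QH - QL                  # net work per cycle (no change in internal energy over a cycle)
    e = W / QH
    print(e, (QH - QL) / QH)     # 0.30 either way

For the same reservoirs, the Carnot (ideal) efficiency defined next caps e at (TH - TL) / TH.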
Carnot (ideal) efficiency This is the theoretical limit to efficiency. It is defined in terms of the operating temperatures.
eideal = (TH - TL) / TH | http://www.greatneck.k12.ny.us/GNPS/SHS/dept/science/wells/ap/heat%20and%20thermo%20reading.htm | 13 |
15 | Pi Recipes
To Eudoxus of Cnidus (c. 400–350 bce) goes the honour of being the first to show that the area of a circle is proportional to the square of its radius. In today’s algebraic notation, that proportionality is expressed by the familiar formula A = πr². Yet the constant of proportionality, π, despite its familiarity, is highly mysterious, and the quest to understand it and find its exact value has occupied mathematicians for thousands of years. A century after Eudoxus, Archimedes found the first good approximation of π: 3 10/71 < π < 3 1/7. He achieved this by approximating a circle with a 96-sided polygon. Even better approximations were found by using polygons with more sides, but these only served to deepen the mystery, because no exact value could be reached, and no pattern could be observed in the sequence of approximations.
A stunning solution of the mystery was discovered by Indian mathematicians about 1500 ce: π can be represented by the infinite, but amazingly simple, series π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯. They discovered this as a special case of the series for the inverse tangent function: tan⁻¹(x) = x − x³/3 + x⁵/5 − x⁷/7 + ⋯.
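A short Python sketch of the series just quoted; it is only an illustration of how slowly the partial sums approach π and is not part of the original article.

    def leibniz_pi(terms):
        # partial sums of pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
        total = 0.0
        for k in range(terms):
            total += (-1) ** k / (2 * k + 1)
        return 4 * total

    for n in (10, 1000, 100000):
        print(n, leibniz_pi(n))   # creeps toward 3.14159... only gradually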
The individual discoverers of these results are not known for certain; some scholars credit them to Nilakantha Somayaji, some to Madhava. The Indian proofs are structurally similar to proofs later discovered in Europe by James Gregory, Gottfried Wilhelm Leibniz, and Jakob Bernoulli. The main difference is that, where the Europeans had the advantage of the fundamental theorem of calculus, the Indians had to find limits of sums of integer powers, of the form (1^k + 2^k + ⋯ + n^k)/n^(k+1).
Before Gregory’s rediscovery of the inverse tangent series about 1670, other formulas for π were discovered in Europe. In 1655 John Wallis discovered the infinite product π/4 = 2/3 ∙ 4/3 ∙ 4/5 ∙ 6/5 ∙ 6/7 ⋯, and his colleague William Brouncker transformed this into the infinite continued fraction 4/π = 1 + 1²/(2 + 3²/(2 + 5²/(2 + ⋯))).
Finally, in Leonhard Euler’s Introduction to Analysis of the Infinite (1748), the series π/4 = 1 − 1/3 + 1/5 − 1/7 + ⋯ is transformed into Brouncker’s continued fraction, showing that all three formulas are in some sense the same.
Brouncker’s infinite continued fraction is particularly significant because it suggests that π is not an ordinary fraction—in other words, that π is irrational. Precisely this idea was used in the first proof that π is irrational, given by Johann Lambert in 1767.
John Colin Stillwell
| http://www.britannica.com/EBchecked/topic/1084437/Pi-Recipes | 13 |
12 | Today is Ada Lovelace Day, “an international day of blogging to draw attention to women excelling in technology.” I — along with more than a thousand other people — have pledged to write about a female role model in technology.
Ada Lovelace was Byron’s daughter and worked with computer pioneer Charles Babbage on his “Computing Engines” — and is widely thought of as the first computer programmer. A reconstruction of the “Difference Engine” is on view at the Science Museum around the corner from here, and if you’re reading this on 24 March, you can go and talk to Ada herself!
But I want to talk not about a programmer, but a computer. That is, a computer named Henrietta Swan Leavitt. In the early 20th Century, some (always male) astronomers had batteries of (almost always female) “computers” working for them, doing their calculations and other supposedly menial scientific work.
Leavitt — who had graduated from Radcliffe College — was employed by Harvard astronomer Edward Charles Pickering to analyze photographic plates: she counted stars and measured their brightness. Pickering was particularly interested in “variable stars”, which changed their brightness over time. The most interesting variable stars changed in a regular pattern and Leavitt noticed that, for a certain class of these stars known as Cepheids, the brighter ones had longer periods. Eventually, in 1912, she made this more precise, and to this day the “Cepheid Period-Luminosity Relationship” remains one of the most important tools in the astronomer’s toolbox.
It’s easy enough to measure the period of a Cepheid variable star: just keep taking data, make a graph, and see how long it takes to repeat itself. Then, from the Period-Luminosity relationship, we can determine its intrinsic luminosity. But we can also easily measure how bright it appears to us, and use this, along with the inverse-square relationship between intrinsic luminosity and apparent brightness, to get the distance to the star. That is, if we put the same star twice as far away, it’s four times dimmer; three times as far is nine times dimmer, etc.
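A rough Python sketch of that two-step recipe. The slope and zero point of the period-luminosity relation below are approximate textbook-style values assumed for illustration; they are not quoted in this post.

    import math

    def cepheid_distance_parsecs(period_days, apparent_mag):
        # Assumed period-luminosity relation (absolute visual magnitude)
        M = -2.43 * (math.log10(period_days) - 1.0) - 4.05
        # The distance modulus m - M = 5*log10(d/10pc) is just the
        # inverse-square law rewritten in magnitudes.
        return 10 ** ((apparent_mag - M + 5) / 5)

    # A Cepheid with a 30-day period that appears at magnitude 19
    # comes out at roughly 700,000 parsecs -- far outside the Milky Way.
    print(cepheid_distance_parsecs(period_days=30.0, apparent_mag=19.0))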
This was just the technique that astronomy needed, and within a couple of decades it had led to a revolution in our understanding of the scale of the cosmos. First, it enabled astronomers to map out the Milky Way. But at this time, it wasn’t even clear whether the Milky Way was the only agglomeration of stars in the Universe, or one amongst many. Indeed, this was the subject of the so-called “great debate” in 1920 between American astronomers Harlow Shapley and Heber Curtis. Shapley argued that all of the nebulae (fuzzy patches) on the sky were just local collections of stars, or extended clouds of gas, while Curtis argued that some of them (in particular, Andromeda) were galaxies — “Island Universes” as they were called — like our own. By at least some accounts, Shapley won the debate at the time.
But very soon after, due to Leavitt’s work, Edwin Hubble determined that Curtis was correct: he saw the signature of Cepheid stars in (what turned out to be) the Andromeda galaxy and used them to measure its distance, which turned out to be much greater than the distance to any of the stars in our own galaxy. A few years later, Hubble used Leavitt’s Period-Luminosity relationship to make an even more startling discovery: more distant galaxies were receding from us at a speed (measured using the galaxy’s redshift) proportional to their distance from us. This is the observational basis for the Big Bang theory of the Universe, tested and proven time and again in the eighty or so years since then.
Leavitt’s relationship remains crucial to astronomy and cosmology. The Hubble Space Telescope’s “Key Project” was to measure the brightness and period of Cepheid stars in galaxies as far away as possible, determining Hubble’s proportionality constant and setting an overall scale for distances in the Universe.
The social situation of academic astronomy of her day strongly limited Leavitt’s options — women weren’t allowed to operate telescopes, and it was yet more difficult for her as she was deaf, as well. Although Leavitt was “only” employed as a computer, she was eventually nominated for a Nobel prize for her work — but she had already died. We can only hope that the continued use of her results and insight to this day is a small recompense and recognition of her life and work. | http://www.andrewjaffe.net/blog/2009/03/ada-lovelace-da.html | 13 |
11 | One of the most important features of the microcontroller is the number of input/output pins used for connection with peripherals. In this case, there are a total of thirty-five general-purpose I/O pins available, which is quite enough for most applications.
In order for the pins’ operation to match the internal 8-bit organization, all of them are, similar to registers, grouped into five so-called ports denoted A, B, C, D and E. They all have several features in common:
By clearing a bit of the TRIS register (bit = 0), the corresponding port pin is configured as an output. Similarly, by setting a bit of the TRIS register (bit = 1), the corresponding port pin is configured as an input. This rule is easy to remember: 0 = Output, 1 = Input.
Fig. 3-1 I/O Ports
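As a rough illustration of the 0 = Output, 1 = Input rule above (Python is used here only for the bit arithmetic; on the real chip you would simply write the resulting value into the TRIS register from C or assembly):

    def tris_value(input_pins):
        """Build an 8-bit TRIS value with the listed pin numbers as inputs."""
        value = 0x00                 # start with every pin configured as an output
        for pin in input_pins:
            value |= (1 << pin)      # setting a bit makes that pin an input
        return value

    # Example: RB0 and RB7 as inputs, RB1..RB6 as outputs
    print(f"TRISB = 0b{tris_value([0, 7]):08b}")   # prints TRISB = 0b10000001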
Port A is an 8-bit wide, bidirectional port. Bits of the TRISA and ANSEL registers control the Port A pins. All Port A pins act as digital inputs/outputs. Five of them can also be analog inputs (denoted as AN):
Fig. 3-2 Port A and TRISA Register
Similar to bits of the TRISA register which determine which of the pins will be configured as input and which as output, the appropriate bits of the ANSEL register determine whether the pins will act as analog inputs or digital inputs/outputs.
Each bit of this port has an additional function related to some of built-in peripheral units. These additional functions will be described in later chapters. This chapter covers only the RA0 pin’s additional function since it is related to Port A only.
The microcontroller is commonly used in devices which have to operate periodically and completely independently, using a battery power supply. In such cases, minimal power consumption is one of the priorities. Typical examples of such applications are thermometers, fire-detection sensors and the like. It is known that a reduction in clock frequency reduces power consumption, so one of the most convenient solutions to this problem is to slow the clock down (use a 32 kHz quartz crystal instead of a 20 MHz one).
Setting the microcontroller to sleep mode is another step in the same direction. However, even when both measures are applied, another problem arises: how to wake the microcontroller and set it back to normal mode. It is obviously necessary to have an external signal change the logic state on one of the pins. Thus, the problem still exists: this signal must be generated by additional electronics, which causes higher power consumption of the entire device.
The ideal solution would be the microcontroller wakes up periodically by itself, which is not impossible at all. The circuit which enables that is shown in figure on the right.
Fig. 3-3 ULPWU Unit
The principle of operation is simple:
A pin is configured as output and logic one (1) is brought to it. That causes the capacitor to be charged. Immediately after this, the same pin is configured as an input. The change of logic state enables an interrupt and the microcontroller is set to Sleep mode. Afterwards, there is nothing else to be done except wait for the capacitor to discharge by the leakage current flowing out through the input pin. When it occurs, an interrupt takes place and the microcontroller continues with the program execution in normal mode. The whole sequence is repeated...
Theoretically, this is a perfect solution. The problem is that all the pins able to cause an interrupt in this way are digital and have a relatively large leakage current when their voltage is not close to the limit values Vdd (1) or Vss (0). In that case the capacitor discharges in a short time, since the leakage current amounts to several hundreds of microamperes. This is why the ULPWU circuit, able to register slow voltage drops with ultra-low power consumption, was designed. Its output generates an interrupt, while its input is connected to one of the microcontroller pins: the RA0 pin. Referring to Fig. 3-4 (R = 200 ohms, C = 1 nF), the discharge time is approximately 30 ms, while the total consumption of the microcontroller is 1000 times lower (several hundreds of nanoamperes).
Fig. 3-4 Sleep Mode
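A back-of-envelope check of the numbers quoted above, treating the pin leakage current as a constant few hundred nanoamperes (an assumption made only for this rough estimate):

    C = 1e-9         # farads (the 1 nF capacitor)
    V = 5.0          # volts, roughly the supply voltage the capacitor is charged to
    I_leak = 170e-9  # amperes, assumed constant leakage through the input pin

    t = C * V / I_leak
    print(f"discharge time ~ {t * 1e3:.0f} ms")   # ~29 ms, consistent with the ~30 ms above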
Port B is an 8-bit wide, bidirectional port. Bits of the TRISB register determine the function of its pins.
Fig. 3-5 Port B and TRISB register
Similar to Port A, a logic one (1) in the TRISB register configures the appropriate port pin as input and vice versa. Six pins on this port can act as analog inputs (AN). The bits of the ANSELH register determine whether these pins act as analog inputs or digital inputs/outputs:
Each Port B pin has an additional function related to some of the built-in peripheral units, which will be explained in later chapters.
Fig. 3-6 WPUB register
Each Port B pin has a weak internal pull-up resistor which can be enabled by setting the corresponding bit of the WPUB register. Having a high resistance (several tens of kilo-ohms), these “virtual” resistors do not affect pins configured as outputs, but serve as a useful complement to inputs. As such, they are connected to CMOS logic circuit inputs, which would otherwise act as if they were floating because of their high input resistance.
Fig. 3-7 Pull-up resistors
* Apart from the bits of the WPUB register, there is another bit affecting the pull-up resistors: the RBPU bit of the OPTION_REG register. It is a general-purpose bit because it affects the pull-ups of all Port B pins at once.
Fig. 3-8 IOCB register
Each Port B pin can also be configured to trigger an interrupt on any change of its logic state; the bits of the IOCB register enable this feature pin by pin. Because of these features, the Port B pins are commonly used for reading push-buttons on a keyboard, because they unerringly register any button press. Therefore, there is no need to “scan” these inputs all the time.
Fig. 3-9 Keyboard Example
When the X, Y and Z pins are configured as outputs set to logic one (1), it is only necessary to wait for an interrupt request, which arrives upon any button press. By combining zeros and ones on these outputs, the program then determines which push-button is pressed.
The RB0/INT pin is the single “true” external interrupt source. It can be configured to react to the signal’s rising edge (zero-to-one transition) or falling edge (one-to-zero transition). The INTEDG bit of the OPTION_REG register selects the active edge.
You have probably noticed that the PIC16F887 microcontroller does not have any special pins for programming (writing the program to ROM). Port pins available as general-purpose I/O pins during normal operation are used for this purpose (the Port B pins RB6 (clock) and RB7 (data) are used during program loading). In addition, it is necessary to apply a power supply voltage Vdd (5V) and Vss (0V), as well as the voltage for FLASH memory programming, Vpp (12-14V). During programming, the Vpp voltage is applied to the MCLR pin. All details concerning this process, as well as which one of these voltages is applied first, are beside the point; the programmer’s electronics take care of that. The point is that the program can be loaded into the microcontroller even when it is soldered onto the target device. Normally, the loaded program can also be changed in the same way. This function is called ICSP (In-Circuit Serial Programming). It is necessary to plan ahead when using it.
It is not complicated at all! It is only necessary to install a 4-pin connector onto the target device so that the necessary programming voltages may be applied to the microcontroller. In order that these voltages don’t interfere with the rest of the device’s electronics, provide some way of breaking this connection (using resistors or jumpers).
Fig. 3-10 ICSP Connection
These voltages are applied to socket pins in which the microcontroller is to be placed.
Fig. 3-11 Programmer On-Board Connections
Port C is an 8-bit wide, bidirectional port. Bits of the TRISC Register determine the function of its pins. Similar to other ports, a logic one (1) in the TRISC Register configures the appropriate port pin as an input.
Fig. 3-12 Port C and TRISC Register
All additional functions of this port's bits will be explained later.
Port D is an 8-bit wide, bidirectional port. Bits of the TRISD register determine the function of its pins. A logic one (1) in the TRISD register configures the appropriate port pin as input.
Fig. 3-13 Port D and TRISD Register
Port E is a 4-bit wide, bidirectional port. The TRISE register’s bits determine the function of its pins. Similar to other ports, a logic one (1) in the TRISE register configures the appropriate port pin as input. The exception is RE3 which is input only and its TRIS bit is always read as ‘1’.
Fig. 3-14 Port E and TRISE Register
Similar to Ports A and B, three pins can in this case be configured as analog inputs. The ANSEL register bits determine whether a pin will act as an analog input (AN) or a digital input/output:
The ANSEL and ANSELH registers are used to configure the input mode of an I/O pin to analog or digital.
Fig. 3-15 ANSEL and ANSELH Registers
The rule is:
To configure a pin as an analog input, the appropriate bit of the ANSEL or ANSELH register must be set (1). To configure a pin as a digital input/output, the appropriate bit must be cleared (0).
The state of the ANSEL bits has no effect on digital output functions. Any attempt to read a port pin configured as an analog input will return 0.
Fig. 3-16 ANSEL and ANSELH Configuration
You will probably never write a program which uses all the ports in a way that justifies learning everything there is to know about them. However, they are probably the simplest modules within the microcontroller. This is how they are used: | http://www.mikroe.com/chapters/view/4/chapter-3-i-o-ports/ | 13 |
24 | Visual literacy has practical uses across the curriculum. Science, technology, social studies, health and nutrition, history and geography, mathematics, and arts and crafts all rely on visual texts such as maps, diagrams, graphs, timelines or tables.
Students can use a table to list all the questions they aim to answer. The table helps them to see how much they have researched and what still needs to be investigated.
Re-composing information as a diagram, map or table helps children to see how facts are connected, whereas "making notes" often provides only a collection of isolated pieces of data.
Visual texts do some things better than verbal texts; verbal texts do some things better than visual texts.
Verbal texts (texts made of words and sentences) are ideal for recording details and examples. They capture nuances, subtleties and ambiguities. But they are poor at showing the sequence and structure of ideas, that is, how all the pieces fit together.
Visual texts tend to simplify and generalize a topic and may omit minor details. But they are best at capturing the connections between the key details and showing the structure or organizing principle of a topic.
Re-composing means reading information in one form (such as words and sentences) and summarizing it in another form (such as a diagram or table).
When you ask students to re-compose the information, they can no longer simply copy their source. They need to think about what a paragraph means before they can summarize it as a visual text.
Re-composing is a key strategy in aiding comprehension.
Visual texts are graphic organizers. Visual texts, such as flow charts and tree diagrams, are ideal for providing a framework for writing.
Students can plan nonfiction writing by using a suitable visual text:
- Report: use a table or tree diagram to organize the order of the paragraphs ("Which comes first? What goes next?").
- Recount: recall the key events along a timeline before starting to write.
- Explanation: use a flow chart to sequence the steps in an explanation.
- Procedure: organize the steps in the right order using a timeline or flow chart.
- Exposition (persuasion): use a flow chart to sequence in the best order all the reasons for a point of view.
- Discussion: draw up a table of reasons "for" and "against" before making a decision about which side of a discussion you support.
Visual texts are more accessible than words. Young readers can interpret ("read") diagrams and maps long before they can read the same information in words and sentences.
Support their reading with information books that cue the unfamiliar words with clear diagrams, not just photographs.
Some children who are "unable to read" may be merely waiting for you to provide them with illustrated nonfiction. Not everyone chooses to read fiction.
Visual literacy across the curriculum
Use a diagram ("picture glossary") to provide key vocabulary when introducing unfamiliar or "technical" words.
Flow charts, timelines and tables can help students to plan an essay (such as explanations, recounts, reports).
Often it is more helpful to summarize a text as a diagram or a table, instead of writing disconnected "notes".
Many scientific concepts are more clearly grasped when visualized as a visual text, such as a cross section (for example to explain how we breathe) or a flow chart (to show an animal's life cycle). Relationships in nature can also be summarized as a web diagram (such as a food web) or a flow chart (such as the water cycle).
Procedures ("how to...") can be followed more easily when arranged as a flow chart or storyboard.
Relationships can be understood quickly if you sometimes use a web diagram (sometimes called a sociogram) or a tree diagram.
Flow charts are useful in explaining topics such as recycling, habitats, interdependence and responsibilities.
Use timelines to summarize sequences of events.
Flow maps (maps with arrows showing journeys) help children to visualize exploration and migration themes.
Changes over time, causes of key events, and sequencing of events can be shown clearly using flow charts.
Graphs help visualize economic and other changes over time.
Graphs (line, bar, and pie) help students to grasp concepts such as climate, population change, and public opinion.
Flow charts help visualize topics such as the water cycle, climate change, globalization, and Earth processes.
Maps can be used to visualize political states, climate, vegetation, wealth and poverty, trade, war, and so on.
Use a pie chart to show the proportions of different food groups we eat.
A flow chart can help students to understand processes such as respiration.
Graphs can record changes in body temperature, heartbeat, pulse and breathing before and after exercise.
Cross sections and cutaway diagrams help students to visualize how the body works.
Young children can benefit from visualizing addition and subtraction using simple bar graphs.
Many mathematical concepts are best shown in maps and diagrams.
Children can interpret problems more successfully if they are encouraged to visualize the key elements in a map or diagram.
Visualizing also assists work in measurement and recording of data.
Use storyboards to support instructions in craft activities and to explain how different materials are used.
Students who elect to take art subjects are identifying themselves as visual learners; build on their visual strengths by providing explanations as flow charts and organizing cooperative projects using tables.
Planning to write an essay (grades 4-8)
Students who have prepared plenty of detailed notes on a topic can still feel "lost" when starting to write their essay. Before starting the essay, therefore, ask students to plan it using one of these visual texts. Each text has a different purpose:
- A tree diagram organizes the topic into groups and examples. It is ideal for writing an information report.
- A timeline arranges events in time sequence and is useful in planning the order in which to write a recount.
- A storyboard shows how an item changes over time, making it a suitable planning tool for writing procedures.
- A flow chart sequences events in order and is one of the most useful visual texts, helping in the planning of explanations and procedures.
- A table summarizes groups and allows us to compare them, aspect by aspect. This helps in the planning of a discussion of different points of view, or deciding on a choice.
More on visual planners
Instead of making "notes," a diagram with labels can help children to remember the meanings of unfamiliar words.
A flow chart helps to summarize a sequence of events (in history) or cause-and-effect (in history or science).
Use a table to divide the topic into groups and to suggest the order in which to write about them.
Use a tree diagram to show how subtopics are related to main topics.
More on visual summaries
Use a table to list all the questions you aim to answer. The table helps you to know how much you have researched and what still needs to be investigated.
Help children to concentrate on a text by having them guess its meaning first, connecting the key words in a web diagram.
Ask students to read information in one form (such as an explanation in words and sentences) and to summarize it in a different form (such as a flow chart). This strategy, called re-composing, avoids "copying" and requires the student to figure out the key facts, guiding concept, and the structure of the information.
Map the characters in a novel with a kind of web diagram called a sociogram. Plots and subplots in novels and picture books are best summarized in a flow chart. Simpler plots can be storyboarded.
Learning to read (grades K-3)
Diagrams help young readers to predict unfamiliar words. Make sure your nonfiction books include diagrams such as cross sections and flow charts, not just photographs.
Diagrams with labels are more helpful than vocabulary lists for beginning readers. The pictorial part of the diagram helps them to see the meaning of each word and therefore to find the word they need.
Young readers can interpret information in a map or diagram long before they can understand the "same information" written out in sentences.
Problem solving / decision making (grades 3-8)
Diagrams and maps can help us explain how things are made or how machines work.
Use a table to compare alternative solutions to a problem. The table also helps us to make a decision because it arranges side-by-side the strengths and weaknesses of each possible "answer."
Copyright © Black Cockatoo Publishing PL 2011 | http://www.k-8visual.info/using_Text.html | 13 |
11 | Researchers at CSU have teamed up with NASA to test water-saving technology on California crops
By Vinnee Tong
Near the Central Valley town of Los Banos, Anthony Pereira opens a tap to send water into the fields at his family’s farm. Pereira grows cotton, alfalfa and tomatoes. And he is constantly deciding how much water is the right amount to use.
“Water savings is always an issue,” he says. “That’s why we’re going drip here on this ranch. We gotta try to save what we can now for the years to come.”
Thanks to some new technology, that might get a little easier. To help farmers like Pereira, engineers at NASA and CSU Monterey Bay are developing an online tool that can estimate how much water a field might need. Here’s how it works: satellites orbiting the earth take high-resolution pictures — so detailed that you can zoom in to a quarter of an acre.
“The satellite data is allowing us to get a measurement of how the crop is developing,” says CSUMB scientist Forrest Melton, the lead researcher on the project. “We’re actually measuring the fraction of the field that’s covered by green, growing vegetation.”
Those images are combined with data they’re collecting right now at a dozen California farms from Redding to Bakersfield and from Salinas to Visalia.
In Pereira’s fields a tractor carrying tomato seedlings leads the way as farm workers nestle the plants into the dirt. Alongside them the researchers drill holes in the ground to put sensors underneath and around the crops. The sensors measure wind temperature, radiant energy from the sun and how thirsty the soil may be on a given day.
Walking through the field, researcher Chris Lund is carrying equipment that will collect all that data.
“Once a minute it’ll take a measurement of all the sensors that are attached to this,” he explains, “the soil moisture sensors, the soil water potential sensors, and in this case the capillary lysimeter, which measures how much water is going out the bottom of the system.”
Using this information with the satellite images that are updated about once a week, the researchers have come up with a formula that can estimate how much water a field might need. Farmers will soon be able to access estimates for their fields online and eventually they’ll be able to use their cell phones.
That means Pereira will no longer have to rely on the old-school way of deciding how much water to use.
“Before, everything was furrow-irrigated or flood-irrigated, and we’d just schedule depending on what the weather is,” he told me. “If it’s warm, we say, ‘OK we’re going to try to irrigate every two weeks.’ If it’s cooler, then let’s try to stretch it out another week, 10 days or so to make the water stretch out more.”
The California Department of Water Resources estimates water savings could amount to hundreds of dollars per acre, and the crop yield could be better, too. The joint research team sees its water-saving tool as something that could be used by any farmer. At the Ames Research Center in Mountain View, NASA’s Rama Nemani studies a map of the world mounted on a wall.
“If you look at the map like this, there are a lot of areas that are like California that are starved for water but need to still produce food,” he says. “So we have to figure out how to use whatever limited water each place has to the best possible extent.”
This online water saving tool could be available at no cost to farmers around the state as soon as next year, and eventually to farmers around the world.
Hear the radio version of this story from KQED’s The California Report. | http://blogs.kqed.org/climatewatch/2012/07/17/satellites-helping-save-water-on-california-farms/ | 13 |
10 | Measurement of Atomic Weights
You may have wondered why we have been so careful to define atomic weights and isotopic weights as ratios of masses. The reason will be clearer once the most important and accurate experimental technique by which isotopic weights are measured has been described. This technique, called mass spectrometry, has developed from the experiments with cathode-ray tubes mentioned earlier in this chapter. It depends on the fact that an electrically charged particle passing through a magnetic field of constant strength moves in a circular path. The radius r of such a path is directly proportional to the mass m and the speed u of the particle, and inversely proportional to the charge Q. Thus the greater the mass or speed of the particle, the greater the radius of its path. The greater the charge, the smaller the radius. In a mass spectrometer, as seen below, atoms or molecules in the gaseous phase are bombarded by a beam of electrons. Occasionally one of these electrons will strike another electron in a particular atom, and both electrons will have enough energy to escape the attraction of the positive nucleus. This leaves behind a positive ion since the atom now has one more proton than it has electrons. For example,
¹²₆C + e⁻ (high-speed electron) → ¹²₆C⁺ + 2e⁻
Once positive ions are produced in a mass spectrometer, they are accelerated by the attraction of a negative electrode and pass through a slit. This produces a narrow beam of ions traveling parallel to one another. The beam then passes through electric and magnetic fields. The fields deflect away all ions except those traveling at a certain speed.
The beam of ions is then passed between the poles of a large electromagnet. Since the speed and charge are the same for all ions, the radii of their paths depend only on their masses. For different ions of masses m1 and m2, the ratio of the radii equals the ratio of the masses, r1/r2 = m1/m2, and so the ratio of masses may be obtained by measuring the ratio of radii. The paths of the ions are determined either by a photographic plate (which darkens where the ions strike it, as in Figure 1) or a metal plate connected to a galvanometer (a device which detects the electric current due to the beam of charged ions).
EXAMPLE When a sample of carbon is vaporized in a mass spectrometer, two lines are observed on the photographic plate. The darker line is 27.454 cm, and the other is 29.749 cm from the entrance slit. Determine the relative atomic masses (isotopic weights) of the two isotopes of carbon.
Solution: Since the distance from the entrance slit to the line on the photographic plate is twice the radius of the circular path of the ions, we have
m2/m1 = r2/r1 = (29.749 cm)/(27.454 cm) = 1.08359
Thus m2 = 1.08359m1. If we assume that the darker mark on the photographic plate is produced because there are a greater number of ¹²₆C⁺ ions than of the less common ¹³₆C⁺, then m1 may be equated with the relative mass of ¹²₆C and may be assigned a value of 12.000 000 exactly. The isotopic weight of ¹³₆C is then
m2 = (1.083 59)(12.000 000) = 13.0031
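The arithmetic of this example can be checked with a few lines of Python (the distances are the measured line positions quoted above):

    d1, d2 = 27.454, 29.749        # cm from the entrance slit: darker (12C+) and fainter (13C+) lines
    ratio = d2 / d1                # equals r2/r1, and hence m2/m1
    m1 = 12.000000                 # relative mass of carbon-12, by definition
    m2 = ratio * m1
    print(round(ratio, 5), round(m2, 4))   # 1.08359  13.0031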
Notice that in mass spectrometry all that is required is that the charge and speed of the two ions whose relative masses are to be determined be the same. If the mass of an individual ion were to be measured accurately, its actual speed upon entering the magnetic field and the exact magnitude of its electric charge would have to be known very accurately. Therefore it is easier to measure the ratio of two masses than to determine a single absolute mass, and so atomic weights are reported as pure numbers. | http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/The%20Structure%20of%20Atoms/1224/measurement-atomic-weights | 13 |
11 | Putting a New Spin on Computing
Physicists at the UA have achieved a breakthrough toward the development of a new breed of computing devices that can process data using less power.
Unlike conventional computing devices, which require electric charges to flow along a circuit, spintronics harnesses the magnetic properties of electrons rather than their electric charge to process and store information.
"Spintronics has the potential to overcome several shortcomings of conventional, charge-based computing. Microprocessors store information only as long as they are powered up, which is the reason computers take time to boot up and lose any data in their working memory if there is a loss of power," said Philippe Jacquod, an associate professor with joint appointments in the College of Optical Sciences and the department of physics at the College of Science, who published the research together with his postdoctoral assistant, Peter Stano.
"In addition, charge-based microprocessors are leaky, meaning they have to run an electric current all the time just to keep the data in their working memory at their right value," Jacquod added. "That's one reason why laptops get hot while they're working."
"Spintronics avoids this because it treats the electrons as tiny magnets that retain the information they store even when the device is powered down. That might save a lot of energy."
To understand the concept of spintronics, it helps to picture each electron as a tiny magnet, Jacquod explained.
"Every electron has a certain mass, a certain charge and a certain magnetic moment, or as we physicists call it, a spin," he said. "The electron is not physically spinning around, but it has a magnetic north pole and a magnetic south pole. Its spin depends on which pole is pointing up."
Current microprocessors digitize information into bits, or "zeroes" and "ones," determined by the absence or presence of electric charges. "Zero" means very few electronic charges are present; "one" means there are many of them. In spintronics, only the orientation of an electron's magnetic spin determines whether it counts as a zero or a one.
"You want as many magnetic units as possible, but you also want to be able to manipulate them to generate, transfer and exchange information, while making them as small as possible" Jacquod said.
Taking advantage of the magnetic moment of electrons for information processing requires converting their magnetic spin into an electric signal. This is commonly achieved using contacts consisting of common iron magnets or with large magnetic fields. However, iron magnets are too crude to work at the nanoscale of tomorrow's microprocessors, while large magnetic fields disturb the very currents they are supposed to measure.
"Controlling the spin of the electrons is very difficult because it responds very weakly to external magnetic fields," Jacquod explained. "In addition, it is very hard to localize magnetic fields. Both make it hard to miniaturize this technology."
"It would be much better if you could read out the spin by making an electric measurement instead of a magnetic measurement, because miniaturized electric circuits are already widely available," he added.
In their research paper, based on theoretical calculations controlled by numerical simulations, Jacquod and Stano propose a protocol using existing technology and requiring only small magnetic fields to measure the spin of electrons.
"We take advantage of a nanoscale structure known as a quantum point contact, which one can think of as the ultimate bottleneck for electrons," Jacquod explained. "As the electrons are flowing through the circuit, their motion through that bottleneck is constrained by quantum mechanics. Placing a small magnetic field around that constriction allows us to measure the spin of the electrons."
"We can read out the spin of the electrons based on how the current through the bottleneck changes as we vary the magnetic field around it. Looking at how the current changes tells us about the spin of the electrons."
"Our experience tells us that our protocol has a very good chance to work in practice because we have done similar calculations of other phenomena," Jacquod said. "That gives us the confidence in the reliability of these results."
In addition to being able to detect and manipulate the magnetic spin of the electrons, the work is a step forward in terms of quantifying it.
"We can measure the average spin of a flow of electrons passing through the bottleneck," Jacquod explained. "The electrons have different spins, but if there is an excess in one direction, for example ten percent more electrons with an upward spin, we can measure that rather precisely."
He said that up until now, researchers could only determine there was excess, but were not able to quantify it.
"Once you know how to produce the excess spin and know how to measure it, you could start thinking about doing basic computing tasks," he said, adding that in order to transform this work into applications, some distance has yet to be covered.
"We are hopeful that a fundamental stumbling block will very soon be removed from the spintronics roadmap," Stano added.
Spintronics could be a stepping stone for quantum computing, in which an electron not only encodes zero or one, but many intermediate states simultaneously. To achieve this, however, this research should be extended to deal with electrons one-by-one, a feat that has yet to be accomplished. | http://www.uanews.org/story/putting-new-spin-computing | 13 |
34 | What Is Calculus?
By this point, you should be familiar with using functions and solving equations (and systems of equations) involving real numbers using the techniques you learned in Algebra. With Calculus we are allowed to do things that we are not allowed to do in Algebra, using two mathematical constructs in particular: infinity, which is larger than every number, and the infinitesimal, which is smaller than every positive number yet greater than zero.
Calculus I, AP Calculus AB
You remember from Algebra that the slope of a line can be determined using the change in y divided by the change in x. The slope is simply one number that tells you the direction the line is going. However, most other functions do not have a single, constant slope. You can tell if a function is increasing or decreasing by looking at a graph, and if you were to pick two values of x, say a and b, you could find what’s known as the average rate of change between a and b just by finding the change in y and dividing it by the change in x.
Sometimes, however, we want to find something called the instantaneous rate of change, or the direction in which a function is going at one value of x. We cannot use the average rate of change formula for one point, because the changes in y and in x are both 0, and we are not allowed to divide by 0. What we can do, however, is look at the average rate of change from a to some variable b, and see what happens to the average rate of change as we move b closer to a. As b moves closer to a, we see that both the changes in y and in x are approaching 0, and we can see what happens to that rate the closer they get. This process is called differentiating a function, or taking the derivative of a function.
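A small numerical illustration of this limiting process, using an assumed example function f(x) = x² at a = 3 (not taken from the original page): as b moves toward a, the average rate of change settles down to the derivative.

    def f(x):
        return x ** 2

    a = 3.0
    for h in (1.0, 0.1, 0.001, 0.00001):
        b = a + h
        avg_rate = (f(b) - f(a)) / (b - a)   # average rate of change from a to b
        print(h, avg_rate)                   # tends to 6, the slope of x**2 at x = 3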
Another thing you will learn to do in Calculus I is to find the area bound by a function. You know how to find the area of a rectangle by multiplying its length by its width, but if you want to find the area bound between a curve and the x-axis, you will learn to use a process called Riemann integration.
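A minimal sketch of the idea behind Riemann integration, again with an assumed example function: add up the areas of many thin rectangles under the curve, and the total approaches the exact area as the rectangles get thinner.

    def riemann_sum(f, a, b, n):
        width = (b - a) / n
        # midpoint rule: sample each thin rectangle at its center
        return sum(f(a + (i + 0.5) * width) * width for i in range(n))

    for n in (10, 100, 10000):
        print(n, riemann_sum(lambda x: x ** 2, 0.0, 1.0, n))   # approaches 1/3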
Calculus II, AP Calculus BC
Calculus II continues what you’ve learned in Calc I, and in most cases it will cover three topics:
- Techniques of integration, where you will learn how to integrate more complicated functions, and how to choose what technique to use by looking at a function.
- Intro to differential equations, where you learn to make mathematical models using a differential equation, or an equation with a derivative in it.
- Infinite sequences and series. An infinite sequence is an ordered list of numbers, and here you will learn how to tell whether or not a sequence converges. An infinite series is the sum of all the terms in a sequence, and you will also learn how to tell whether or not a series will converge. You will also learn how to represent some functions, such as sin(x), as a special kind of infinite series called a power series.
Calc III, often called Multivariable Calculus or Vector Calculus, involves applying the techniques learned in Calc I and II to functions of more than one variable. You will learn how to take a partial derivative, which differentiates a multivariable function with respect to only one variable, and you will learn multiple integration, which involves integrating a multivariable function with respect to more than one variable. | http://akronmathtutor.com/calculus-tutor-akron.html | 13 |
13 | Media literacy is a repertoire of competences that enable people to analyze, evaluate, and create messages in a wide variety of media modes, genres, and forms.
Media Education is the process of teaching and learning about media. It is about developing young people's critical and creative abilities when it comes to the media. Media education should not be confused with educational technology or with educational media. Surveys repeatedly show that, in most industrialized countries, children now spend more time watching television than they do in school, or on any other activity apart from sleeping. Media Education has no fixed location, no clear ideology and no definitive recipients; it is subject to the whims of a financial market bigger than itself. Being able to understand the media enables people to analyze, evaluate, and create messages in a wide variety of mediums, genres, and forms. A person who is media literate is informed. There are many reasons why media studies are absent from the primary and secondary school curricula, including cuts in budgets and social services as well as over-packed schedules and expectations.
Education for media literacy often uses an inquiry-based pedagogic model that encourages people to ask questions about what they watch, hear, and read. Media literacy education provides tools to help people critically analyze messages, offers opportunities for learners to broaden their experience of media, and helps them develop creative skills in making their own media messages. Critical analysis can include identifying author, purpose and point of view, examining construction techniques and genres, examining patterns of media representation, and detecting propaganda, censorship, and bias in news and public affairs programming (and the reasons for these). Media literacy education may explore how structural features—such as media ownership, or its funding model -- affect the information presented.
Media literate people should be able to skillfully create and produce media messages, both to show understanding of the specific qualities of each medium, as well as to create independent media and participate as active citizens. Media literacy can be seen as contributing to an expanded conceptualization of literacy, treating mass media, popular culture and digital media as new types of 'texts' that require analysis and evaluation. By transforming the process of media consumption into an active and critical process, people gain greater awareness of the potential for misrepresentation and manipulation (especially through commercials and public relations techniques), and understand the role of mass media and participatory media in constructing views of reality.
Media literacy education is sometimes conceptualized as a way to address the negative dimensions of mass media, popular culture and digital media, including media violence, gender and racial stereotypes, the sexualization of children, and concerns about loss of privacy, cyberbullying and Internet predators. By building knowledge and competencies in using media and technology, media literacy education may provide a type of protection to children and young people by helping them make good choices in their media consumption habits, and patterns of usage.
Concepts of media education
Media education can be approached in many ways. In general, media education has come to be defined in terms of conceptual understandings of the media. Usually this means key concepts or key aspects. This approach does not specify particular objects of study, and this enables media education to remain responsive to students' interests and enthusiasms. David Buckingham has come up with four key concepts that "provide a theoretical framework which can be applied to the whole range of contemporary media and to 'older' media as well: Production, Language, Representation, and Audience." These concepts are defined by David Buckingham as follows:
Production involves the recognition that media texts are consciously made. Some media texts are made by individuals working alone, just for themselves or their family and friends, but most are produced and distributed by groups of people often for commercial profit. This means recognizing the economic interests that are at stake in media production, and the ways in which profits are generated. More confident students in media education should be able to debate the implications of these developments in terms of national and cultural identities, and in terms of the range of social groups that are able to gain access to media.
Studying media production means looking at:
- Technologies: what technologies are used to produce and distribute media texts?
- Professional practices: Who makes media texts?
- The industry: Who owns the companies that buy and sell media and how do they make a profit?
- Connections between media: How do companies sell the same products across different media?
- Regulation: Who controls the production and distribution of media, and are there laws about this?
- Circulation and distribution: How do texts reach their audiences?
- Access and participation: Whose voices are heard in the media and whose are excluded?
Every medium has its own combination of languages that it uses to communicate meaning. For example, television uses verbal and written language as well as the languages of moving images and sound. Particular kinds of music or camera angles may be used to encourage certain emotions. When it comes to verbal language, making meaningful statements in media languages involves "paradigmatic choices" and "syntagmatic combinations". By analyzing these languages, one can come to a better understanding of how meanings are created.
Studying media languages means looking at:
- Meanings: How does media use different forms of language to convey ideas or meanings?
- Conventions: How do these uses of languages become familiar and generally accepted?
- Codes: How are the grammatical 'rules' of media established and what happens when they are broken?
- Genres: How do these conventions and codes operate in different types of media contexts?
- Choices: What are the effects of choosing certain forms of language, such as a certain type of camera shot?
- Combinations: How is meaning conveyed through the combination or sequencing of images, sounds, or words?
- Technologies: How do technologies affect the meanings that can be created?
The notion of 'representation' is one of the first established principles of media education. The media offer viewers a mediated view of the world; they re-present reality. Media production involves selecting and combining incidents, making events into stories, and creating characters. Media representations allow viewers to see the world in some particular ways and not others. Audiences also compare media with their own experiences and make judgements about how realistic they are. Media representations can be seen as real in some ways but not in others: viewers may understand that what they are seeing is only imaginary and yet they still know it can speak to reality.
Studying media representations means looking at:
- Realism: Is this text intended to be realistic? Why do some texts seem more realistic than others?
- Telling the truth: How do media claim to tell the truth about the world?
- Presence and absence: What is included and excluded from the media world?
- Bias and objectivity: Do media texts support particular views about the world? Do they use moral or political values?
- Stereotyping: How do media represent particular social groups? Are those representations accurate?
- Interpretations: Why do audiences accept some media representations as true, or reject others as false?
- Influences: Do media representations affect our views of particular social groups or issues?
Studying audiences means looking at how demographic audiences are targeted and measured, and how media are circulated and distributed throughout. It means looking at different ways in which individuals use, interpret, and respond to media. The media increasingly have had to compete for people's attention and interest because research has shown that audiences are now much more sophisticated and diverse than has been suggested in the past. Debating views about audiences and attempting to understand and reflect on our own and others' use of media is therefore a crucial element of media education.
Studying media audiences means looking at:
- Targeting: How are media aimed at particular audiences?
- Address: How do the media speak to audiences?
- Circulation: How do media reach audiences?
- Uses: How do audiences use media in their daily lives? What are their habits and patterns of use?
- Making sense: How do audiences interpret media? What meanings do they make?
- Pleasures: What pleasures do audiences gain from media?
- Social differences: What is the role of gender. social class, age, and ethnic background in audience behavior?
UNESCO and media education
UNESCO has long-standing experience with media literacy and education. The organization has supported a number of initiatives to introduce media and information literacy as an important part of lifelong learning. Most recently, the UNESCO Action for Media Education and Literacy brought together experts from numerous regions of the world to "catalyze processes to introduce media and information literacy components into teacher training curricula worldwide."
UNESCO questionnaire
In 2001, a media education survey was sent out by UNESCO in order to better understand which countries were incorporating media studies into their school curricula, as well as to help develop new initiatives in the field of media education. A questionnaire was sent to a total of 72 experts on media education in 52 different countries around the world. The people who received this questionnaire were involved in academia (such as teachers), policy making, and educational advising. The questionnaire addressed three key areas:
1) “Media education in schools: the extent, aims, and conceptual basis of current provision; the nature of assessment; and the role of production by students.”
2) "Partnerships: the involvement of media industries and media regulators in media education; the role of informal youth groups; the provision of teacher education.”
3) “The development of media education: research and evaluation of media education provision; the main needs of educators; obstacles to future development; and the potential contribution of UNESCO.”
The survey results were mixed. Media education had been making very uneven progress: while one country had an abundance of work on media education, another may hardly have heard of the concept. One of the main reasons why media education has not taken hold in some countries is the lack of policy makers addressing the issue. In some developing countries, educators said that media education was only just beginning to register as a concern because they were still establishing basic print literacy.
In the countries where media education existed at all, it was offered as an elective class or an optional area of the school system rather than standing on its own. Many countries argued that media education should not be a separate part of the curriculum but rather should be added to a subject already established. The countries which deemed media education a part of the curriculum included the United States, Canada, Mexico, New Zealand, and Australia. Many countries lacked even basic research on media education as a topic, including Russia and Sweden. Some said that popular culture is not worthy of study. But all of the respondents recognized the importance of media education, as well as the importance of formal recognition from their governments and policy makers that media education should be taught in schools.
Media literacy education is actively focused on the instructional methods and pedagogy of media literacy, integrating theoretical and critical frameworks arising from constructivist learning theory, media studies and cultural studies scholarship. This work has arisen from a legacy of media and technology use in education throughout the 20th century and the emergence of cross-disciplinary work at the intersections of scholarly work in media studies and education. Voices of Media Literacy, a project of the Center for Media Literacy representing first-person interviews with media literacy pioneers active prior to 1990 in English-speaking countries, provides historical context for the rise of the media literacy field and is available at http://www.medialit.org/voices-media-literacy-international-pioneers-speak. Media education is developing in Great Britain, Australia, South Africa, Canada and the United States, with a growing interest in the Netherlands, Italy, Greece, Austria, Switzerland, India, Russia and many other nations. UNESCO has played an important role in supporting media and information literacy by encouraging the development of national information and media literacy policies, including in education. UNESCO has also developed training resources to help teachers integrate information and media literacy into their teaching and provide them with appropriate pedagogical methods and curricula.
United Kingdom
Education for what is now termed media literacy has been developing in the UK since at least the 1930s. In the 1960s, there was a paradigm shift in the field of media literacy to emphasize working within popular culture rather than trying to convince people that popular culture was primarily destructive. This was known as the popular arts paradigm. In the 1970s, there came a recognition that the ideological power of the media was tied to the naturalization of the image. Constructed messages were being passed off as natural ones. The focus of media literacy also shifted to the consumption of images and representations, also known as the representational paradigm. Development has gathered pace since the 1970s when the first formal courses in Film Studies and, later, Media Studies, were established as options for young people in the 14-19 age range: over 100,000 students (about 5% of this age range) now take these courses annually. Scotland has always had a separate education system from the rest of the UK and began to develop policies for media education in the 1980s. In England, the creation of the National Curriculum in 1990 included some limited requirements for teaching about the media as part of English. The UK is widely regarded as a leader in the development of education for media literacy. Key agencies that have been involved in this development include the British Film Institute, the English and Media Centre, Film Education, and the Centre for the Study of Children, Youth and Media at the Institute of Education, London.
In Australia, media education was influenced by developments in Britain related to the inoculation, popular arts and demystification approaches. Key theorists who influenced Australian media education were Graeme Turner and John Hartley who helped develop Australian media and cultural studies. During the 1980s and 1990s, Western Australians Robyn Quin and Barrie MacMahon wrote seminal text books such as Real Images, translating many complex media theories into classroom appropriate learning frameworks. In most Australian states, media is one of five strands of the Arts Key Learning Area and includes "essential learnings" or "outcomes" listed for various stages of development. At the senior level (years 11 and 12), several states offer Media Studies as an elective. For example, many Queensland schools offer Film, Television and New Media, while Victorian schools offer VCE Media. Media education is supported by the teacher professional association Australian Teachers of Media which publishes a range of resources and the excellent Screen Education.
In South Africa, the increasing demand for Media Education has evolved from the dismantling of apartheid and the 1994 democratic elections. The first national Media Education conference in South Africa was actually held in 1990 and the new national curriculum has been in the writing stages since 1997. Since this curriculum strives to reflect the values and principles of a democratic society there seems to be an opportunity for critical literacy and Media Education in Languages and Culture courses.
In areas of Europe, media education has seen many different forms. Media education was introduced into the Finnish elementary curriculum in 1970 and into high schools in 1977. But the media education we know today did not evolve in Finland until the 1990s. Media education has been compulsory in Sweden since 1980 and in Denmark since 1970. In both these countries, media education evolved in the 1980s and 1990s as media education gradually moved away from moralizing attitudes towards an approach that is more searching and pupil-centered. In 1994, the Danish education bill gave recognition to media education but it is still not an integrated part of the school. The focus in Denmark seems to be on information technology.
France has taught film since the inception of the medium, but it is only recently that conferences and media courses for teachers have been organized, with the inclusion of media production. Germany saw theoretical publications on media literacy in the 1970s and 1980s, with a growing interest in media education inside and outside the educational system in the 80s and 90s. In the Netherlands, media literacy was placed on the agenda by the Dutch government in 2006 as an important subject for Dutch society. In April 2008, an official center (mediawijsheid expertisecentrum = media literacy expertise center) was created by the Dutch government. This center is more of a network organization consisting of different partners, each with their own expertise on the subject of media education. The idea is that media education will become a part of the official curriculum.
The history of media education in Russia goes back to the 1920s. The first attempts to teach media education (using press and film materials, with a vigorous emphasis on communist ideology) appeared in the 1920s but were stopped by Joseph Stalin's repressions. The end of the 1950s and the beginning of the 1960s saw the revival of media education in secondary schools, universities and after-school children's centers (Moscow, Saint Petersburg, Voronezh, Samara, Kurgan, Tver, Rostov on Don, Taganrog, Novosibirsk, Ekaterinburg, etc.), as well as the revival of media education seminars and conferences for teachers. While intensive rethinking of media education approaches was under way in the West, in the Russia of the 1970s–1980s media education was still developing within the aesthetic concept. Among the important achievements of the 1970s-1990s were the first official programs of film and media education published by the Ministry of Education, increasing interest in media education among Ph.D. researchers, and experimental theoretical and practical work on media education by O.Baranov (Tver), S.Penzin (Voronezh), G.Polichko, U.Rabinovich (Kurgan), Y.Usov (Moscow), Aleksandr Fyodorov (Taganrog), A.Sharikov (Moscow) and others. Important events in the development of media education in Russia include the registration of a new specialization for the pedagogical universities – ‘Media Education’ (№ 03.13.30) – in 2002, and the launch of a new academic journal, ‘Media Education’, in January 2005, partly sponsored by the ICOS UNESCO ‘Information for All’. Additionally, Internet sites of the Russian Association for Film and Media Education (English and Russian versions) were created. Given that UNESCO defines media education as a priority field of cultural and educational development in the 21st century, media literacy has good prospects in Russia.
In North America, the beginnings of a formalized approach to media literacy as a topic of education are often attributed to the 1978 formation of the Ontario-based Association for Media Literacy (AML). Before that time, instruction in media education was usually the purview of individual teachers and practitioners. Canada was the first country in North America to require media literacy in the school curriculum. Every province has mandated media education in its curriculum. For example, the new curriculum of Quebec mandates media literacy from Grade 1 until the final year of secondary school (Secondary V). The launch of media education in Canada came about for two reasons: concern about the pervasiveness of American popular culture, and the education system's need for contexts for new educational paradigms. Canadian communication scholar Marshall McLuhan ignited the North American educational movement for media literacy in the 1950s and 1960s. Two of Canada's leaders in media literacy and media education are Barry Duncan and John Pungente. Duncan, who remained active in media education even after retiring from classroom teaching, passed away on June 6, 2012. Pungente is a Jesuit priest who has promoted media literacy since the early 1960s.
Media Awareness Network (MNet), a Canadian non-profit media education organization, hosts a Web site which contains hundreds of free lesson plans to help teachers integrate media into the classroom. MNet has also created award-winning educational games on media education topics, several of which are available free from the site, and has conducted original research on media issues, most notably the study Young Canadians in a Wired World. MNet also hosts the Talk Media Blog, a regular column on media education issues.
The United States
Media literacy education has been an interest in the United States since the early 20th century, when high school English teachers first started using film to develop students' critical thinking and communication skills. However, media literacy education is distinct from simply using media and technology in the classroom, a distinction that is exemplified by the difference between "teaching with media" and "teaching about media." In the 1950s and 60s, the ‘film grammar’ approach to media literacy education developed in the United States, where educators began to show commercial films to children, having them learn a new terminology consisting of words such as fade, dissolve, truck, pan, zoom, and cut. Films were connected to literature and history. To understand the constructed nature of film, students explored plot development, character, mood and tone. Then, during the 1970s and 1980s, attitudes about mass media and mass culture began to shift. Around the English-speaking world, educators began to realize the need to “guard against our prejudice of thinking of print as the only real medium that the English teacher has a stake in.” A whole generation of educators began not only to acknowledge film and television as new, legitimate forms of expression and communication, but also to explore practical ways to promote serious inquiry and analysis in higher education, in the family, in schools and in society. Typically, U.S. media literacy education includes a focus on news, advertising, issues of representation, and media ownership. Media literacy competencies can also be cultivated in the home, through activities including co-viewing and discussion.
Media literacy education began to appear in state English education curriculum frameworks by the early 1990s as a result of increased awareness of the central role of visual, electronic and digital media in the context of contemporary culture. Nearly all 50 states have language that supports media literacy in state curriculum frameworks. In 2004, Montana developed educational standards around media literacy that students are required to be competent in by grades 4, 8, and 12. Additionally, an increasing number of school districts have begun to develop school-wide programs, elective courses, and other after-school opportunities for media analysis and production.
Media literacy education is now gaining momentum in the United States because of the increased emphasis on 21st century literacy, which now incorporates media and information literacy, collaboration and problem-solving skills, and emphasis on the social responsibilities of communication. More than 600 educators are members of the National Association for Media Literacy Education (NAMLE), a national membership group that hosts a bi-annual conference. In 2009, this group developed an influential policy document, the Core Principles of Media Literacy Education in the United States. It states, "The purpose of media literacy education is to help individuals of all ages develop the habits of inquiry and skills of expression that they need to be critical thinkers, effective communicators and active citizens in today’s world." Principles include: (1) Media Literacy Education requires active inquiry and critical thinking about the messages we receive and create; (2) Media Literacy Education expands the concept of literacy (i.e., reading and writing) to include all forms of media; (3) Media Literacy Education builds and reinforces skills for learners of all ages. Like print literacy, those skills necessitate integrated, interactive, and repeated practice; (4) Media Literacy Education develops informed, reflective and engaged participants essential for a democratic society; (5) Media Literacy Education recognizes that media are a part of culture and function as agents of socialization; and (6) Media Literacy Education affirms that people use their individual skills, beliefs and experiences to construct their own meanings from media messages.
In the United States, various stakeholders struggle over nuances of meaning associated with the conceptualization of the practice on media literacy education. Educational scholars may use the term critical media literacy to emphasize the exploration of power and ideology in media analysis. Other scholars may use terms like new media literacy to emphasize the application of media literacy to user-generated content or 21st century literacy to emphasize the use of technology tools. As far back as 2001, the Action Coalition for Media Education (ACME) split from the main media literacy organization as the result of debate about whether or not the media industry should support the growth of media literacy education in the United States. Renee Hobbs of Temple University in Philadelphia wrote about this general question as one of the "Seven Great Debates" in media literacy education in an influential 1998 Journal of Communication article.
The media industry has supported media literacy education in the United States. Make Media Matter is one of the many blogs (an “interactive forum”) the Independent Film Channel features as a way for individuals to assess the role media plays in society and the world. The television program, The Media Project, offers a critical look at the state of news media in contemporary society. During the 1990s, the Discovery Channel supported the implementation of Assignment: Media Literacy, a statewide educational initiative for K-12 students developed in collaboration with the Maryland State Board of Education.
Because of the decentralized nature of the education system in a country with 70 million children now in public or private schools, media literacy education develops as the result of groups of advocates in school districts, states or regions who lobby for its inclusion in the curriculum. There is no central authority making nationwide curriculum recommendations and each of the fifty states has numerous school districts, each of which operates with a great degree of independence from one another. However, most U.S. states include media literacy in health education, with an emphasis on understanding environmental influences on health decision-making. Tobacco and alcohol advertising are frequently targeted as objects for "deconstruction," which is one of the instructional methods of media literacy education. This resulted from an emphasis on media literacy generated by the Clinton White House. The Office of National Drug Control Policy (ONDCP) held a series of conferences in 1996 and 1997 which brought greater awareness of media literacy education as a promising practice in health and substance abuse prevention education. The medical and public health community now recognizes the media as a cultural environmental influence on health and sees media literacy education as a strategy to support the development of healthy behavior.
Interdisciplinary scholarship in media literacy education is emerging. In 2009, a scholarly journal was launched, the Journal of Media Literacy Education, to support the work of scholars and practitioners in the field. Universities such as Appalachian State University, Columbia University, Ithaca College, New York University, the University of Texas-Austin, Temple University, and the University of Maryland offer courses and summer institutes in media literacy for pre-service teachers and graduate students. Brigham Young University offers a graduate program in media education specifically for inservice teachers. The Salzburg Academy for Media and Global Change is another institution that teaches students and professionals from around the world about the importance of being literate about the media.
See also
- Information and media literacy
- Information literacy
- Postliterate society
- Visual literacy
- Buckingham, David (2007). Media education: literacy, learning and contemporary culture (Reprinted ed.). Cambridge [u.a.]: Polity Press. ISBN 0745628303.
- Rideout, Livingstone, Bovill. "Children's Usage of Media Technologies". SAGE Publications. Retrieved 2012-04-25.
- The European Charter for Media Literacy. Euromedialiteracy.eu. Retrieved on 2011-12-21.
- See Corporate media and Public service broadcasting
- e.g., Media Literacy Resource Guide.
- Frau-Meigs, D. 2008. Media education: Crossing a mental rubicon. In Empowerment through media education: An intercultural dialogue, ed. Ulla Carlsson, Samy Tayie, Genevieve Jacquinot-Delaunay and Jose Manuel Perez Tornero, (pp. 169–180). Goteborg University, Sweden: The International Clearinghouse on Children, Youth and Media, Nordicom in cooperation with UNESCO, Dar Graphit and Mentor Association.
- "UNESCO Media Literacy". Retrieved 2012-04-25.
- Domaille, Kate; Buckingham, David. "Where Are We Going and How Can We Get there?". Retrieved 2012-04-25.
- Media and Information Literacy. Portal.unesco.org. Retrieved on 2011-12-21.
- Education. BFI (2010-11-03). Retrieved on 2011-12-21.
- English and Media Centre | Home. Englishandmedia.co.uk. Retrieved on 2011-12-21.
- Home. Film Education. Retrieved on 2011-12-21.
- Centre for the Study of Children, Youth and Media at Zerolab.info. Cscym.zerolab.info. Retrieved on 2011-12-21.
- Culver, S., Hobbs, R. & Jensen, A. (2010). Media Literacy in the United States. International Media Literacy Research Forum.
- Hazard, P. and M. Hazard. 1961. The public arts: Multi-media literacy. English Journal 50 (2): 132-133, p. 133.
- Hobbs, R. & Jensen, A. (2009). The past, present and future of media literacy education. Journal of Media Literacy Education 1(1), 1 -11.
- What's Really Best for Learning?
- Hobbs, R. (2005). Media literacy and the K-12 content areas. In G. Schwarz and P. Brown (Eds.) Media literacy: Transforming curriculum and teaching. National Society for the Study of Education, Yearbook 104. Malden, MA: Blackwell (pp. 74 – 99).
- Core Principles of MLE : National Association for Media Literacy Education. Namle.net. Retrieved on 2011-12-21.
- Hobbs, R. (2006) Multiple visions of multimedia literacy: Emerging areas of synthesis. In Handbook of literacy and technology, Volume II. International Reading Association. Michael McKenna, Linda Labbo, Ron Kieffer and David Reinking, Editors. Mahwah: Lawrence Erlbaum Associates (pp. 15 -28).
- Hobbs, R. (1998). The seven great debates in the media literacy movement. Journal of Communication, 48 (2), 9-29.
- Journal of Media Literacy Education
- Fedorov, Alexander. Media Education and Media Literacy. LAP Lambert Academic Publishing, 364 p.
- Study on Assessment Criteria for Media Literacy Levels PDF
- Study on the Current Trends and Approaches to Media Literacy in Europe PDF
- A Journey to Media Literacy Community - A space for collaboration to promote media literacy concepts as well as a learning tool to become media-wise.
- Audiovisual and Media Policies - Media Literacy at the European Commission
- Center for Media Literacy - providing the CML MediaLit Kit with Five Core Concepts and Five Key Questions of media literacy
- EAVI - European Association for Viewers' Interests - Not for profit international organisation working in the field of media literacy
- Information Literacy and Media Education
- National Association for Media Literacy Education
- Project Look Sharp - an initiative of Ithaca College to provide materials, training and support for the effective integration of media literacy with critical thinking into classroom curricula at all education levels.
- Media Education Lab at the University of Rhode Island - Improves the practices of digital and media literacy education through scholarship and community service.
- MED - Associazione italiana per l'educazione ai media e alla comunicazione - the Italian Association for Media Literacy Education. | http://en.wikipedia.org/wiki/Media_education | 13 |
11 | This article works through the creation of a ‘toy’ genetic algorithm which starts with a few hundred random strings and evolves towards the phrase “Hello World!”. It’s a toy example because we know in advance what the optimum solution is – the phrase “Hello World!” – but it provides a nice simple introduction to evolutionary algorithms.
In short, a typical genetic algorithm works like this:
1. Represent solutions as binary strings (called chromosomes).
2. Create a random population of a few hundred candidate solutions.
3. Select two parents from the population and breed them to create two offspring.
4. Sometimes mutate the children in some way.
5. Repeat the selection/breeding/mutation cycle until you reach a desired level of fitness.
In more detail, genetic algorithms are comprised of the following concepts:
- A chromosome which expresses a possible solution to the problem as a string
- A fitness function which takes a chromosome as input and returns a higher value for better solutions
- A population which is just a set of many chromosomes
- A selection method which determines how parents are selected for breeding from the population
- A crossover operation which determines how parents combine to produce offspring
- A mutation operation which determines how random deviations manifest themselves
Now let’s work through a specific problem – the genetic Hello World algorithm:
Constructing The Chromosome
“The human genome itself is just a parts list” – Eric Lander
Exactly how you encode solutions will depend heavily on your problem. We encode solutions as arrays of integers, which can be thought of as analogous to strings of characters. For example we see the array [97,98,99] as equivalent to the string “abc” because the ascii value of ‘a’ is 97, ‘b’ is 98, ‘c’ is 99. So our chromosomes are essentially just strings. It is more usual for a GA to use a binary string for the chromosome.
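For illustration, here is a minimal Python sketch (not code from the original article) of converting between the string form and the integer-array form described above; the helper names encode and decode are assumptions for this sketch:

    def encode(s):
        # "abc" -> [97, 98, 99]
        return [ord(c) for c in s]

    def decode(genes):
        # [97, 98, 99] -> "abc"
        return "".join(chr(g) for g in genes)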
The Fitness Function
“I have called this principle, by which each slight variation, if useful, is preserved, by the term of Natural Selection” – Charles Darwin
In order to know which chromosomes are better solutions, we need a way to judge them. For this, we use a fitness function. Even within one problem, there can be various possible fitness functions. We want strings that are closer to “Hello World!” to be classed as more fit. How could we achieve this? Some possibilities which I considered:
Return a count of the characters from the chromosome which are in the target string. Thus “eHllo World!” would be fitter than “Gello World!”, but exactly as fit as “Hello World!”.
Return a count of characters from the chromosome which are in the right place, thus "HxxloxWxxxxx" would have a fitness of 4, and "Hello World!" would have a fitness of 12. This is better, but too coarse – there are only 13 possible levels of fitness (0 to 12).
The fitness function I decided on was the following:
Return the sum of the character-wise differences between the chromosome and the target string, negated so that a perfect match scores zero and higher values are better. Thus "Hello Xorld!" has a fitness of -1 because X is one letter away from W, while "Hello Yorld!" has a fitness of -2 because Y is two characters away from W. This measure is quite rich; there are many different possible fitness levels, and it's quite expressive. In the code, we implement this like so:
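(The article's original listing is not reproduced here; the following is a minimal Python sketch of the measure just described, assuming chromosomes are ordinary Python strings and using an illustrative TARGET constant.)

    TARGET = "Hello World!"

    def fitness(chromosome, target=TARGET):
        # Character-wise distance to the target, negated so that higher is
        # better and a perfect match scores 0.
        return -sum(abs(ord(c) - ord(t)) for c, t in zip(chromosome, target))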
The Population
“Society exists only as a mental concept; in the real world there are only individuals.” – Oscar Wilde
We must start with an initial population, so we start by creating 400 totally random 12 character strings. To make things more interesting, the strings aren’t just made up of random alphanumerics – instead we pick characters from any of the 256 possible ascii characters.
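A rough Python sketch of this step, assuming plain-string chromosomes and the illustrative names POP_SIZE, CHROMO_LENGTH and random_chromosome, might be:

    import random

    POP_SIZE = 400
    CHROMO_LENGTH = 12  # length of "Hello World!"

    def random_chromosome():
        # Each character is drawn from the full 0-255 range, not just letters.
        return "".join(chr(random.randrange(256)) for _ in range(CHROMO_LENGTH))

    population = [random_chromosome() for _ in range(POP_SIZE)]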
Selection
“Parentage is a very important profession, but no test of fitness for it is ever imposed in the interest of the children.” – George Bernard Shaw
In nature, all kinds of factors contribute towards whether individuals get the chance to pass on their genetic code or not. Beer is often involved.
In genetic algorithms, we can choose from a number of specific selection methods, some of which are outlined here:
We may decide to be quite fascist and only allow the top 25% to breed. This can actually work very well! There are risks however – firstly, some currently-unfit individual may be carrying a trait which would have later helped the population, secondly this will reduce the diversity of the population – you may be breeding out other good approaches. For example, there might be a type of solution which quickly outperforms other candidates at the start, but then can’t go much further, while another type of solution might slowly become fitter and fitter and eventually overtake the first type. By choosing the “elite” method of selection you could make your population susceptible to these problems, especially if there are multiple good answers. It’s not so much a problem if there’s only one correct solution; you won’t need population diversity.
Another approach, often called roulette-wheel or fitness-proportionate selection, is to pick members with a probability according to their fitness; thus fitter candidates are more likely to be selected, but unfit candidates are not precluded from selection. This is a popular method, but if there is an unusually fit candidate it will dominate the mating pool, again reducing diversity.
The selection model I am using is tournament selection. In this, we take two members of the population at complete random and keep the fittest as the first parent, then do the same with another two members and keep the fittest as the other parent. This has the advantage that it's faster than elitism (where you have to rank all candidates by fitness), still favours fitter candidates over unfit ones, and doesn't allow one particularly fit candidate to dominate.
In the future, I’d probably recommend some hybrid selection method which, say, keeps the top 10% (elitism), and uses tournament selection for the remaining 90%; this way the very best candidates have no chance of being lost, but the population remains diverse.
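A minimal sketch of the tournament step described above, reusing the illustrative fitness function from the earlier sketch, might be:

    def tournament_select(population):
        # Pick two members completely at random and keep the fitter one.
        a = random.choice(population)
        b = random.choice(population)
        return a if fitness(a) >= fitness(b) else b

Keeping the fitter of two random picks is enough to bias the mating pool towards better candidates without ever ranking the whole population.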
Crossover
“We are all gifted. That is our inheritance.” – Ethel Waters
Now we've chosen two good candidates, we want to ensure their genes survive in the next generation. Sometimes we'll just clone the parents to create 2 children, but more often (usually between 60% and 100% of the time), we mix the parents' chromosomes together very simply – blindly even – by applying an operation called crossover. Let's say we'd chosen the following parents:
“Hellh Grrld” and “Grllo Worlk”
We choose a random point in the string, say the 4th character, indicated here with a forward slash:
“Hell/h Grrld” and “Grll/o Worlk”
and we just swap everything before that crossover point to create two offspring:
“Grllh Grrld” and “Hello Worlk”
Well, the first child isn't very fit and probably won't get selected the next time, but the second child is a really good candidate – only one character is wrong. Moreover, this second child is fitter than either of the two parents. Way to go, kid!
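A hedged Python sketch of this single-point crossover is below; the CROSSOVER_RATE value is an illustrative choice from the 60-100% range mentioned earlier, not a figure from the article:

    CROSSOVER_RATE = 0.8  # illustrative value from the 60-100% range above

    def crossover(parent1, parent2):
        # Sometimes just clone the parents; otherwise swap everything
        # before a randomly chosen point to create two children.
        if random.random() > CROSSOVER_RATE:
            return parent1, parent2
        point = random.randrange(1, len(parent1))
        child1 = parent2[:point] + parent1[point:]
        child2 = parent1[:point] + parent2[point:]
        return child1, child2

Returning the parents unchanged when crossover is skipped corresponds to the cloning behaviour described above.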
Mutation
“Evolutionary plasticity can be purchased only at the ruthlessly dear price of continuously sacrificing some individuals to death from unfavourable mutations” – Theodosius Dobzhansky
Finally, we sometimes mutate the children just slightly before releasing them back into the population. Let's imagine that we'd created our population of random strings but that none of them started with an "H". No amount of crossover could ever produce the vital "H" we need to reach the perfect solution. So very occasionally, we'll mutate one of the characters in the chromosome, introducing a never-before seen gene just in case it happens to be exactly what we want.
Most mutations won’t be beneficial – if we mutated our “Hello Worlk” child, the chances of randomly producing a beneficial mutation are slim – we have a 1 in 11 chance of choosing the right character; any other position will decrease fitness, and then we have to choose one of the characters between d and k in order to make an improvement; again, anything else will decrease fitness. So mutation usually occurs with a very low probability, sometimes as low as 0.001% depending on the problem. Our mutation rate is high (20%) because we will need to generate new characters, and my approach to mutation limits the destructive potential.
There are a number of ways to approach mutation. In a binary string, we would mutate simply by flipping one of the bits. In our example, I do a slightly more clever mutation. Rather than choosing a random position and changing the character to a random one between 0 and 255, I choose a random position and alter the character that's there by up to 5 places. So a 'k' could become anything between 'f' and 'p', but never anything outside that bracket. This helps to ensure that mutations aren't too destructive – the maximum impact on the fitness score is 5 points.
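A possible Python sketch of this bounded mutation, using the 20% rate from the text and the assumed helper name mutate, is:

    MUTATION_RATE = 0.2

    def mutate(chromosome):
        # Occasionally nudge one character by up to 5 places either way,
        # clamped to the 0-255 range so it stays a valid character.
        if random.random() > MUTATION_RATE:
            return chromosome
        pos = random.randrange(len(chromosome))
        shifted = ord(chromosome[pos]) + random.randint(-5, 5)
        shifted = max(0, min(255, shifted))
        return chromosome[:pos] + chr(shifted) + chromosome[pos + 1:]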
Running the Algorithm
“Life is just one damned thing after another” – Elbert Hubbard
Now we’ve got all the building blocks defined, running the algorithm is simply a matter of performing selection and breeding again and again until either a certain number of generations have gone by, or we reach a desired level of fitness.
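Putting the illustrative sketches above together, a possible main loop (a sketch rather than the article's original code) could be:

    def evolve(max_generations=10000):
        population = [random_chromosome() for _ in range(POP_SIZE)]
        for generation in range(max_generations):
            best = max(population, key=fitness)
            if fitness(best) == 0:  # exact match: "Hello World!"
                return generation, best
            next_population = []
            while len(next_population) < POP_SIZE:
                parent1 = tournament_select(population)
                parent2 = tournament_select(population)
                for child in crossover(parent1, parent2):
                    next_population.append(mutate(child))
            population = next_population[:POP_SIZE]
        return max_generations, max(population, key=fitness)

    print(evolve())

Because the process is stochastic, the number of generations needed to reach "Hello World!" will vary from run to run.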
“In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move.” – Douglas Adams.
Thanks for reading this far. I hope I haven’t misrepresented genetic algorithms, but I’m sure I’ve made a mistake or two along the way. Please let me know in the comments and I’ll try to update the code or text accordingly. | http://www.puremango.co.uk/2010/12/genetic-algorithm-for-hello-world/comment-page-1/ | 13 |
27 | Structural Biochemistry/RNA World Hypothesis
The RNA World Hypothesis speculates that the origin of life began with ribonucleic acid (RNA) because of its ability to serve both as a store of genetic information and as a catalyst with enzymatic activity. It is proposed that RNA preceded the current genetic material, deoxyribonucleic acid (DNA), and led to the evolution of the DNA → RNA → protein world.
There are two schools of thought that both support the RNA World hypothesis:
1. According to the Genetic Takeover Hypothesis, an earlier form of life on earth used RNA as its only genetic component. This proposes that there may have been a pre-RNA molecule that used RNA or by chance created RNA as a side product.
2. The first form of life on earth used RNA as its only genetic component. This theory requires that RNA came from inanimate matter.
The book, The Genetic Code, written in 1967 by Carl Woese, was the first published material that supported RNA World Hypothesis. Francis Crick and Leslie Orgel proposed the idea that RNA once did the work of DNA and proteins in 1968. Their theories were not validated until the work of Nobel Prize laureate Thomas R. Cech. In the 1970s, Cech was studying the splicing of RNA in a single-celled organism, Tetrahymena thermophila, when he discovered that an unprocessed RNA molecule could splice itself. He announced his discovery in 1982 and became the first to show that RNA has catalytic functions. The phrase “RNA World Hypothesis” was then coined later in 1986 by Harvard molecular biologist and Nobel Prize laureate Walter Gilbert as he commented on the recent observations of the catalytic properties of RNA. Another major milestone occurred in 2000 when it was published in Science that "The Ribosome is a Ribozyme" and the proteins in the ribosomes exist primarily on the periphery.
The primordial soup that made up Earth had compounds including nucleotides. These nucleotides sequenced spontaneously and randomly, eventually forming an RNA molecule (or a similar molecule) with catalytic characteristics. RNA has properties of autocatalytic self-replication and assembly, contributing to its exponential increase in number. Scientists today have assumed that replication was not perfect in the time of primitive life, and therefore variations of RNA developed. RNA's catalytic properties do not only apply to itself, but it also catalyzes transesterification—a process necessary for protein synthesis that allows specific peptide sequences and proteins to arise. Some of the peptides formed may have supported the self-replication of RNA and provided the possibility of undergoing modifications. Those modifications led to more efficient sequences of RNA molecules.
A possible model for forming purine and pyrimidine bases was provided by Urey and Miller's experiment. This experiment produced evidence that organic molecules could originate from inorganic ingredients such as carbon dioxide, ammonia, and water. These ingredients were mixed under a reducing environment and subjected to electric sparks (simulating lightning), which over a period of time led to the creation of more reactive molecules, such as hydrogen cyanide and aldehydes, as well as some amino acids and organic acids. These amino acids contributed to the formation of peptide sequences.
http://www.smithlifescience.com/MillersExp.htm (see apparatus)
Support for the RNA World Hypothesis is found in the function of present-day ribosomes. RNA is the ribosome's tool for synthesizing proteins and catalyzing the formation of peptide bonds. This points to the fact that RNA is multifunctional, and can act as a synthesizer, transporter, messenger, and ribosomal molecule.
One may ask: if RNA was the precursor of DNA and proteins, how did this evolution occur? DNA complements the RNA sequence and stores genomic information. Since DNA is a more stable molecule than RNA, it makes sense for DNA to adapt to the environment and take over this job from RNA. And how is DNA more stable than RNA? The difference between the general structures of DNA and RNA is found in the sugar: DNA has a deoxyribose sugar while RNA has a ribose sugar. The missing 2'-OH group on the deoxyribose sugar is what makes DNA more stable, since there is no hydroxyl group for other molecules to react with. RNA, by contrast, does not remain in a stable helix as DNA does, since its chain of nucleotides is more easily broken apart. Another possibility scientists are exploring is the idea that reverse transcriptase played a part in the RNA-to-DNA transformation. This transcriptase, found in retroviruses such as HIV, in addition to RNA replicase, may be the enzyme that performed this transition. Furthermore, the combination of cyanoacetaldehyde and urea formed uracil (U) and cytosine (C), components of the primordial soup. This belief was supported by another of Miller's experiments. There is no evidence at this time that thymine (T), the nitrogenous base in DNA that takes the place of uracil (U) in RNA, was formed from this atmosphere. This suggests that RNA was a predecessor of DNA. In addition, proteins that had formed from RNA were found to be versatile structures, allowing them to take over what was initially RNA's catalytic functioning.
Properties Supporting Hypothesis
Properties that support the RNA World Hypothesis became clearer in the 1980s when it was discovered that RNA can activate and deactivate other molecules by binding with them while folding into specific structures. Before this discovery, researchers believed that RNA had only a few functions. The idea of RNA as the precursor of cellular life has since been studied extensively.
Major evidence that has the scientific community believing that RNA predates DNA and proteins are as follows:
- RNA has the ability to store genetic data, and pass down hereditary information.
- It is the main component linking up DNA and gene formation to amino acids and protein synthesis via transcription and translation.
- RNA's ability to duplicate itself as well as the genetic information it carries, very much like DNA.
- RNA's complexity is less than that of DNA, and involves fewer types of molecules in order to self-replicate.
- DNA requires an RNA primer in order to replicate while RNA does not need any such primer. This shows how DNA seems to depend more on RNA for its continued existence rather than the other way around.
- RNA is able to catalyze reactions as proteins do. The formation of a protein is also administered by RNA which points heavily to its preexistence over the proteins.
- RNA's ability to form double helices similar to DNA, and tertiary structures similar to those of catalytic proteins.
- The structure of RNA, with a hydroxyl group in the 2' position of the sugar molecule, makes it a less stable molecule which is capable of attacking a nearby phosphodiester bond as long as the RNA molecule is in a flexible position and not constrained. This made it susceptible to breakdown and allowed the adoption of different conformations, which perhaps was beneficial to early life.
- RNA's different set of bases, such as uracil, which is “a product of damage to cytosine,” made RNA more prone to mutations, making it more suitable to primitive life in early times.
An experiment supporting the RNA World Hypothesis was the Miller-Urey experiment, performed by Stanley L. Miller and Harold C. Urey in 1953 to see which molecules could have been present at the origin of life. This experiment specifically tested Alexander Oparin's and J.B.S. Haldane's hypothesis that the conditions of prebiotic Earth favored chemical reactions synthesizing organic compounds from inorganic ones. The experiment and the hypothesis together helped scientists around the world better understand the evolution of the Earth and how organic compounds formed. The gases used by Miller and Urey were methane (CH4), ammonia (NH3), hydrogen (H2) and water (H2O), all presumed to be present on prebiotic Earth. These compounds were put inside a sterile array of glass tubes and flasks connected in a loop, with one flask half-full of liquid water and another flask containing a pair of electrodes. The liquid water was heated to induce evaporation, sparks were fired between the electrodes to simulate lightning (thought to be extremely common on the early Earth) through the atmosphere and water vapor, and then the atmosphere was cooled again so that the water could condense and trickle back into the first flask in a continuous cycle. Within a day, the mixture had turned pink in color, and at the end of one week of continuous operation, Miller and Urey observed that as much as 10–15% of the carbon within the system was now in the form of organic compounds. Two percent of the carbon had formed amino acids that are used to make proteins in living cells, with glycine as the most abundant. Sugars and lipids were also formed, but nucleic acids were not formed within the reaction. The common 20 amino acids were, however, formed in various concentrations. This experiment thus showed that organic compounds vital to cellular function and life were easily made under the conditions of prebiotic Earth.
In an interview, Stanley Miller stated: "Just turning on the spark in a basic pre-biotic experiment will yield 11 out of 20 amino acids." This further supported the RNA World Hypothesis.
Other Experiments
The Miller-Urey and Oparin experiments helped launch other experiments to further show that organic compounds formed in early Earth and to also confirm the RNA World Hypothesis.
In 1961, Juan Oro conducted an experiment showing that amino acids could be formed from hydrogen cyanide (HCN) and ammonia (NH3) in an aqueous solution. This experiment also produced adenine, one of the nucleotide bases. This was a major breakthrough because adenine is one of the four bases in DNA and RNA, the genetic material of the cell. Adenine is also part of ATP (adenosine triphosphate), an energy-releasing molecule in cells.
This experiment led to more showing that the other three RNA and DNA bases could be formed through similar experiments of simulated chemical environments with reducing atmospheres.
Properties Opposing Hypothesis
Most of the opponents of the RNA World concentrate on dispelling the idea that RNA was the first form of genetic material, although they do agree that there may have been some other pre-RNA form of genetic material. In summary:
1. Ribose is relatively unstable and difficult to form in a prebiotic mixture. Despite the favorable and controllable conditions that are available in laboratory settings, pre-cellular life has never been created from inanimate matter.
2. The origin of life is thought to have occurred within roughly 300 million years of the Earth becoming hospitable. Some believe that this is too short a time period for the prebiotic soup to evolve into a pre-RNA or RNA World.
3. Lack of evidence of large amounts of polyphosphates in primitive Earth makes it unlikely that it was the source of prebiotic energy or that it was involved in the first genetic material. .
Others believe that RNA is not a likely pre-DNA form of genetic material. Their arguments include:
1. The limited catalytic capabilities of RNA. Theorists say that RNA needed to have had a multitude of catalytic abilities to be able to survive the prebiotic world, but RNA has not shown this. Proteins, on the other hand, do have those catalytic abilities via their varying, enzymatic abilities.
2. The prebiotic simulation of the formation of the RNA molecule has shown some difficulty in that the bases and the sugar molecule do not readily react in water.
3. Opponents advocate proteins over RNA because they are easily formed.
5. Recent research shows that non-coding RNA regions have well-adapted and very specialized roles in the cell. Examples include siRNA and miRNA—they work well in an environment where RNAi and mRNA already exist. Because of their usefulness, which we are just beginning to understand, it is less likely that they are "relics" of the RNA World present in our DNA, as Gilbert originally suggested in 1986.
Alternative Theories
The difficulty of RNA formation has caused other propositions of alternative theories on precursor materials for cellular life:
- Peptide Nucleic Acid (PNA) theory: a nucleic acid with a backbone of peptide bonds, PNA is a likely candidate because it overcomes the problem in RNA theory regarding the difficulty of attaching ribose and phosphate groups together.
- Threose nucleic acids are proposed as a more likely starting material than RNA.
- Glycol nucleic acids are proposed as precursors rather than RNA because they are easily formed.
- Double origin theory suggests that both RNA and proteins existed around the same time independently.
The RNA World hypothesis is supported by RNA's ability to store, transmit, and duplicate genetic information, as DNA does. RNA can also act as a ribozyme (an enzyme made of ribonucleic acid). Because it can reproduce on its own, performing the tasks of both DNA and proteins (enzymes), RNA is believed to have once been capable of independent life. Further, while nucleotides were not found in Miller-Urey's origins of life experiments, they were found by others' simulations, notably those of Joan Oro. Experiments with basic ribozymes, like the viral RNA Q-beta, have shown that simple self-replicating RNA structures can withstand even strong selective pressures (e.g., opposite-chirality chain terminators) (The Basics of Selection (London: Springer, 1997)).
Additionally, in the past a given RNA molecule might have survived longer than it can today. Ultraviolet light can cause RNA to polymerize while at the same time breaking down other types of organic molecules that could have the potential of catalyzing the break down of RNA (ribonucleases), suggesting that RNA may have been a relatively common substance on early Earth. This aspect of the theory is still untested and is based on a constant concentration of sugar-phosphate molecules.
The base cytosine does not have a plausible prebiotic simulation method because it easily undergoes hydrolysis.
Prebiotic simulations making nucleotides have conditions incompatible with those for making sugars (lots of formaldehyde). So they must somehow be synthesized, then brought together. However, they do not react in water. Anhydrous reactions bind with purines, but only 8% of them bind with the correct carbon atom on the sugar bound to the correct nitrogen atom on the base. Pyrimidines, however, do not react with ribose, even anhydrously.
Then phosphate must be introduced, but in nature phosphate in solution is extremely rare because it is so readily precipitated. After being introduced, the phosphate must combine with the nucleoside and the correct hydroxyl must be phosphorylated, in order to create a nucleotide.
For the nucleotides to form RNA, they must be activated themselves (meaning that they must be combined with two more phosphate groups, as in adenosine triphosphate). Activated purine nucleotides form small chains on a pre-existing template of all-pyrimidine RNA. However, this does not happen in reverse because the pyrimidine nucleotides do not stack well.
Additionally, the ribose must all be the same enantiomer, because any nucleotides of the wrong chirality act as chain terminators.
A.G. Cairns-Smith in 1982 criticized writers for exaggerating the implications of the Miller-Urey experiment. He argued that the experiment showed, not the possibility that nucleic acids preceded life, but its implausibility. He claimed that the process of constructing nucleic acids would require 18 distinct conditions and events that would have to occur continually over millions of years in order to build up the required quantities.
Alternative Hypothesis
As mentioned above, a different version of the same hypothesis is "Pre-RNA world", where a different nucleic acid is proposed to pre-date RNA. A candidate nucleic acid is peptide nucleic acid (PNA), which uses simple peptide bonds to link nucleobases. PNA is more stable than RNA, but its ability to be generated under prebiological conditions has yet to be demonstrated experimentally.
Threose nucleic acid (TNA) and glycol nucleic acid (GNA), like PNA, also lack experimental evidence for their respective abiogenesis.
An alternative—or complementary—theory of RNA origin is proposed in the PAH world hypothesis, whereby polycyclic aromatic hydrocarbons are thought to have mediated the synthesis of RNA molecules.
The iron-sulfur world theory proposes that simple metabolic processes developed before genetic materials did, and these energy-producing cycles catalyzed the production of genes.
Yet another alternative theory to the RNA world hypothesis is the panspermia hypothesis. It discusses the possibility that the earliest life on this planet was carried here from somewhere else in the galaxy, possibly on meteorites similar to the Murchison meteorite. This does not invalidate the concept of an RNA world, but posits that this world was not Earth but rather another, probably older, planet.
Implications of the RNA World
The RNA world hypothesis, if true, has important implications for the very definition of life. For the majority of the time following the elucidation of the structure of DNA by Watson and Crick, life was considered as being largely defined in terms of DNA and proteins: DNA and proteins seemed to be the dominant macromolecules in the living cell, with RNA serving only to aid in creating proteins from the DNA blueprint.
The RNA world hypothesis places RNA at center-stage when life originated. This has been accompanied by many studies in the last ten years demonstrating important aspects of RNA function that were not previously known, and supporting the idea of a critical role for RNA in the functionality of life. In 2001, the RNA world hypothesis was given a major boost with the deciphering of the 3-dimensional structure of the ribosome, which revealed the key catalytic sites of ribosomes to be composed of RNA, with the proteins holding no major structural role and being of peripheral functional importance. Specifically, the formation of the peptide bond, the reaction that binds amino acids together into proteins, is now known to be catalyzed by an adenine residue in the rRNA: the ribosome is a ribozyme. This finding suggests that RNA molecules were most likely capable of generating the first proteins. Other interesting discoveries demonstrating a role for RNA beyond a simple message or transfer molecule include the importance of small nuclear ribonucleoproteins (snRNPs) in the processing of pre-mRNA and in RNA editing, and of reverse transcription from RNA in eukaryotes in the maintenance of telomeres in the telomerase reaction.
- ^ Gesteland, R.F., Cech, T.R., Atkins, J.F., 2006, The RNA World: the nature of modern RNA suggests a prebiotic RNA, Cold Spring Harbor Laboratory Press, United States of America, 768 p.
- ^ Gilbert, W., "The RNA World". Nature 618.
- ^ Altman, S. The RNA World
- ^ Lazcano, A., Miller, S.L., "The Origin and Early Evolution of Life: Prebiotic Chemistry, the Pre-RNA World, and Time." Cell, Vol. 85, 793-798, June 14, 1996.
- ^ RNA World Hypothesis at NationMaster.com
- ^ RNA World Hypothesis at ExperienceFestival
- ^ Eddy, S.R., "Non-Coding RNA Genes and the Modern RNA World" Nature Reviews: Genetics Vol. 2, Dec. 2001.
- Nelson, David L. Principles of Biochemistry, 4th ed. W. H. Freeman, 2004.
- Asimov, Isaac (1981). Extraterrestrial Civilizations. Pan Books Ltd. pp. 178. http://www.accessexcellence.org/WN/NM/miller.php | http://en.wikibooks.org/wiki/Structural_Biochemistry/RNA_World_Hypothesis | 13
24 | Human rights in the United States
Human rights in the United States are legally protected by the Constitution of the United States, including the amendments, state constitutions, conferred by treaty, and enacted legislatively through Congress, state legislatures, and state referenda and citizen's initiatives. Federal courts in the United States have jurisdiction over international human rights laws as a federal question, arising under international law, which is part of the law of the United States.
The first human rights organization in the Thirteen Colonies of British America, dedicated to the abolition of slavery, was formed by Anthony Benezet in 1775. A year later, the Declaration of Independence advocated to the monarch of England (who was asserting sovereignty through a divine right of kings) for civil liberties based on the self-evident truth “that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” This view of human liberties postulates that fundamental rights are not granted by a divine or supernatural being to monarchs who then grant them to subjects, but are granted by a divine or supernatural being to each man (but not woman) and are inalienable and inherent.
After the Revolutionary War, the former thirteen colonies, free of the English monarch's claim of sovereignty, went through a pre-government phase of more than a decade, with much debate about the form of government they would have. The United States Constitution, adopted in 1787 through ratification at a national convention and conventions in the colonies, created a republic that guaranteed several rights and civil liberties; the Constitution significantly referred to "Persons", not "Men" as was used in the Declaration of Independence, omitted any reference to the supernatural imagination (such as a "Creator" or "God") and any authority derived or divined therefrom, and allowed "affirmation" in lieu of an "oath" if preferred. The Constitution thus eliminated any requirement of supernatural grant of human rights and provided that they belonged to all Persons (presumably meaning men and women, and perhaps children, although the developmental distinction between children and adults poses issues and has been the subject of subsequent amendments, as discussed below). Some of this conceptualization may have arisen from the significant Quaker segment of the population in the colonies, especially in the Delaware Valley, and their religious views that all human beings, regardless of sex, age, or race or other characteristics, had the same Inner light. Quaker and Quaker-derived views would have informed the drafting and ratification of the Constitution, including through the direct influence of some of the Framers of the Constitution, such as John Dickinson (politician) and Thomas Mifflin, who were either Quakers themselves or came from regions founded by or heavily populated with Quakers.
Dickinson, Mifflin and other Framers who objected to slavery were outvoted on that question, however, and the original Constitution sanctioned slavery (although not based on race or other characteristic of the slave) and, through the Three-Fifths Compromise, counted slaves (who were not defined by race) as three-fifths of a Person for purposes of distribution of taxes and representation in the House of Representatives (although the slaves themselves were discriminated against in voting for such representatives). See Three-Fifths Compromise.
As the new Constitution took effect in practice, concern over individual liberties and concentration of power at the federal level gave rise to the amendment of the Constitution through adoption of the Bill of Rights (the first ten amendments of the Constitution).
Courts and legislatures also began to vary in the interpretation of "Person," with some jurisdictions narrowing the meaning of "Person" to cover only people with property, only men, or only white men. For example, although women had been voting in some states, such as New Jersey, since the founding of the United States, and prior to that in the colonial era, other states denied them the vote. In 1756 Lydia Chapin Taft voted, casting a vote in the local town hall meeting in place of her deceased husband. In 1777 women lost the right to exercise their vote in New York, in 1780 women lost the right to exercise their vote in Massachusetts, and in 1784 women lost the right to exercise their vote in New Hampshire. From 1775 until 1807, the state constitution in New Jersey permitted all persons worth over fifty pounds (about $7,800 adjusted for inflation, with the election laws referring to the voters as "he or she") to vote; provided they had this property, free black men and single women regardless of race therefore had the vote until 1807, but not married women, who could have no independent claim to ownership of fifty pounds (anything they owned or earned belonged to their husbands by the Common law of Coverture). In 1790, the law was revised to specifically include women, but in 1807 the law was again revised to exclude them, an unconstitutional act since the state constitution specifically made any such change dependent on the general suffrage. See Women's suffrage in the United States. Through the doctrine of coverture, many states also denied married women the right to own property in their own name, although most allowed single women (widowed, divorced or never married) the "Person" status of men, sometimes pursuant to the common law concept of a femme sole. Over the years, a variety of claimants sought to assert that discrimination against women in voting, in property ownership, in occupational license, and other matters was unconstitutional given the Constitution's use of the term "Person", but the all-male courts did not give this fair hearing. See, e.g., Bradwell v. Illinois.
In the 1860s, after decades of conflict over southern states' continued practice of slavery and northern states' outlawing of it, the Civil War was fought, and in its aftermath the Constitution was amended to prohibit slavery and to prohibit states' denying rights granted in the Constitution. Among these amendments was the Fourteenth Amendment, which included an Equal Protection Clause that seemed to clarify that courts and states were prohibited from narrowing the meaning of "Persons". After the Fourteenth Amendment to the United States Constitution was adopted, Susan B. Anthony, buttressed by the equal protection language, voted. She was prosecuted for this, however, and ran into an all-male court ruling that women were not "Persons"; the court levied a fine, but it was never collected.
Fifty years later, in 1920, the Constitution was amended again, with the Nineteenth Amendment definitively prohibiting the denial of the vote to women on account of sex.
In the 1970s, the Burger Court made a series of rulings clarifying that discrimination against women in the status of being Persons violated the Constitution, and acknowledged that previous court rulings to the contrary had been sui generis and an abuse of power. The most often cited of these is Reed v. Reed, which held that arbitrary discrimination against either sex in the rights associated with Person status violates the Equal Protection Clause.
The 1970s also saw the adoption of the Twenty-sixth Amendment, which prohibited discrimination on the basis of age, for Persons 18 years old and over, in voting. Other attempts to address the developmental distinction between children and adults in Person status and rights have been addressed mostly by the Supreme Court, with the Court recognizing in 2012, in Miller v. Alabama, that children are different from adults for sentencing purposes.
In the 20th century, the United States took a leading role in the creation of the United Nations and in the drafting of the Universal Declaration of Human Rights, much of which was modeled in part on the U.S. Bill of Rights. Even so, the United States is in violation of the Declaration's provision that "everyone has the right to leave any country", because the government may prevent the entry or exit of anyone from the United States for foreign policy, national security, or child support arrearage reasons by revoking their passport. The United States is also in violation of the United Nations' Convention on the Rights of the Child, which requires both parents to have a relationship with the child. Conflict between the human rights of the child and those of a mother or father who wishes to leave the country without paying child support or performing the personal work of caring for the child can be considered a question of negative and positive rights.
Domestic legal protection structure
According to Human Rights: The Essential Reference, "the American Declaration of Independence was the first civic document that met a modern definition of human rights." The Constitution recognizes a number of inalienable human rights, including freedom of speech, freedom of assembly, freedom of religion, the right to keep and bear arms, freedom from cruel and unusual punishment, and the right to a fair trial by jury.
Constitutional amendments have been enacted as the needs of society evolved. The Ninth Amendment and the Fourteenth Amendment recognize that not all human rights have yet been enumerated. The Civil Rights Act and the Americans with Disabilities Act are examples of human rights being enumerated by Congress well after the Constitution's writing. The scope of the legal protections of human rights afforded by the US government is defined by case law, particularly by the precedent of the Supreme Court of the United States.
Within the federal government, the debate about what may or may not be an emerging human right is held in two forums: the United States Congress, which may enumerate these; and the Supreme Court, which may articulate rights that the law does not spell out. Additionally, individual states, through court action or legislation, have often protected human rights not recognized at federal level. For example, Massachusetts was the first of several states to recognize same sex marriage.
Effect of international treaties
In the context of human rights and treaties that recognize or create individual rights, there are self-executing and non-self-executing treaties. Non-self-executing treaties, which ascribe rights that under the Constitution may be assigned by law, require legislative action to execute the contract (treaty) before they take effect as domestic law. There are also cases that explicitly require legislative approval under the Constitution, such as those that would commit the U.S. to declare war or appropriate funds.
Treaties regarding human rights, which create a duty to refrain from acting in a particular manner or confer specific rights, are generally held to be self-executing, requiring no further legislative action. In cases where legislative bodies refuse to recognize otherwise self-executing treaties by declaring them to be non-self-executing in an act of legislative non-recognition, constitutional scholars argue that such acts violate the separation of powers—in cases of controversy, the judiciary, not Congress, has the authority under Article III to apply treaty law to cases before the court. This is a key issue in cases where Congress declares a human rights treaty to be non-self-executing, for example by contending that it adds nothing to human rights under U.S. domestic law. The International Covenant on Civil and Political Rights is one such case; it was ratified only after more than two decades of inaction, and then with reservations, understandings, and declarations.
The Equal Protection Clause of the Fourteenth Amendment to the United States Constitution guarantees every person the equal protection of the laws. In addition, the Fifteenth Amendment to the United States Constitution prohibits the denial of a citizen's right to vote based on that citizen's "race, color, or previous condition of servitude".
The United States was the first major industrialized country to enact comprehensive legislation prohibiting discrimination on the basis of race and national origin in the workplace, in the Civil Rights Act of 1964 (CRA), at a time when most of the world offered no such recourse for job discrimination. The CRA is perhaps the most prominent civil rights legislation enacted in modern times; it has served as a model for subsequent anti-discrimination laws and has greatly expanded civil rights protections in a wide variety of settings. The United States' 1991 provision of recourse for victims of such discrimination, including punitive damages and full back pay, has virtually no parallel in the legal systems of any other nation.
In addition to individual civil recourse, the United States possesses anti-discrimination government enforcement bodies, such as the Equal Employment Opportunity Commission, while only the United Kingdom and Ireland possess faintly analogous bureaucracies. Beginning in 1965, the United States also began a program of affirmative action that not only obliges employers not to discriminate, but requires them to provide preferences for groups protected under the Civil Rights Act to increase their numbers where they are judged to be underrepresented.
Such affirmative action programs are also applied in college admissions. The United States also prohibits the imposition of any "...voting qualification or prerequisite to voting, or standard, practice, or procedure ... to deny or abridge the right of any citizen of the United States to vote on account of race or color," which prevents the use of grandfather clauses, literacy tests, poll taxes and white primaries.
Prior to the passage of the Thirteenth Amendment to the United States Constitution, slavery was legal in some states of the United States until 1865. Influenced by the principles of the Religious Society of Friends, Anthony Benezet formed the Pennsylvania Abolition Society in 1775, believing that all ethnic groups were equal and that human slavery was incompatible with Christian beliefs. Benezet extended the recognition of human rights to Native Americans and argued for a peaceful solution to the violence between Native and European Americans. Benjamin Franklin became the president of Benezet's abolition society in the late 18th century. In addition, the Fourteenth Amendment was interpreted to permit what was termed "separate but equal" treatment of minorities until the United States Supreme Court overturned this interpretation in 1954, which consequently overturned Jim Crow laws. Native Americans did not gain citizenship rights until the Dawes Act of 1887 and the Indian Citizenship Act of 1924.
Following the 2008 presidential election, Barack Obama was sworn in as the first African-American president of the United States on January 20, 2009. In his Inaugural Address, President Obama stated "A man whose father less than 60 years ago might not have been served at a local restaurant can now stand before you to take a most sacred oath....So let us mark this day with remembrance, of who we are and how far we have traveled".
The Nineteenth Amendment to the United States Constitution prohibits the states and the federal government from denying any citizen the right to vote because of that citizen's sex. While this does not necessarily guarantee all women the right to vote, as suffrage qualifications are determined by individual states, it does mean that states' suffrage qualifications may not prevent women from voting due to their gender.
The United States was the first major industrialized country to enact comprehensive CRA legislation prohibiting discrimination on the basis of gender in the workplace, at a time when most of the world offered no such recourse for job discrimination. The United States' 1991 provision of recourse for discrimination victims, including punitive damages and full back pay, has virtually no parallel in the legal systems of any other nation. In addition to individual civil recourse, the United States possesses anti-discrimination government enforcement bodies, such as the Equal Employment Opportunity Commission, while only the United Kingdom and Ireland possess faintly analogous bureaucracies. Beginning in 1965, the United States also began a program of affirmative action that not only obliges employers not to discriminate, but also requires them to provide preferences for groups protected under the CRA to increase their numbers where they are judged to be underrepresented. Such affirmative action programs are also applied in college admissions.
The United States was also the first country to legally define sexual harassment in the workplace. Because sexual harassment is therefore a Civil Rights violation, individual legal rights of those harassed in the workplace are comparably stronger in the United States than in most European countries. The Selective Service System does not require women to register for a possible military draft and the United States military does not permit women to serve in some front-line combat units.
The United States was the first country in the world to adopt sweeping antidiscrimination legislation for people with disabilities, the Americans with Disabilities Act of 1990 (ADA). The ADA reflected a dramatic shift toward enhancing the labor force participation of qualified persons with disabilities and reducing their dependence on government entitlement programs. The ADA amends the CRA and permits plaintiffs to recover punitive damages. The ADA has been instrumental in the evolution of disability discrimination law in many countries, and has had such an enormous impact on foreign law development that its international impact may be even larger than its domestic impact. Although the Supreme Court has limited the application of ADA Title I against the states, it has extended the ADA's protections to people with Acquired Immune Deficiency Syndrome (AIDS).
Federal benefits such as Social Security Disability Insurance (SSDI) and Supplemental Security Income (SSI) are often administratively treated in the United States as entitlements primarily or near-exclusively for impoverished people with disabilities, and not for those with disabilities who earn significantly above poverty-level income. In practice, a disabled person on SSI without significant employment income who takes a job paying at or above the living wage threshold often discovers that the government benefits they previously received have ceased, because the new job is deemed to eliminate the need for this assistance. The U.S. is the only industrialized country in the world to take this particular approach to physical disability assistance programming.
The Constitution of the United States explicitly recognizes certain individual rights but does not explicitly state any sexual orientation rights. The Fourteenth Amendment has several times been interpreted under the living constitution doctrine to extend rights that were long unrecognized, such as civil rights for people of color, disability rights, and women's rights. There may exist additional gender-related civil rights that are presently not recognized by US law. Some states have recognized sexual orientation rights, which are discussed below.
The United States Federal Government does not have any substantial body of law relating to marriage; these laws have developed separately within each state. The Full Faith and Credit Clause of the US Constitution ordinarily guarantees the recognition of a marriage performed in one state by another. However, Congress passed the Defense of Marriage Act of 1996 (DOMA), which provided that no state (or other political subdivision within the United States) need recognize a marriage between persons of the same sex, even if the marriage was concluded or recognized in another state, and that the Federal Government may not recognize same-sex marriages for any purpose, even if concluded or recognized by one of the states. The US Constitution does not grant the federal government any authority to limit state recognition of sexual orientation rights or protections; DOMA only limits the interstate recognition of individual state laws and does not limit state law in any way. DOMA has been ruled unconstitutional by various US federal courts for violating the 14th Amendment to the United States Constitution (specifically its due process and equal protection clauses) and will potentially be reviewed by the Supreme Court of the United States.
Wisconsin was the first state to pass a law explicitly prohibiting discrimination on the basis of sexual orientation. In 1996, a Hawaii court ruled that same-sex marriage is protected by the Hawaii constitution. Massachusetts, Connecticut, Iowa, Vermont, New Hampshire, New York, Washington, D.C., Washington State, and Illinois are the only jurisdictions that allow same-sex marriage. Same-sex marriage rights were established by the California Supreme Court in 2008, and over 18,000 same-sex couples were married before voters passed Proposition 8 in November 2008, amending the state constitution to deny same-sex couples marriage rights; Proposition 8 was upheld in a May 2009 decision that also allowed existing same-sex marriages to stand.
Privacy is not explicitly mentioned in the United States Constitution. In Griswold v. Connecticut, the Supreme Court ruled that it is implied in the Constitution. In Roe v. Wade, the Supreme Court used privacy rights to overturn most laws against abortion in the United States. In Cruzan v. Director, Missouri Department of Health, the Supreme Court recognized a patient's privacy right to terminate medical treatment. In Gonzales v. Oregon, the Supreme Court held that the federal Controlled Substances Act cannot prohibit physician-assisted suicide allowed by the Oregon Death with Dignity Act. The Supreme Court upheld the constitutionality of criminalizing oral and anal sex in Bowers v. Hardwick, 478 U.S. 186 (1986); however, it overturned that decision in Lawrence v. Texas, 539 U.S. 558 (2003), and established protection for sexual privacy.
The United States maintains a presumption of innocence in legal procedures. The Fourth, Fifth, Sixth, and Eighth Amendments to the United States Constitution deal with the rights of criminal suspects, and the protection was later extended to civil cases as well. In Gideon v. Wainwright, the Supreme Court required that indigent criminal defendants who are unable to afford their own attorney be provided counsel at trial. Since Miranda v. Arizona, the United States requires police departments to inform arrested persons of their rights, in what is now called the Miranda warning, which typically begins with "You have the right to remain silent."
Freedom of religion
The Establishment Clause of the First Amendment prohibits the establishment of a national religion by Congress or the preference of one religion over another. The clause has been used to limit school prayer, beginning with Engel v. Vitale, which ruled government-led prayer in public schools unconstitutional. Wallace v. Jaffree struck down a moment of silence set aside for prayer in public schools. The Supreme Court also ruled clergy-led prayer at public high school graduations unconstitutional in Lee v. Weisman.
The Free Exercise Clause guarantees the free exercise of religion. The Supreme Court's Lemon v. Kurtzman decision established the "Lemon test", which details the requirements for legislation concerning religion. In the Employment Division v. Smith decision, the Supreme Court held that a "neutral law of general applicability" can be used to limit religious exercise. In the City of Boerne v. Flores decision, the Religious Freedom Restoration Act was struck down as exceeding congressional power; however, the decision's effect is limited by the Gonzales v. O Centro Espirita Beneficente Uniao do Vegetal decision, which requires the government to show a compelling interest before prohibiting illegal drug use in religious practices.
Freedom of expression
The United States, like other liberal democracies, is a constitutional republic whose founding documents are intended to restrict the power of government and preserve the liberty of the people. The freedom of expression (including speech, media, and public assembly) is an important right and is given special protection by the First Amendment of the Constitution. According to Supreme Court precedent, the federal and lower governments may not apply prior restraint to expression, with certain exceptions, such as national security and obscenity. There is no law punishing insults against the government, ethnic groups, or religious groups. Symbols of the government or its officials may be destroyed in protest, including the American flag. Legal limits on expression include:
- Solicitation, fraud, specific threats of violence, or disclosure of classified information
- Advocating the overthrow of the U.S. government through speech or publication, or organizing political parties that advocate the overthrow of the U.S. government (the Smith Act)
- Civil offenses involving defamation, fraud, or workplace harassment
- Copyright violations
- Federal Communications Commission rules governing the use of broadcast media
- Crimes involving sexual obscenity in pornography and text-only erotic stories.
- Ordinances requiring organizers of mass demonstrations on public property to register in advance.
- The use of free speech zones and protest-free zones.
- Military censorship of blogs written by military personnel, on the grounds that some include sensitive information ineligible for release. Some critics view military officials as trying to suppress dissent from troops in the field. The US Constitution permits limits on the rights of active-duty members, and this constitutional authority is used to limit speech rights by members in this and in other ways.
In two high profile cases, grand juries have decided that Time magazine reporter Matthew Cooper and New York Times reporter Judith Miller must reveal their sources in cases involving CIA leaks. Time magazine exhausted its legal appeals, and Mr. Cooper eventually agreed to testify. Miller was jailed for 85 days before cooperating. U.S. District Chief Judge Thomas F. Hogan ruled that the First Amendment does not insulate Time magazine reporters from a requirement to testify before a criminal grand jury that is conducting an investigation into the possible illegal disclosure of classified information.
Approximately 30,000 government employees and contractors are currently employed to monitor telephone calls and other communications.
Right to peaceably assemble
Although Americans are supposed to enjoy the freedom to peacefully protest, protesters are sometimes mistreated, beaten, arrested, jailed or fired upon.
On February 19, 2011, Ray McGovern was dragged out of a speech by Hillary Clinton on Internet freedom, in which she said that people should be free to protest without fear of violence. McGovern, who was wearing a Veterans for Peace t-shirt, stood up during the speech and silently turned his back on Clinton. He was then assaulted by undercover and uniformed police, roughed up, handcuffed and jailed. He suffered bruises and lacerations in the attack and required medical treatment.
On May 4, 1970, Ohio National Guardsmen opened fire on protesting students at Kent State University, killing four students. Investigators determined that 28 Guardsmen fired 61 to 67 shots. The Justice Department concluded that the Guardsmen were not in danger and that their claim that they fired in self-defense was untrue. The nearest student was almost 100 yards away at the time of the shooting.
On March 7, 1965, approximately 600 civil rights marchers were violently dispersed by state and local police near the Edmund Pettus Bridge outside of Selma, Alabama.
In June 2009, the ACLU asked the Department of Defense to stop categorizing political protests as "low-level terrorism" in their training courses.
During the fall of 2011, large numbers of protesters taking part in the "Occupy movement" in cities around the country were arrested on various charges during protests for economic and political reforms.
Freedom of movement
As per § 707(b) of the Foreign Relations Authorization Act, Fiscal Year 1979, United States passports are required to enter and exit the country, and as per the Passport Act of 1926 and Haig v. Agee, the Presidential administration may deny or revoke passports for foreign policy or national security reasons at any time. Perhaps the most notable example of enforcement of this ability was the 1948 denial of a passport to U.S. Representative Leo Isacson, who sought to go to Paris to attend a conference as an observer for the American Council for a Democratic Greece, a Communist front organization, because of the group's role in opposing the Greek government in the Greek Civil War.
The United States prevents U.S. citizens from traveling to Cuba, citing national security reasons, as part of an embargo against Cuba that has been condemned as an illegal act by the United Nations General Assembly. The current exception to the ban on travel to the island, permitted since April 2009, is an easing of travel restrictions for Cuban-Americans visiting their relatives. Restrictions remain in place for the rest of the American populace.
On June 30, 2010, the American Civil Liberties Union filed a lawsuit on behalf of ten people who are either U.S. citizens or legal residents of the U.S., challenging the constitutionality of the government's "no-fly" list. The plaintiffs have not been told why they are on the list. Five of the plaintiffs have been stranded abroad. It is estimated that the "no-fly" list contained about 8,000 names at the time of the lawsuit.
The Secretary of State can deny a passport to anyone imprisoned, on parole, or on supervised release for a conviction for international drug trafficking or sex tourism, or to anyone who is behind on their child support payments.
The following case precedents are typically cited in defense of unencumbered travel within the United States:
"The use of the highway for the purpose of travel and transportation is not a mere privilege, but a common fundamental right of which the public and individuals cannot rightfully be deprived." Chicago Motor Coach v. Chicago, 337 Ill. 200; 169 N.E. 22 (1929).
"The right of the citizen to travel upon the public highways and to transport his property thereon, either by carriage or by automobile, is not a mere privilege which a city may prohibit or permit at will, but a common law right which he has under the right to life, liberty, and the pursuit of happiness." Thompson v. Smith, Supreme Court of Virginia, 155 Va. 367; 154 S.E. 579; (1930).
"Undoubtedly the right of locomotion, the right to move from one place to another according to inclination, is an attribute of personal liberty, and the right, ordinarily, of free transit from or through the territory of any State is a right secured by the 14th amendment and by other provisions of the Constitution." Schactman v. Dulles, 225 F.2d 938; 96 U.S. App. D.C. 287 (1955).
"The right to travel is a well-established common right that does not owe its existence to the federal government. It is recognized by the courts as a natural right." Schactman v. Dulles 225 F.2d 938; 96 U.S. App. D.C. 287 (1955) at 941.
"The right to travel is a part of the liberty of which the citizen cannot be deprived without due process of law under the Fifth Amendment." Kent v. Dulles, 357 US 116, 125 (1958).
Freedom of association
Freedom of association is the right of individuals to come together in groups for political action or to pursue common interests.
In 2008, the Maryland State Police admitted that they had added the names of Iraq War protesters and death penalty opponents to a terrorist database. They also admitted that other "protest groups" were added to the terrorist database, but did not specify which groups. It was also discovered that undercover troopers used aliases to infiltrate organizational meetings, rallies and group e-mail lists. Police admitted there was "no evidence whatsoever of any involvement in violent crime" by those classified as terrorists.
National security exceptions
The United States government has declared martial law and suspended (or claimed exceptions to) some rights on national security grounds, typically in wartime and in conflicts such as the United States Civil War, the Cold War, or the War on Terror. 70,000 Americans of Japanese ancestry were legally interned during World War II under Executive Order 9066. In some instances the federal courts have allowed these exceptions, while in others the courts have decided that the national security interest was insufficient. Presidents Lincoln, Wilson, and F.D. Roosevelt ignored such judicial decisions.
Sedition laws have sometimes placed restrictions on freedom of expression. The Alien and Sedition Acts, passed by President John Adams during an undeclared naval conflict with France, allowed the government to punish "false" statements about the government and to deport "dangerous" immigrants. The Federalist Party used these acts to harass supporters of the Democratic-Republican Party. While Woodrow Wilson was president, another broad sedition law, the Sedition Act of 1918, was passed during World War I. It led to the arrest and ten-year sentencing of Socialist Party of America presidential candidate Eugene V. Debs for speaking out against the atrocities of World War I, although he was later released early by President Warren G. Harding. Countless others, labeled as "subversives" (especially the Wobblies), were investigated by the Woodrow Wilson Administration.
Presidents have claimed the power to imprison summarily, under military jurisdiction, those suspected of being combatants for states or groups at war against the United States. Abraham Lincoln invoked this power in the American Civil War to imprison Maryland secessionists. In that case, the courts concluded that only Congress could suspend the writ of habeas corpus, and the government released the detainees. During World War II, the United States interned tens of thousands of Japanese Americans over fears that Japan might use them as saboteurs.
The Fourth Amendment of the United States Constitution forbids unreasonable search and seizure without a warrant, but some administrations have claimed exceptions to this rule to investigate alleged conspiracies against the government. During the Cold War, the Federal Bureau of Investigation established COINTELPRO to infiltrate and disrupt left-wing organizations, including those that supported the rights of black Americans.
National security, as well as other concerns like unemployment, has sometimes led the United States to toughen its generally liberal immigration policy. The Chinese Exclusion Act of 1882 all but banned Chinese immigrants, who were accused of crowding out American workers.
Nationwide Suspicious Activity Reporting Initiative
The federal government has set up a data collection and storage network that keeps a wide variety of data on tens of thousands of Americans who have not been accused of committing a crime. Operated primarily under the direction of the Federal Bureau of Investigation, the program is known as the Nationwide Suspicious Activity Reporting Initiative, or SAR. Reports of suspicious behavior noticed by local law enforcement or by private citizens are forwarded to the program, and profiles are constructed of the persons under suspicion. See also Fusion Center.
Labor rights in the United States have been linked to basic constitutional rights. Consistent with the goal of an economy based upon highly skilled, high-wage labor employed in a capital-intensive, dynamic growth economy, the United States enacted laws mandating the right to a safe workplace, workers' compensation, unemployment insurance, fair labor standards, collective bargaining rights, and Social Security, along with laws prohibiting child labor and guaranteeing a minimum wage. While U.S. workers tend to work longer hours than those in other industrialized nations, lower taxes and more benefits give them a larger disposable income than those of most industrialized nations, although the advantage of lower taxes has been challenged. See: Disposable and discretionary income. U.S. workers are among the most productive in the world. During the 19th and 20th centuries, safer conditions and workers' rights were gradually mandated by law.
In 1935, the National Labor Relations Act recognized and protected "the rights of most workers in the private sector to organize labor unions, to engage in collective bargaining, and to take part in strikes and other forms of concerted activity in support of their demands." However, many states hold to the principle of at-will employment, under which an employee can be fired for any or no reason, without warning and without recourse, unless violation of state or federal civil rights laws can be proven. In 2011, 11.8% of U.S. workers were members of labor unions, with 37% of public sector (government) workers in unions, while only 6.9% of private sector workers were union members.
The Universal Declaration of Human Rights, adopted by the United Nations in 1948, states that “everyone has the right to a standard of living adequate for the health and well-being of oneself and one’s family, including food, clothing, housing, and medical care.” In addition, the Principles of Medical Ethics of the American Medical Association require medical doctors to respect the human rights of the patient, including that of providing medical treatment when it is needed. Americans' rights in health care are regulated by the US Patients' Bill of Rights.
Unlike most other industrialized nations, the United States does not offer most of its citizens subsidized health care. The United States Medicaid program provides subsidized coverage to some categories of individuals and families with low incomes and resources, including children, pregnant women, and very low-income people with disabilities (higher-earning people with disabilities do not qualify for Medicaid, although they do qualify for Medicare). However, according to Medicaid's own documents, "the Medicaid program does not provide health care services, even for very poor persons, unless they are in one of the designated eligibility groups."
Nonetheless, some states offer subsidized health insurance to broader populations. Coverage is subsidized for persons age 65 and over, or who meet other special criteria through Medicare. Every person with a permanent disability, both young and old, is inherently entitled to Medicare health benefits — a fact not all disabled US citizens are aware of. However, just like every other Medicare recipient, a disabled person finds that his or her Medicare benefits only cover up to 80% of what the insurer considers reasonable charges in the U.S. medical system, and that the other 20% plus the difference in the reasonable amount and the actual charge must be paid by other means (typically supplemental, privately held insurance plans, or cash out of the person's own pocket). Therefore, even the Medicare program is not truly national health insurance or universal health care the way most of the rest of the industrialized world understands it.
The Emergency Medical Treatment and Active Labor Act of 1986 mandates that no person may ever be denied emergency services regardless of ability to pay, citizenship, or immigration status. The act, an unfunded mandate, has been criticized on that ground by the American College of Emergency Physicians.
46.6 million residents, or 15.9 percent, were without health insurance coverage in 2005. This number includes about ten million non-citizens, millions more who are eligible for Medicaid but never applied, and 18 million with annual household incomes above $50,000. According to a study led by the Johns Hopkins Children's Center, uninsured children who are hospitalized are 60% more likely to die than children who are covered by health insurance.
The Fourth, Fifth, Sixth and Eighth Amendments of the Bill of Rights, along with the Fourteenth Amendment, ensure that criminal defendants have significant procedural rights that are unsurpassed by any other justice system. The Fourteenth Amendment's incorporation of due process rights extends these constitutional protections to the state and local levels of law enforcement. Similarly, the United States possesses a system of judicial review over government action more powerful than any other in the world.
The USA was the only country in the G8 to have carried out executions in 2011. Three countries in the G20 carried out executions in 2011: China, Saudi Arabia and the USA. The USA and Belarus were the only two of the 56 Member States of the Organization for Security and Cooperation in Europe to have carried out executions in 2011.
Capital punishment is controversial. Death penalty opponents regard the death penalty as inhumane, criticize it for its irreversibility, and assert that it lacks a deterrent effect; studies of deterrence have reached conflicting conclusions, with some claiming to show a deterrent effect and others debunking those claims. According to Amnesty International, "the death penalty is the ultimate, irreversible denial of human rights."
The 1972 US Supreme Court case Furman v. Georgia 408 U.S. 238 (1972) held that arbitrary imposition of the death penalty at the states' discretion constituted cruel and unusual punishment in violation of the Eighth Amendment to the United States Constitution. In California v. Anderson 64 Cal.2d 633, 414 P.2d 366 (Cal. 1972), the Supreme Court of California classified capital punishment as cruel and unusual and outlawed its use in California, until it was reinstated in 1976 after the U.S. Supreme Court rulings Gregg v. Georgia, 428 U.S. 153 (1976), Jurek v. Texas, 428 U.S. 262 (1976), and Proffitt v. Florida, 428 U.S. 242 (1976). As of January 25, 2008, the death penalty has been abolished in the District of Columbia and fourteen states, mainly in the Northeast and Midwest.
The UN special rapporteur recommended to a committee of the UN General Assembly in 1998 that the United States be found to be in violation of Article 6 of the International Covenant on Civil and Political Rights in regard to the death penalty, and called for an immediate capital punishment moratorium. The recommendation of the special rapporteur is not legally binding under international law, and in this case the UN did not act upon it.
Since the reinstatement of the death penalty in 1976 there have been 1077 executions in the United States (as of May 23, 2007). There were 53 executions in 2006. Texas overwhelmingly leads the United States in executions, with 379 executions from 1976 to 2006; the second-highest ranking state is Virginia, with 98 executions.
A ruling on March 1, 2005, by the Supreme Court in Roper v. Simmons prohibits the execution of people who committed their crimes when they were under the age of 18. Between 1990 and 2005, Amnesty International recorded 19 executions in the United States for crimes committed by juveniles.
It is the official policy of the European Union and a number of non-EU nations to achieve global abolition of the death penalty. For this reason the EU is vocal in its criticism of the death penalty in the US and has submitted amicus curiae briefs in a number of important US court cases related to capital punishment. The American Bar Association also sponsors a project aimed at abolishing the death penalty in the United States, stating as among the reasons for their opposition that the US continues to execute minors and the mentally retarded, and fails to protect adequately the rights of the innocent.
Some opponents criticize the over-representation of blacks on death row as evidence of the unequal racial application of the death penalty. This over-representation is not limited to capital offenses: in 1992, although blacks accounted for 12% of the US population, about 34 percent of prison inmates were black. In McCleskey v. Kemp, it was alleged that the capital sentencing process was administered in a racially discriminatory manner in violation of the Equal Protection Clause of the Fourteenth Amendment.
In 2003, Amnesty International reported those who kill whites are more likely to be executed than those who kill blacks, citing of the 845 people executed since 1977, 80 percent were put to death for killing whites and 13 percent were executed for killing blacks, even though blacks and whites are murdered in almost equal numbers.
The United States is seen by social critics, including international and domestic human rights groups and civil rights organizations, as a state that violates fundamental human rights, because it relies more heavily than other countries on crime control, individual behavior control (civil liberties), and societal control of disadvantaged groups through a harsh police and criminal justice system. The U.S. penal system is implemented on the federal, and in particular on the state and local, levels. This social policy has resulted in a high rate of incarceration, which affects Americans from the lowest socioeconomic backgrounds and racial minorities the hardest.
Some have criticized the United States for having an extremely large prison population, where there have been reported abuses. As of 2004 the United States had the highest percentage of people in prison of any nation. There were more than 2.2 million people in prisons or jails, or 737 per 100,000 population, or roughly 1 out of every 136 Americans. According to The National Council on Crime and Delinquency, since 1990 the incarceration of youth in adult jails has increased 208%, and in some states a juvenile is defined as young as 13 years old. The researchers for this report found that juveniles were often incarcerated awaiting trial for up to two years and subjected to the same treatment as mainstream inmates. The incarcerated adolescent is often subjected to a highly traumatic environment during this developmental stage, and the long-term effects are often irreversible and detrimental. "Human Rights Watch believes the extraordinary rate of incarceration in the United States wreaks havoc on individuals, families and communities, and saps the strength of the nation as a whole."
Examples of mistreatment claimed include prisoners left naked and exposed in harsh weather or cold air; "routine" use of rubber bullets and pepper spray; solitary confinement of violent prisoners in soundproofed cells for 23 or 24 hours a day; and a range of harms from serious injury to fatal gunshot wounds, with force at one California prison "often vastly disproportionate to the actual need or risk that prison staff faced." Such behaviors are illegal, and "Professional standards clearly limit staff use of force to that which is necessary to control prisoner disorder."
Human Rights Watch raised concerns about prisoner rape and medical care for inmates. In a survey of 1,788 male inmates in Midwestern prisons by Prison Journal, about 21% claimed they had been coerced or pressured into sexual activity during their incarceration and 7% claimed that they had been raped in their current facility. Tolerance of serious sexual abuse and rape in United States prisons is consistently reported as widespread. It has been fought against by organizations such as Stop Prisoner Rape.
The United States has been criticized for incarcerating a high number of non-violent and victimless offenders; half of all persons incarcerated under state jurisdiction are held for non-violent offenses, and 20 percent are incarcerated for drug offenses, mostly for possession of cannabis.
The United States is the only country in the world allowing the sentencing of young adolescents to life imprisonment without the possibility of parole. There are currently 73 Americans serving such sentences for crimes they committed at the age of 13 or 14. In December 2006 the United Nations took up a resolution calling for the abolition of this kind of punishment for children and young teenagers; 185 countries voted for the resolution and only the United States voted against it.
In a 1999 report, Amnesty International said it had "documented patterns of ill-treatment across the U.S., including police beatings, unjustified shootings and the use of dangerous restraint techniques." According to a 1998 Human Rights Watch report, incidents of police use of excessive force had occurred in cities throughout the U.S., and this behavior goes largely unchecked. An article in USA Today reports that in 2006, 96% of cases referred to the U.S. Justice Department for prosecution by investigative agencies were declined. In 2005, 98% were declined. In 2001, the New York Times reported that the U.S. government is unable or unwilling to collect statistics showing the precise number of people killed by the police or the prevalence of the use of excessive force. Since 1999, at least 148 people have died in the United States and Canada after being shocked with Tasers by police officers, according to a 2005 ACLU report. In one case, a handcuffed suspect was tasered nine times by a police officer before dying, and six of those taserings occurred within less than three minutes. The officer was fired and faced the possibility of criminal charges.
War on Terrorism
Inhumane treatment and torture of captured non-citizens
International and U.S. law prohibits torture and other ill-treatment of any person in custody in all circumstances. However, the United States Government has categorized a large number of people as unlawful combatants, a United States classification used mainly as an excuse to bypass international law, which denies the privileges of prisoner of war (POW) designation of the Geneva Conventions.
Certain practices of the United States military and Central Intelligence Agency have been condemned by some sources domestically and internationally as torture. A fierce debate regarding non-standard interrogation techniques exists within the US civilian and military intelligence community, with no general consensus as to what practices under what conditions are acceptable.
Abuse of prisoners is considered a crime in the United States Uniform Code of Military Justice. According to a January 2006 Human Rights First report, there were 45 suspected or confirmed homicides while in US custody in Iraq and Afghanistan; "Certainly 8, as many as 12, people were tortured to death."
Abu Ghraib prison abuse
In 2004, photos showing humiliation and abuse of prisoners were leaked from Abu Ghraib prison, causing a political and media scandal in the US. The forced humiliation of the detainees included, but was not limited to, nudity, rape, human piling of nude detainees, masturbation, eating food out of toilets, crawling on hands and knees while American soldiers sat on their backs, sometimes requiring them to bark like dogs, and hooking up electrical wires to fingers, toes, and penises. Bertrand Ramcharan, acting UN High Commissioner for Human Rights, stated that while the removal of Saddam Hussein represented "a major contribution to human rights in Iraq" and that the United States had condemned the conduct at Abu Ghraib and pledged to bring violators to justice, "willful killing, torture and inhuman treatment" represented a grave breach of international law and "might be designated as war crimes by a competent tribunal."
In addition to the acts of humiliation, there were more violent claims, such as American soldiers sodomizing detainees (including an event involving an underage boy), an incident where a phosphoric light was broken and the chemicals poured on a detainee, repeated beatings, and threats of death. Six military personnel were charged with prisoner abuse in the Abu Ghraib torture and prisoner abuse scandal. The harshest sentence was handed out to Charles Graner, who received a 10-year sentence to be served in a military prison and a demotion to private; the other offenders received lesser sentences.
In their report The Road to Abu Ghraib, Human Rights Watch describe how:
The severest abuses at Abu Ghraib occurred in the immediate aftermath of a decision by Secretary Rumsfeld to step up the hunt for "actionable intelligence" among Iraqi prisoners. The officer who oversaw intelligence gathering at Guantanamo was brought in to overhaul interrogation practices in Iraq, and teams of interrogators from Guantanamo were sent to Abu Ghraib. The commanding general in Iraq issued orders to "manipulate an internee's emotions and weaknesses." Military police were ordered by military intelligence to "set physical and mental conditions for favorable interrogation of witnesses." The captain who oversaw interrogations at the Afghan detention center where two prisoners died in detention posted "Interrogation Rules of Engagement" at Abu Ghraib, authorizing coercive methods (with prior written approval of the military commander) such as the use of military guard dogs to instill fear that violate the Geneva Conventions and the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment.
Enhanced interrogation and waterboarding
On February 6, 2008, the CIA director General Michael Hayden stated that the CIA had used waterboarding on three prisoners during 2002 and 2003, namely Khalid Shaikh Mohammed, Abu Zubayda and Abd al-Rahim al-Nashiri.
The June 21, 2004, issue of Newsweek stated that the Bybee memo, a 2002 legal memorandum drafted by former OLC lawyer John Yoo that described what sort of interrogation tactics against suspected terrorists or terrorist affiliates the Bush administration would consider legal, was "...prompted by CIA questions about what to do with a top Qaeda captive, Abu Zubaydah, who had turned uncooperative ... and was drafted after White House meetings convened by George W. Bush's chief counsel, Alberto Gonzales, along with Defense Department general counsel William Haynes and David Addington, Vice President Dick Cheney's counsel, who discussed specific interrogation techniques," citing "a source familiar with the discussions." Amongst the methods they found acceptable was waterboarding.
In November 2005, ABC News reported that former CIA agents claimed that the CIA engaged in a modern form of waterboarding, along with five other "enhanced interrogation techniques", against suspected members of al Qaeda.
UN High Commissioner for Human Rights, Louise Arbour, stated on the subject of waterboarding "I would have no problems with describing this practice as falling under the prohibition of torture," and that violators of the UN Convention Against Torture should be prosecuted under the principle of universal jurisdiction.
Bent Sørensen, Senior Medical Consultant to the International Rehabilitation Council for Torture Victims and former member of the United Nations Committee Against Torture has said:
It’s a clear-cut case: Waterboarding can without any reservation be labeled as torture. It fulfils all of the four central criteria that according to the United Nations Convention Against Torture (UNCAT) defines an act of torture. First, when water is forced into your lungs in this fashion, in addition to the pain you are likely to experience an immediate and extreme fear of death. You may even suffer a heart attack from the stress or damage to the lungs and brain from inhalation of water and oxygen deprivation. In other words there is no doubt that waterboarding causes severe physical and/or mental suffering – one central element in the UNCAT’s definition of torture. In addition the CIA’s waterboarding clearly fulfills the three additional definition criteria stated in the Convention for a deed to be labeled torture, since it is 1) done intentionally, 2) for a specific purpose and 3) by a representative of a state – in this case the US.
Lt. Gen. Michael D. Maples, the director of the Defense Intelligence Agency, concurred by stating, in a hearing before the Senate Armed Services Committee, that he believes waterboarding violates Common Article 3 of the Geneva Conventions.
The CIA director testified that waterboarding has not been used since 2003.
In April 2009, the Obama administration released four memos in which government lawyers from the Bush administration approved tough interrogation methods used against 28 terror suspects. The rough tactics range from waterboarding (simulated drowning) to keeping suspects naked and denying them solid food.
These memos were accompanied by the Justice Department's release of four Bush-era legal opinions covering (in graphic and extensive detail) the interrogation of 14 high-value terror detainees using harsh techniques beyond waterboarding. These additional techniques include keeping detainees in a painful standing position for long periods (used often, once for 180 hours), using a plastic neck collar to slam detainees into walls, keeping the detainee's cell cold for long periods, beating and kicking the detainee, placing insects in a confinement box (the suspect had a fear of insects), sleep deprivation, prolonged shackling, and threats to a detainee's family. One of the memos also authorized a method for combining multiple techniques.
Details from the memos also included the number of times that techniques such as waterboarding were used. A footnote said that one detainee was waterboarded 83 times in one month, while another was waterboarded 183 times in a month. This may have gone beyond even what was allowed by the CIA's own directives, which limit waterboarding to 12 times a day. The Fox News website carried reports from an unnamed US official who claimed that these were the number of pourings, not the number of sessions.
Physicians for Human Rights has accused the Bush administration of conducting illegal human experiments and unethical medical research during interrogations of suspected terrorists. The group has suggested this activity was a violation of the standards set by the Nuremberg Trials.
The United States maintains a detention center at its military base at Guantánamo Bay, Cuba, where numerous enemy combatants of the war on terror are held. The detention center has been the source of various controversies regarding the legality of the center and the treatment of detainees. Amnesty International has called the situation "a human rights scandal" in a series of reports. 775 detainees have been brought to Guantánamo. Of these, many have been released without charge. As of March 2013, 166 detainees remain at Guantanamo. The United States assumed territorial control over Guantánamo Bay under the 1903 Cuban-American Treaty, which granted the United States a perpetual lease of the area. The United States, by virtue of its complete jurisdiction and control, maintains "de facto" sovereignty over this territory, while Cuba retains ultimate sovereignty over the territory. The current government of Cuba regards the U.S. presence in Guantánamo as illegal and insists the Cuban-American Treaty was obtained by threat of force in violation of international law.
A delegation of UN Special Rapporteurs to Guantanamo Bay claimed that interrogation techniques used in the detention center amount to degrading treatment in violation of the ICCPR and the Convention Against Torture.
In 2005 Amnesty International expressed alarm at the erosion in civil liberties since the 9/11 attacks. According to Amnesty International:
- The Guantánamo Bay detention camp has become a symbol of the United States administration’s refusal to put human rights and the rule of law at the heart of its response to the atrocities of 11 September 2001. It has become synonymous with the United States executive’s pursuit of unfettered power, and has become firmly associated with the systematic denial of human dignity and resort to cruel, inhuman or degrading treatment that has marked the USA’s detentions and interrogations in the "war on terror".
Amnesty International also condemned the Guantánamo facility as "...the gulag of our times," a characterization that sparked heated debate in the United States. The purported legal status of "unlawful combatants" in those nations currently holding detainees under that name has been the subject of criticism by other nations and international human rights institutions including Human Rights Watch and the International Committee of the Red Cross. The ICRC, in response to the US-led military campaign in Afghanistan, published a paper on the subject. HRW cites two sergeants and a captain accusing U.S. troops of torturing prisoners in Iraq and Afghanistan.
The US government argues that even if detainees were entitled to POW status, they would not have the right to lawyers, access to the courts to challenge their detention, or the opportunity to be released prior to the end of hostilities—and that nothing in the Third Geneva Convention provides POWs such rights, and POWs in past wars have generally not been given these rights. The U.S. Supreme Court ruled in Hamdan v. Rumsfeld on June 29, 2006, that they were entitled to the minimal protections listed under Common Article 3 of the Geneva Conventions. Following this, on July 7, 2006, the Department of Defense issued an internal memo stating that prisoners would in the future be entitled to protection under Common Article 3.
United States citizens and foreign nationals are occasionally captured and abducted outside of the United States and transferred to secret US administered detention facilities, sometimes being held incommunicado for periods of months or years, a process known as extraordinary rendition.
According to The New Yorker, "The most common destinations for rendered suspects are Egypt, Morocco, Syria, and Jordan, all of which have been cited for human-rights violations by the State Department, and are known to torture suspects."
In November 2001, Yaser Esam Hamdi, a U.S. citizen, was captured by Afghan Northern Alliance forces in Konduz, Afghanistan, amongst hundreds of surrendering Taliban fighters, and was transferred into U.S. custody. The U.S. government alleged that Hamdi was there fighting for the Taliban, while Hamdi, through his father, claimed that he was merely there as a relief worker and was mistakenly captured. Hamdi was transferred into CIA custody and taken to the Guantanamo Bay Naval Base, but when it was discovered that he was a U.S. citizen, he was transferred to a naval brig in Norfolk, Virginia, and then to a brig in Charleston, South Carolina. The Bush Administration identified him as an unlawful combatant and denied him access to an attorney or the court system, despite his Fifth Amendment right to due process. In 2002 Hamdi's father filed a habeas corpus petition; the judge ruled in Hamdi's favor and required that he be allowed a public defender, but the decision was reversed on appeal. In 2004, in Hamdi v. Rumsfeld, the U.S. Supreme Court reversed the dismissal of the habeas corpus petition and ruled that detainees who are U.S. citizens must have the ability to challenge their detention before an impartial judge.
In December 2003, Khalid El-Masri, a German citizen, was apprehended by Macedonian authorities while traveling to Skopje because his name was similar to that of Khalid al-Masri, an alleged mentor to the al-Qaeda Hamburg cell. After being held in a motel in Macedonia for over three weeks, he was transferred to the CIA and rendered to Afghanistan. While held in Afghanistan, El-Masri claims he was sodomized, beaten, and repeatedly interrogated about alleged terrorist ties. After he had been in custody for five months, Condoleezza Rice learned of his detention and ordered his release. El-Masri was released at night on a desolate road in Albania, without apology or funds to return home. He was intercepted by Albanian guards, who believed he was a terrorist because of his haggard and unkempt appearance. He was subsequently reunited with his wife, who had returned to her family in Lebanon with their children because she thought her husband had abandoned them. Using isotope analysis, scientists at the Bavarian archive for geology in Munich analyzed his hair and verified that he was malnourished during his disappearance.
According to a September 2012 Human Rights Watch report, during the Bush administration the United States government tortured opponents of Muammar Gaddafi during interrogations, including by waterboarding, and then transferred them to mistreatment in Libya. President Barack Obama has denied the use of water torture.
Unethical human experimentation in the United States
Well-known cases include:
- Albert Kligman's dermatology experiments
- Greenberg v. Miami Children's Hospital Research Institute
- Henrietta Lacks
- Human radiation experiments
- Jesse Gelsinger
- Monster Study
- Moore v. Regents of the University of California
- Medical Experimentation on Black Americans
- Milgram experiment
- Radioactive iodine experiments
- Plutonium injections
- Stanford prison experiment
- Surgical removal of body parts to try to improve mental health
- Tuskegee syphilis experiment
- Willowbrook State School
According to Canadian historian Michael Ignatieff, during and after the Cold War, the United States placed greater emphasis than other nations on human rights as part of its foreign policy, awarded foreign aid to facilitate human rights progress, and annually assessed the human rights records of other national governments.
The U.S. Department of State publishes a yearly report, "Supporting Human Rights and Democracy: The U.S. Record", in compliance with a 2002 law that requires the Department to report on actions taken by the U.S. Government to encourage respect for human rights. It also publishes the yearly "Country Reports on Human Rights Practices". In 2006 the United States created a "Human Rights Defenders Fund" and "Freedom Awards." The "Ambassadorial Roundtable Series", created in 2006, is a set of informal discussions between newly confirmed U.S. Ambassadors and human rights and democracy non-governmental organizations. The United States also supports democracy and human rights through several other tools.
The "Human Rights and Democracy Achievement Award" recognizes the exceptional achievement of officers of foreign affairs agencies posted abroad.
- In 2006 the award went to Joshua Morris of the embassy in Mauritania, who recognized necessary democracy and human rights improvements in Mauritania and made democracy promotion one of his primary responsibilities. He persuaded the Government of Mauritania to re-open voter registration lists to an additional 85,000 citizens, including a significant number of Afro-Mauritanian minority individuals. He also organized and managed the largest youth-focused democracy project in Mauritania in 5 years.
- Nathaniel Jensen of the embassy in Vietnam was runner-up. He successfully advanced the human rights agenda on several fronts, including organizing the resumption of a bilateral Human Rights Dialogue, pushing for the release of Vietnam’s prisoners of concern, and dedicating himself to improving religious freedom in northern Vietnam.
Under legislation passed by Congress, the United States declared that countries utilizing child soldiers may no longer be eligible for US military assistance, in an attempt to end this practice.
The U.S. has signed and ratified the following human rights treaties:
- International Covenant on Civil and Political Rights (ICCPR) (ratified with 5 reservations, 5 understandings, and 4 declarations.)
- Optional protocol on the involvement of children in armed conflict
- International Convention on the Elimination of All Forms of Racial Discrimination
- Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment
- Protocol relating to the Status of Refugees
- Optional Protocol to the Convention on the Rights of the Child on the Sale of Children, Child Prostitution and Child Pornography
Non-binding documents voted for:
International Bill of Rights
The International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) are the legal treaties that enshrine the rights outlined in the Universal Declaration of Human Rights. Together, along with the first and second optional protocols of the ICCPR, they constitute the International Bill of Rights. The US has not ratified the ICESCR or either of the optional protocols of the ICCPR.
The US's ratification of the ICCPR was done with five reservations – or limits – on the treaty, 5 understandings and 4 declarations. Among these is the rejection of sections of the treaty that prohibit capital punishment. Included in the Senate's ratification was the declaration that "the provisions of Article 1 through 27 of the Covenant are not self-executing", and in a Senate Executive Report stated that the declaration was meant to "clarify that the Covenant will not create a private cause of action in U.S. Courts." This way of ratifying the treaty was criticized as incompatible with the Supremacy Clause by Louis Henkin.
Because a reservation that is "incompatible with the object and purpose" of a treaty is void as a matter of international law (Vienna Convention on the Law of Treaties, art. 19, 1155 U.N.T.S. 331, entered into force Jan. 27, 1980, specifying conditions under which signatory States can offer "reservations"), there is some question as to whether the non-self-execution declaration is even legal under domestic law. At any rate, the United States is but a signatory in name only.
International Criminal Court
The U.S. has not ratified the Rome Statute of the International Criminal Court (ICC), which was drafted for prosecuting individuals above the authority of national courts in the event of accusations of genocide, crimes against humanity, war crimes, and crime of aggression. Nations that have accepted the Rome Statute can defer to the jurisdiction of the ICC or must surrender their jurisdiction when ordered.
The US rejected the Rome Statute after its attempts to include the nation of origin as a party in international proceedings failed, and after certain requests were not met, including recognition of gender issues, "rigorous" qualifications for judges, viable definitions of crimes, protection of national security information that might be sought by the court, and jurisdiction of the UN Security Council to halt court proceedings in special cases. Since the passage of the statute, the US has actively encouraged nations around the world to sign "bilateral immunity agreements" prohibiting the surrender of US personnel to the ICC, and has actively attempted to undermine the Rome Statute. The US Congress also passed the American Service-Members' Protection Act (ASPA), authorizing the use of military force to free any US personnel brought before the court rather than tried in the US court system. Human Rights Watch criticized the United States for removing itself from the Statute.
Judge Richard Goldstone, the first chief prosecutor at The Hague war crimes tribunal on the former Yugoslavia, echoed these sentiments saying:
I think it is a very backwards step. It is unprecedented which I think to an extent smacks of pettiness in the sense that it is not going to affect in any way the establishment of the international criminal court...The US have really isolated themselves and are putting themselves into bed with the likes of China, the Yemen and other undemocratic countries.
While the US has maintained that it will "bring to justice those who commit genocide, crimes against humanity and war crimes," its primary objections to the Rome Statute have revolved around the issues of jurisdiction and process. A US ambassador for War Crimes Issues told the US Senate Foreign Relations Committee that because the Rome Statute requires only one nation to submit to the ICC, and because that nation can be the country in which an alleged crime was committed rather than the defendant’s country of origin, U.S. military personnel and US foreign peace workers in more than 100 countries could be tried in international court without the consent of the US. The ambassador stated that "most atrocities are committed internally and most internal conflicts are between warring parties of the same nationality, the worst offenders of international humanitarian law can choose never to join the treaty and be fully insulated from its reach absent a Security Council referral. Yet multinational peacekeeping forces operating in a country that has joined the treaty can be exposed to the court's jurisdiction even if the country of the individual peacekeeper has not joined the treaty."
Other treaties not signed or signed but not ratified
Where the signature is subject to ratification, acceptance or approval, the signature does not establish the consent to be bound. However, it is a means of authentication and expresses the willingness of the signatory state to continue the treaty-making process. The signature qualifies the signatory state to proceed to ratification, acceptance or approval. It also creates an obligation to refrain, in good faith, from acts that would defeat the object and the purpose of the treaty.
The U.S. has not ratified the following international human rights treaties:
- First Optional Protocol to the International Covenant on Civil and Political Rights (ICCPR)
- Second Optional Protocol to the International Covenant on Civil and Political Rights, aiming at the abolition of the death penalty
- Optional Protocol to CEDAW
- Optional Protocol to the Convention against Torture
- Convention relating to the Status of Refugees (1951)
- Convention Relating to the Status of Stateless Persons (1954)
- Convention on the Reduction of Statelessness (1961)
- International Convention on the Protection of the Rights of All Migrant Workers and Members of their Families
The US has signed but not ratified the following treaties:
- Convention on the Elimination of All Forms of Discrimination against Women (CEDAW)
- Convention on the Rights of the Child (CRC)
- International Covenant on Economic, Social and Cultural Rights
Non-binding documents voted against:
Inter-American human rights system
The US is a signatory to the 1948 American Declaration of the Rights and Duties of Man and has signed but not ratified the 1969 American Convention on Human Rights. It is a party to the Inter-American Convention on the Granting of Political Rights to Women (1948). It does not accept the adjudicatory jurisdiction of the Costa Rica-based Inter-American Court of Human Rights.
- Protocol to the American Convention on Human Rights to Abolish the Death Penalty (1990)
- Additional Protocol to the American Convention on Human Rights in the Area of Economic, Social and Cultural Rights
- Inter-American Convention to Prevent and Punish Torture (1985)
- Inter-American Convention on the Prevention, Punishment and Eradication of Violence Against Women (1994)
- Inter-American Convention on Forced Disappearance of Persons (1994)
- Inter-American Convention on the Elimination of All Forms of Discrimination against Persons with Disabilities
Coverage of violations in the media
Studies have found that The New York Times' coverage of worldwide human rights violations is seriously biased, predominantly focusing on the human rights violations in nations where there is clear U.S. involvement, while having relatively little coverage of the human rights violations in other nations. Amnesty International's Secretary General Irene Khan explains, "If we focus on the U.S. it's because we believe that the U.S. is a country whose enormous influence and power has to be used constructively ... When countries like the U.S. are seen to undermine or ignore human rights, it sends a very powerful message to others."
According to Freedom in the World, an annual report by US based think-tank Freedom House, which rates political rights and civil liberties, in 2007, the United States was ranked "Free" (the highest possible rating), together with 92 other countries.
According to the annual Worldwide Press Freedom Index published by Reporters Without Borders, due to wartime restrictions the United States was ranked 53rd from the top in 2006 (out of 168), 44th in 2005, 22nd in 2004, 31st in 2003 and 17th in 2002.
According to the annual Corruption Perceptions Index, which was published by Transparency International, the United States was ranked 20th from the top least corrupt in 2006 (out of 163), 17th in 2005, 18th in 2003, and 16th in 2002.
According to the Gallup International Millennium Survey, the United States ranked 23rd in citizens' perception of human rights observance when its citizens were asked, "In general, do you think that human rights are being fully respected, partially respected or are they not being respected at all in your country?"
In the aftermath of the devastation caused by Hurricane Katrina, some groups commenting on human rights issues criticized the recovery and reconstruction effort. The American Civil Liberties Union and the National Prison Project documented mistreatment of the prison population during the flooding, while United Nations Special Rapporteur Doudou Diène delivered a 2008 report on such issues. The United States was elected in 2009 to sit on the United Nations Human Rights Council (UNHRC), which the U.S. State Department had previously asserted had lost its credibility through its prior stances and its lack of safeguards against severe human rights violators taking a seat. In 2006 and 2007, the UNHCR and Martin Scheinin criticized the United States with regard to permitting executions by lethal injection, housing children in adult jails, subjecting prisoners to prolonged isolation in supermax prisons, using enhanced interrogation techniques, and domestic poverty gaps.
- Ellis, Joseph J. (1998) . American Sphinx: The Character of Thomas Jefferson. Vintage Books. p. 63. ISBN 0-679-76441-0.
- Lauren, Paul Gordon (2007). "A Human Rights Lens on U.S. History: Human Rights at Home and Human Rights Abroad". In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: Portraits of the Movement III. Praeger Publishers. p. 4. ISBN 0-275-98824-4.
- Brennan, William, J., ed. Schwartz, Bernard, The Burger Court: counter-revolution or confirmation?, Oxford University Press US, 1998,ISBN 0-19-512259-3, page 10
- Schneebaum, Steven M. (Summer, 1998). "Human rights in the United States courts: The role of lawyers". Washington & Lee Law Review. Retrieved 2009-06-10.
- Declaration of Independence
- Henkin, Louis; Rosenthal, Albert J. (1990). Constitutionalism and rights: the influence of the United States constitution abroad. Columbia University Press. pp. 2–3. ISBN 0-231-06570-1.
- Morgan, Edmund S. (1989). Inventing the People: The Rise of Popular Sovereignty in England and America. W. W. Norton & Company. ISBN 0393306232.
- See, e.g., "Article VI". U.S. Constitution. 1787. "The Senators and Representative before mentioned, and the Members of the several State Legislatures, and all executive and judicial Officers, both of the United Sates and of the several States, shall be bound by Oath or Affirmation, to support this Constitution; but no religious Test shall ever be required as a Qualification to any Office or public Trust under the United States."
- See, e.g., Fischer, David Hackett. Albion's Seed: Four British Folkways in America. Oxford University Press. pp. 490–498. ISBN 978-0-19-506905-1. "On the subject of gender, the Quakers had a saying: 'In souls, there is no sex.'"
- "Blackstone Valley". Blackstonedaily.com. Retrieved June 29, 2011.
- "Blackstone Valley". Blackstonedaily.com. Retrieved June 29, 2011.
- "Women in Politics: A Timeline". International Women's Democracy Center. Retrieved January 3, 2012.
- "The Liz Library Presents: The Woman Suffrage Timeline". Thelizlibrary.org. August 26, 1920. Retrieved June 29, 2011.
- Ignatieff, Michael (2005). "Introduction: American Exceptionalism and Human Rights". American Exceptionalism and Human Rights. Princeton University Press. ISBN 0-691-11648-2.
- National Coordinating Committee for UDHR (1998). "Drafting and Adoption: The Universal Declaration of Human Rights". Franklin and Eleanor Roosevelt Institute. Retrieved 02-07-2008.
- See Haig v. Agee, Passport Act of 1926, 18 U.S.C. § 1185(b) and the Personal Responsibility and Work Opportunity Act of 1996
- Devine, Carol; Carol Rae Hansen, Ralph Wilde, Hilary Poole (1999). Human rights: The essential reference. Greenwood Publishing Group. pp. 26–29. ISBN 1-57356-205-X. Retrieved 6/11/2009.
- Leebrick, Kristal (2002). The United States Constitution. Capstone Press. pp. 26–39. ISBN 0-7368-1094-3.
- Burge, Kathleen (2003-11-18). "SJC: Gay marriage legal in Mass.". Boston Globe.
- Foster v. Neilson, 27 U.S. 253, 314-15 (1829) U.S. Supreme Court, Chief Justice Marshall writing: “Our constitution declares a treaty to be the law of the land. It is, consequently, to be regarded in courts of justice as equivalent to an act of the legislature, whenever it operates of itself without the aid of any legislative provision. But when the terms of the stipulation import a contract, when either of the parties engages to perform a particular act, the treaty addresses itself to the political, not the judicial department, and the legislature must execute the contract before it can become a rule for the Court.” at 314, cited in Martin International Human Rights and Humanitarian Law et al.
- Martin, F.International Human Rights and Humanitarian Law. Cambridge University Press. 2006. p. 221 and following. ISBN 0-521-85886-0, ISBN 978-0-521-85886-1
- McWhirter,Darien A.m Equal Protection', Oryx Press, 1995, page 1
- Gould, William B., Agenda for Reform: The Future of Employment Relationships and the Law< MIT Press, 1996, ISBN 0-262-57114-5, page 27
- Nivola, Pietro S. (2002). Tense commandments: federal prescriptions and city problems,. Brookings Institution Press. pp. 127–8. ISBN 0-8157-6094-9.
- Capozzi, Irene Y., The Civil Rights Act: background, statutes and primer, Nova Publishers, 2006, ISBN 1-60021-131-3, page 6
- James W. Russell, Double standard: social policy in Europe and the United States, Rowman & Littlefield, 2006, ISBN 0-7425-4693-4, pages 147-150
- Lauren, Paul Gordon (2003). "My Brother's and Sister's Keeper: Visions and the Birth of Human Rights". The Evolution of International Human Rights: Visions Seen (Second ed.). University of Pennsylvania Press. p. 33. ISBN 0-8122-1854-X.
- Benezet also stated that "Liberty is the right of every human creature, as soon as he breathes the vital air. And no human law can deprive him of the right which he derives from the law of nature." Grimm, Robert T., Anthony Benezet (1716–1784), Notable American Philanthropists: Biographies of Giving and Volunteering, Greenwood Publishing Group, 2002, ISBN 1-57356-340-4, pages 26-28
- Vorenberg, Michael, Final Freedom: The Civil War, the Abolition of Slavery, and the Thirteenth Amendment, Cambridge University Press, 2001, page 1
- Martin, Waldo E. (1998). Brown v. Board of Education: a brief history with documents. Palgrave Macmillan. pp. 3&231. ISBN 0-312-12811-8.
- Brown v. Board of Education, 98 F. Supp. 797 (August 3, 1951).
- "Barack Obama Becomes 44th President of the United States". America.gov. Retrieved 2009-05-23.
- Nineteenth Amendment, CRS/LII Annotated Constitution
- Zippel, Kathrina, SEXUAL HARASSMENT AND TRANSNATIONAL RELATIONS:WHY THOSE CONCERNED WITH GERMAN-AMERICAN RELATIONS SHOULD CARE, American Institute for Contemporary German Studies, The Johns Hopkins University, 2002, page 6
- Employment Discrimination Law Under Title VII, Oxford University Press US, 2008, ISBN 0-19-533898-7
- Rostker v. Goldberg, 453 U.S. 57 (1981)
- Midgley, James, and Michelle Livermore, The Handbook of Social Policy, SAGE, 2008, ISBN 1-4129-5076-7, page 448
- Blanck, Peter David and David L. Braddock, The Americans with Disabilities Act and the emerging workforce: employment of people with mental retardation, AAMR, 1998, ISBN 0-940898-52-7, page 3
- Capozzi, Irene Y., The Civil Rights Act: background, statutes and primer, Nova Publishers, 2006, ISBN 1-60021-131-3, page 60-61
- Lawson, Anna, Caroline Gooding, Disability rights in Europe: from theory to practice, Hart Publishing, 2005, ISBN 1-84113-486-4, page 89
- Jones, Nancy Lee, The Americans with Disabilities Act (ADA): overview, regulations and interpretations, Nova Publishers, 2003, ISBN 1-59033-663-1, pages 7-13
- "Flashpoints USA: God and Country". PBS. 2007-01-27. Retrieved 2007-06-03.
- HRC: Wisconsin Non-Discrimination Law Wisconsin law explicitly prohibits discrimination based on sexual orientation in employment, housing, public education, credit and public accommodations. Citations: WIS. STAT. §36.12, § 106.50, § 106.52§ 111.31, § 230.18, § 224.77.
- Egelko, Bob (2009-02-04). "State high court to hear Prop. 8 case March 5". San Francisco Chronicle. Retrieved 2009-02-12.
- "California Supreme Court filings pertaining to Proposition 8". Retrieved 2009-01-14.
- McCarthy v. Arndstein, 266 U.S. 34 (1924)
- "100 Documents That Shaped America:President Franklin Roosevelt's Annual Message (Four Freedoms) to Congress (1941)". U.S. News & World Report. U.S. News & World Report, L.P. Retrieved 2008-04-11.
- Near v. Minnesota
- "Dennis vs. United States". Audio Case Files. Retrieved 2008-09-06.
- "U.S. Army clamping down on soldiers' blogs". Reuters (CNN). 2007-05-02. Archived from the original on 2007-05-22. Retrieved 2007-05-27.
- "Soldiers' Iraq Blogs Face Military Scrutiny". NPR. 2004-08-24. Retrieved 2007-06-14.
- Borland, John (2001-02-26). "Battle lines harden over Net copyright". CNET. Retrieved 2007-05-28.
- "Fatal Flaws in the Bipartisan Campaign Reform Act of 2002" (PDF). Brookings Institution. Archived from the original on 2007-06-16. Retrieved 2007-05-27.
- Fareed Zakaria (September 4, 2010). "What America Has Lost". Newsweek.
- Rachel Quigley (February 19, 2011). "War veteran, 71, dragged out for staging silent protest during Hillary Clinton address... on freedom of speech". Daily Mail.
- "Anti-Bush protesters sue over arrests". Herald Tribune. August 7, 2003.
- Jarrett Murphy (September 3, 2004). "A Raw Deal For RNC Protesters?". CBS News.
- Rick Hampson (May 4, 2010). "1970 Kent State shootings are an enduring history lesson". USA Today.
- "Selma-to-Montgomery March". National Park Service. Retrieved February 20, 2011.
- "Pentagon Exam Calls Protests 'Low-Level Terrorism,' Angering Activists". Fox News Channel. June 17, 2009.
- "Dozens of Occupy protesters arrested in Texas, Oregon". CNN News. October 31, 2011.
- Pub.L. 95–426, 92 Stat. 993, enacted October 7, 1978, 18 U.S.C. § 1185(b)
- Haig v. Agee, 453 U.S. 280 (1981), at 302
- "FOREIGN RELATIONS: Bad Ammunition". Time. 12 April 1948.
- "UN condemns US embargo on Cuba". BBC News. 12 Nov. 2007. Retrieved 14 Apr. 2009. http://news.bbc.co.uk/2/hi/americas/2455923.stm
- Padgett, Tim. "Will Obama Open Up All U.S. Travel to Cuba?" Time Magazine. 14 Apr. 2009. Retrieved 14 Apr. 2009.
- Scott Shane (July 1, 2010). "A.C.L.U. Sues Over No-Fly List". The New York Times.
- Abbie Boudreau and Scott Zamost (July 14, 2010). "Thousands of sex offenders receive U.S. passports". CNN.
- "COINTELPRO". PBS. Retrieved June 25, 2010.
- Lisa Rein (October 8, 2008). "Md. Police Put Activists' Names On Terror Lists". The Washington Post.
- Constitutional Dictatorship: Crisis Government in the Modern Democracies. Clinton Rossiter. 2002. Page X. ISBN 0-7658-0975-3
- Dana Priest and William M. Arkin (December 20, 2010). "Monitoring America: How the U.S. Sees You". CBS News.
- NWI Right to Organize
- Azari-Rad, Hamid; Philips, Peter; Prus, Mark J. (2005). The economics of prevailing wage laws. Ashgate Publishing, Ltd. p. 3. ISBN 0-7546-3255-5.
- United States of America Working conditions, Information about Working conditions in United States of America
- A Curriculum of United States Labor History for Teachers
- "Union Members Summary". U.S. Dept. of Labor, Bureau of Labor Statistics. January 27, 2012.
- Steven Greenhouse (January 27, 2012). "Union Membership Rate Fell Again in 2011". The New York Times.
- National Health Care for the Homeless Council. "Human Rights, Homelessness and Health Care".
- American Medical Association. "Principles of medical ethics".
- Overview - What is Not Covered, U.S. Department of Health & Human Services
- Centers for Medicare & Medicaid Services: Emergency Medical Treatment & Labor Act
- American College of Emergency Physicians Fact Sheet: EMTALA. Retrieved 2007-11-01.
- Rowes, Jeffrey (2000). "EMTALA: OIG/HCFA Special Advisory Bulletin Clarifies EMTALA, American College of Emergency Physicians Criticizes It". Journal of Law, Medicine & Ethics 28 (1): 9092. Archived from the original on 2008-01-29. Retrieved 2008-01-02.
- "The number of uninsured Americans is at an all-time high". CBPP. 2006-08-29. Retrieved 2007-05-28.
- N. Gregory Mankiw (November 4, 2007). "Beyond Those Health Care Numbers". The New York Times.
- "Lack of Insurance May Have Figured In Nearly 17,000 Childhood Deaths, Study Shows". John Hopkins Children's Center. October 29. 2009.
- Lieberman, Jethro Koller (1999). A practical companion to the Constitution: how the Supreme Court has ruled on issues from abortion to zoning. University of California Press. p. 382. ISBN 0-520-21280-0.
- Lieberman, Jethro Koller, A practical companion to the Constitution: how the Supreme Court has ruled on issues from abortion to zoning, University of California Press, 1999, ISBN 0-520-21280-0, page 6
- Death sentences and executions in 2011 Amnesty International March 2012
- Dan Malone (Fall 2005). Cruel and Unusual: Executing the mentally ill. Amnesty International Magazine.
- "Abolish the death penalty". Amnesty International. Retrieved 2008-01-25.
- "The Death Penalty and Deterrence". Amnestyusa.org. 2008-02-22. Retrieved 2009-05-23.
- Sheila Berry (2000-09-22). "Death Penalty No Deterrent". Truthinjustice.org. Retrieved 2009-05-23.
- "John W. Lamperti | Capital Punishment". Math.dartmouth.edu. 1973-03-10. Retrieved 2009-05-23.[dead link]
- "Discussion of Recent Deterrence Studies | Death Penalty Information Center". Deathpenaltyinfo.org. Retrieved 2009-05-23.
- Melissa S. Green (May 2005). "History of the Death Penalty & Recent Developments". Justice Center, University of Alaska Anchorage. Retrieved 2008-01-25.
- "Death Penalty Policy By State". Death Penalty Information Center. Retrieved 2008-01-25.
- Rights Watch (1998). Death Penalty Issue Addressed by Special Rapporteur XXXV (2). UN Chronicle.
- Death Penalty Info
- Death Penalty Info: Executions by Year
- List of individuals executed in Texas
- List of individuals executed in Virginia
- "S court bans juvenile executions". BBC News. 2005-03-01. Retrieved 2007-06-03.
- "Executions of child offenders since 1990". Amnesty International. Retrieved 2007-06-03.
- "Abolition of the Death Penalty". The EU's Human rights & Democratisation Policy. Retrieved 2007-06-02.
- "Death Penalty Moratorium Implementation Project". The American Bar Association. Retrieved 2008-01-25.
- "Why a moratorium?". American Bar Association (Death Penalty Moratorium Implementation Project). Retrieved 2008-01-25.
- Free, Marvin D. Jr. (November 1997). "The Impact of Federal Sentencing Reforms on African Americans". Journal of Black Studies 28 (2): 268–286. ISSN 0021-9347. JSTOR 2784855.
- "Death Penalty Discrimination: Those Who Murder Whites Are More Likely To Be Executed". Associated Press (CBS News). 2003-04-24. Retrieved 2007-06-03.
- Amnesty International, Human Rights in United States of America, Amnesty International
- "One in 100: Behind Bars in America 2008", Pew Research Center
- "One in 31: The Long Reach of American Corrections", Pew Research Center, released March 2, 2009
- Tuhus-Dubrow, Rebecca (2003-12-19). "Prison Reform Talking Points". The Nation. Retrieved 2007-05-27.
- [The Consequences Aren’t Minor, The Impact of Trying Youth as Adults and Strategies for Reform - A Campaign for Youth Justice Report March 2007 pg 7.]
- "Facts about Prisons and Prisoners" (PDF). The Sentencing Project. December 2006. Retrieved 2007-05-27.
- Fellner, Jamie. "US Addiction to Incarceration Puts 2.3 Million in Prison". Human Rights Watch. Retrieved 2007-06-02.
- Speech by Bonnie Kerness, January 14, 2006, before the United Nations Committee on the Elimination of Discrimination Against Women
- Journal of Law & Policy Vol 22:145 - http://law.wustl.edu/Journal/22/p145Martin.pdf
- Amnesty International Report 1998
- "Inhumane Prison Conditions Still Threaten Life, Health of Alabama Inmates Living with HIV/AIDS, According to Court Filings". Human Rights Watch. Retrieved 2006-06-13.
- Cindy Struckman-Johnson & David Struckman-Johnson (2000). "Sexual Coercion Rates in Seven Midwestern Prisons for Men" (PDF). The Prison Journal.
- Abramsky, Sasha (January 22, 2002). Hard Time Blues: How Politics Built a Prison Nation. Thomas Dunne Books.
- Hardaway, Robert (October 30, 2003). No Price Too High: Victimless Crimes and the Ninth Amendment. Praeger Publishers. ISBN 0-275-95056-5.
- "Prisoners in 2005" (PDF). United States Department of Justice: Office of Justice Programs. November 2006. Archived from the original on 2007-04-09. Retrieved 2007-06-03.
- "America's One-Million Nonviolent Prisoners". Center on Juvenile and Criminal Justice. Retrieved 2007-06-003.
- "Race, Rights and Police Brutality". Amnesty International USA. 1999. Retrieved 2007-12-22.
- "Report Charges Police Abuse in U.S. Goes Unchecked". Human Rights Watch. July 7, 1998. Retrieved 2007-12-22.
- Johnson, Kevin (2007-12-17). "Police brutality cases on rise since 9/11". USA Today. Retrieved 2007-12-22.
- Butterfield, Fox (2001-04-29). "When the Police Shoot, Who's Counting?". The New York Times. Retrieved 2007-12-22.
- "Unregulated Use of Taser Stun Guns Threatens Lives, ACLU of Northern California Study Finds". ACLU. October 6, 2005. Retrieved 2007-12-22.
- "Man dies after cop hits him with Taser 9 times". CNN. undated article. Retrieved 2008-09-06.
- "Human Rights Watch: Summary of International and U.S. Law Prohibiting Torture and Other Ill-treatment of Persons in Custody". May 24, 2004. Retrieved 2007-05-27.
- ICRC official statement: The relevance of IHL in the context of terrorism, 21 July 2005
- "CIA's Harsh Interrogation Techniques Described". 2005-11-18. Retrieved 2007-05-27.
- "Conclusions and recommendations of the Committee against Torture" (PDF). The United Nations Committee against Torture. 2006-05-19. Archived from the original on 2006-12-11. Retrieved 2007-06-02.
- "Non-standard interrogation techniques" are alleged to have at times included:
Extended forced maintenance of "stress positions" such as standing or squatting; psychological tricks and "mind games"; sensory deprivation; exposure to loud music and noises; extended exposure to flashing lights; prolonged solitary confinement; denigration of religion; withholding of food, drink, or medical care; withholding of hygienic care or toilet facilities; prolonged hooding; forced injections of unknown substances; sleep deprivation; magneto-cranial stimulation resulting in mental confusion; threats of bodily harm; threats of rendition to torture-friendly states or Guantánamo; threats of rape or sodomy; threats of harm to family members; threats of imminent execution; prolonged constraint in contorted positions (including strappado, or "Palestinian hanging"); facial smearing of real or simulated feces, urine, menstrual blood, or semen; sexual humiliation; beatings, often requiring surgery or resulting in permanent physical or mental disability; release or threat of release to attack dogs, both muzzled or un-muzzled; near-suffocation or asphyxiation via multiple detainment hoods, plastic bags, water-soaked towels or blankets, duct tape, or ligatures; gassing and chemical spraying resulting in unconsciousness; confinement in small chambers too small to fully stand or recline; underwater immersion just short of drowning (i.e. dunking); and extended exposure to extreme temperatures below freezing or above 120 °F (48 °C).
- "Human Rights First Releases First Comprehensive Report on Detainee Deaths in U.S. Custody". Human Rights First. 2006-02-22. Retrieved 2007-05-28.
- Higham, Scott; Stephens, Joe (2004-05-21). "New Details of Prison Abuse Emerge". The Washington Post. p. A01. Retrieved 2007-06-23.
- "UN Says Abu Ghraib Abuse Could Constitute War Crime", By Warren Hoge, New York Times, June 4, 2004
- "Prisoner Abuse: The Accused". ABC News. Retrieved 2007-05-28.
- The Road to Abu Ghraib, Human Rights Watch
- Price, Caitlin. "CIA chief confirms use of waterboarding on 3 terror detainees". Jurist Legal News & Research. University of Pittsburgh School of Law. Retrieved 2008-05-13.
- "CIA finally admits to waterboarding". The Australian. 2008-02-07. Retrieved 2008-02-18.
- Hirsh, Michael; John Barry, Daniel Klaidman (2004-06-21). "A tortured debate: amid feuding and turf battles, lawyers in the White House discussed specific terror-interrogation techniques like 'water-boarding' and 'mock-burials'". Newsweek. Retrieved 2007-12-20.
- "Waterboarding qualifies as torture: UN". Retrieved 2008-02-24.
- Bent Sørensen on waterboarding as torture
- Former member of UN Committee Against Torture: "Yes, waterboarding is torture" International Rehabilitation Council for Torture Victims, February 12, 2008
- Violating international law Army Official: Yes, Waterboarding Breaks International Law By Paul Kiel, Talking Points Memo, February 27, 2008
- White House defends waterboarding; CIA chief uncertain, Associated Press, February 7, 2008
- No charges against CIA officials for waterboarding: WTOP, April 16, 2009
- BBC website, CIA torture exemption 'illegal', Sunday, 19 April 2009
- The Guardian, Obama releases Bush torture memos, 16 April 2009
- "Justice Department Memos on Interrogation Techniques". The New York Times. Retrieved 2009-04-30.
- BBC Today Programme, 20 April 2009
- Despite Reports, Khalid Sheikh Mohammed Was Not Waterboarded 183 Times, Joseph Abrams, Fox News Channel, April 28, 2009
- Bob Drogin (June 8, 2010). "Physicians group accuses CIA of testing torture techniques on detainees". Los Angeles Times.
- "Evidence Indicates that the Bush Administration Conducted Experiments and Research on Detainees to Design Torture Techniques and Create Legal Cover". Physicians for Human Rights. June 7, 2010.
- Monbiot, George. One rule for them.
- In re Guantanamo Detainee Cases, 355 F.Supp.2d 443 (D.D.C. 2005).
- "Guantánamo Bay - a human rights scandal". Amnesty International. Archived from the original on 2006-02-06. Retrieved 2006-03-15.
- Julian, Finn and Julie Tate (2013-03-16). "Guantanamo Bay detainees’ frustrations simmering, lawyers and others say". The Washington Post. Retrieved 2013-03-18. "A majority of the 166 detainees remaining at Guantanamo Bay are housed in Camp 6"
- Amy, Goodman (2013-03-14). "Prisoner protest at Guantánamo Bay stains Obama's human rights record". The Guardian. Retrieved 2013-03-18. "Prisoner letters and attorney eyewitness accounts, however, support the claim that well over 100 of the 166 Guantánamo prisoners are into at least the second month of the strike."
- De Zayas, Alfred. (2003.) The Status of Guantánamo Bay and the Status of the Detainees.
- ECONOMIC, SOCIAL AND CULTURAL RIGHTS CIVIL AND POLITICAL RIGHTS Situation of detainees at Guantánamo Bay Report of the Chairperson-Rapporteur of the Working Group on Arbitrary Detention, Leila Zerrougui; the Special Rapporteur on the independence of judges and lawyers, Leandro Despouy; the Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, Manfred Nowak; the Special Rapporteur on freedom of religion or belief, Asma Jahangir; and the Special Rapporteur on the right of everyone to the enjoyment of the highest attainable standard of physical and mental health, Paul Hunt
- "Guantánamo and beyond: The continuing pursuit of unchecked executive power". Amnesty International. 2005-05-13. Archived from the original on 2007-05-09. Retrieved 2007-05-29.
- The legal situation of unlawful/unprivileged combatants (IRRC March 2003 Vol.85 No 849). See Unlawful combatant.
- "New Account of Torture by U.S. Tropps, Soldiers Say Failures by Command Led to Abuse". Human Rights Watch. 2005-09-24. Retrieved 2007-05-29.
- "Huckabee Says Guantanamo Bay Offers Better Conditions to Detainees Than Most U.S. Prisons - You Decide 2008". Fox News Channel. 2007-06-11. Retrieved 2009-05-23.
- "Guantanamo Detainees Info Sheet #1 – November 14, 2005" (PDF). White House. Retrieved 2007-11-17.
- "Hamdan v. Rumsfeld" (PDF). June 29 2006. Retrieved 2007-02-10.
- "US detainees to get Geneva rights". BBC. 2006-07-11. Retrieved January 5, 2010.
- "White House: Detainees entitled to Geneva Convention protections". CNN. 2006-07-11.[dead link]
- "White House Changes Gitmo Policy". CBS News. 2006-07-11.
- Mayer, Jane (2005-02-14). "Outsourcing Torture". The New Yorker. Retrieved 2007-05-29.
- Markon, Jerry (2006-05-19). "Lawsuit Against CIA is Dismissed". The Washington Post. Retrieved 2007-05-29.
- Georg Mascolo, Holger Stark: The US Stands Accused of Kidnapping. SPIEGEL ONLINE, February 14, 2005
- "Map of Freedom in the World". freedomhouse.org. 2004-05-10. Retrieved 2009-05-23.
- US: Torture and Rendition to Gaddafi’s Libya, Human Rights Watch, September 6, 2012 (http://www.hrw.org/news/2012/09/05/us-torture-and-rendition-gaddafi-s-libya)
- Delivered Into Enemy Hands US-Led Abuse and Rendition of Opponents to Gaddafi’s Libya Human Rights Watch 2012
- "HRW: USA käytti vesikidutusta libyalaisiin" [HRW: The US used water torture on Libyans], Yle, 6 September 2012 (in Finnish)
- "Supporting Human Rights and Democracy: The U.S. Record". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-06-22.
- "Human Rights". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-05-28.
- "International Human Rights Week". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Archived from the original on 2007-05-09. Retrieved 2007-05-28.
- "Ambassadorial Roundtable Series". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Archived from the original on 2007-05-09. Retrieved 2007-05-28.
- "Bureau of Democracy, Human Rights, and Labor". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-06-22.
- "2006 Human Rights and Democracy Achievement Award". United States Department of State: Bureau of Democracy, Human Rights, and Labor. Retrieved 2007-06-22.
- Jo Becker, children’s rights advocate (2008-12-11). "US Limits Military Aid to Nations Using Child Soldiers | Human Rights Watch". Human Rights Watch. Retrieved 2009-05-23.
- U.S. reservations, declarations, and understandings, International Covenant on Civil and Political Rights, 138 Cong. Rec. S4781-01 (daily ed., April 2, 1992).
- Status of US international treaty ratifications
- DPI Press Kit
- "OHCHR International law". OHCHR. Retrieved 2009-06-23.
- UN OHCHR Fact Sheet No.2 (Rev.1), The International Bill of Human Rights
- OHCHR Reservations and declarations on ratificatons
- America'S Problem With Human Rights
- 138 Cong. Rec. S4781-84 (1992)
- S. Exec. Rep. No. 102-23 (1992)
- Louis Henkin, U.S. Ratification of Human Rights Treaties: The Ghost of Senator Bricker, 89 Am. J. Int’l L. 341, 346 (1995)
- 98/07/23 Amb. Scheffer on international criminal court
- Coalition for the ICC
- Human Rights Watch - The US and the International Criminal Court
- Human Rights Watch, “U.S.: 'Hague Invasion Act' Becomes Law.” 3 August 2002. Retrieved 8 January 2007.
- John Sutherland, “Who are America's real enemies?” The Guardian, 8 July 2002. Retrieved 8 January 2007.
- BBC News
- Vienna Convention on the Law of Treaties between States and International Organizations or between International Organizations 1986 Article 18.
- Declaration on the Rights of Indigenous Peoples
- Basic Documents - Ratifications of the Convention
- Organization of American States
- "All the News That's Fit to Print? New York Times Coverage of Human-Rights Violations". The Harvard International Journal of Press 4 (4, Fall 1999): 48–69. Retrieved 2007-05-28.
- "Paper presented at the annual meeting of the American Political Science Association". All Academic, Inc. 2006-08-31. Retrieved 2007-05-28.
- Satter, Raphael (2007-05-24). "Report hits US on human rights". Associated Press (published on Globe). Retrieved 2007-05-29.
- Polity IV data sets
- "North Korea, Eritrea and Turkmenistan are the world’s "black holes" for news". Reporters Without Borders. October 2005. Retrieved 2007-05-29.
- "East Asia and Middle East have worst press freedom records". Reporters Without Borders. October 2004. Retrieved 2007-05-29.
- "Cuba second from last, just ahead of North Korea". Reporters Without Borders. October 2003. Retrieved 2007-05-29.
- "Reporters Without Borders publishes the first worldwide press freedom index". Reporters Without Borders. October 2002. Retrieved 2007-05-29.
- "Leading surveillance societies in the EU and the World 2007". Privacy International. December 2007. Retrieved 2009-01-01.
- "Universal Human Rights?". Gallup International. 1999.
- Klapper, Bradley (2006-07-28). "U.N. Panel Takes U.S. to Task Over Katrin". The America's Intelligence Wire. Associated Press.
- 26. The Committee, while taking note of the various rules and regulations prohibiting discrimination in the provision of disaster relief and emergency assistance, remains concerned about information that poor people and in particular African-Americans, were disadvantaged by the rescue and evacuation plans implemented when Hurricane Katrina hit the United States of America, and continue to be disadvantaged under the reconstruction plans. (articles 6 and 26) The State party should review its practices and policies to ensure the full implementation of its obligation to protect life and of the prohibition of discrimination, whether direct or indirect, as well as of the United Nations Guiding Principles on Internal Displacement, in the areas of disaster prevention and preparedness, emergency assistance and relief measures. In the aftermath of Hurricane Katrina, it should increase its efforts to ensure that the rights of poor people and in particular African-Americans, are fully taken into consideration in the reconstruction plans with regard to access to housing, education and health care. The Committee wishes to be informed about the results of the inquiries into the alleged failure to evacuate prisoners at the Parish prison, as well as the allegations that New Orleans residents were not permitted by law enforcement officials to cross the Greater New Orleans Bridge to Gretna, Louisiana. See: "Concluding Observations of the Human Rights Committee on the Second and Third U.S. Reports to the Committee (2006).". Human Rights Committee. University of Minnesota Human Rights Library. 2006-07-28.
- "Hurricane Katrina and the Guiding Principles on Internal Displacement" (PDF). Institute for Southern Studies. January 2008. pp. 18–19. Retrieved 2009-05-18. See also: Sothern, Billy (2006-01-02). "Left to Die". The Nation. pp. 19–22.
- "Report says U.S. Katrina response fails to meet its own human rights principles". New Orleans CityBusiness. 2008-01-16. See also: "Hurricane Katrina and the Guiding Principles on Internal Displacement" (PDF). Institute for Southern Studies. January 2008. Retrieved 2009-05-18.
- "Report of the Special Rapporteur". United Nations Human Rights Council. April 28, 2009. p. 30. Retrieved 2009-05-24.
- "U.S. Elected To U.N. Human Rights Council". ACLU. Retrieved 2009-06-06.
- "Daily Press Briefing". United States Department of State. 2007-05-06. Retrieved 2006-06-24.[dead link]
- United Nations General Assembly Verbatim Report meeting 72 session 60 page 5, Mr. Toro Jiménez Venezuela on 15 March 2006 at 11:00 (retrieved 2007-09-19)
- "U.N. Torture Committee Critical of U.S.". Human Rights Watch. 2006-05-19. Retrieved 2007-06-14.
- "Conclusions and recommendations of the Committee" (PDF).
- Leopold, Evelyn (2007-05-25). "U.N. expert faults U.S. on human rights in terror laws". Reuters. Retrieved 2007-06-03. Also published on The Boston Globe, on Yahoo News, and on ABC News.
- Rizvi, Haider. Racial Poverty Gaps in U.S. Amount to Human Rights Violation, Says U.N. Expert. OneWorld.net (published on CommonDreams). 2005-11-30. Retrieved on 2007-08-13. (archived link)
- Ignatieff, Michael (2005). American Exceptionalism and Human Rights. Princeton University Press. ISBN 0-691-11648-2.
- Ishay, Micheline (2008). The History of Human Rights: From Ancient Times to the Globalization Era (Second ed.). University of California Press. ISBN 0-520-25641-7.
- Lauren, Paul Gordon (2003). The Evolution of International Human Rights: Visions Seen (Second ed.). University of Pennsylvania Press. ISBN 0-8122-1854-X.
- Olyan, Saul M.; Martha C. Nussbaum (1998). Sexual Orientation and Human Rights in American Religious Discourse. New York: Oxford University Press. ISBN 0-19-511942-8.
- Rhoden, Nancy Lee; Ian Kenneth Steele (2000). The Human Tradition in the American Revolution. Rowman & Littlefield. ISBN 0-8420-2748-3.
- Shapiro, Steven R.; Human Rights Watch, American Civil Liberties Union (1993). Human Rights Violations in the United States: A Report on U.S. Compliance with The International Covenant on Civil and Political Rights. Human Rights Watch. ISBN 1-56432-122-3.
- ed. by Cynthia Soohoo ... (2007). In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: A History of Human Rights in the United States I. Praeger Publishers. ISBN 0-275-98822-8.
- ed. by Cynthia Soohoo ... (2007). In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: From Civil Rights to Human Rights II. Praeger Publishers. ISBN 0-275-98823-6.
- Quigley, William; Sharda Sekaran (2007). "A Call for the Right to Return in the Gulf Coast". In Soohoo, Cynthia; Albisa, Catherine; Davis, Martha F. Bringing Human Rights Home: Portraits of the Movement III. Praeger Publishers. pp. 291–304. ISBN 0-275-98824-4.
- Weissbrodt, David; Connie de la Vega (2007). International Human Rights Law: An Introduction. University of Pennsylvania Press. ISBN 0-8122-4032-4.
- Yount, David (2007). How the Quakers Invented America. Rowman & Littlefield. ISBN 0-7425-5833-9.
- Human Rights in the US and the International Community, UNL Initiative on Human Rights & Human Diversity site—research and study source directed at secondary and post-secondary students
- Freedom in the World 2006: United States from Freedom House
- Human Rights from United States Department of State
- United States: Human Rights World Report 2006 from Human Rights Watch | http://en.mobile.wikipedia.org/wiki/Human_rights_in_the_United_States | 13 |
16 | COMPLETING YOUR SCIENCE FAIR PROJECT
• CONDUCT YOUR EXPERIMENT:
During experimentation, keep detailed notes of each and every experiment,
measurement and observation in a logbook. Do not rely on memory!
What questions arise from your research? Take plenty of pictures
during your experiment. Be sure to have a definite way to measure
the changes you may observe. Besides, judges love logbooks!
Use tables or charts to record your quantitative data.
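If you also want a digital copy of your logbook tables, the sketch below shows one
possible way to record measurements as they are taken; the column names and numbers
are placeholders for whatever your own experiment measures, and a spreadsheet would
work just as well.

```python
# A minimal sketch of a digital logbook table (placeholder data).
# Rename the columns to match the variables in your own experiment.
import csv
from datetime import date

measurements = [
    # (trial, value you changed, value you measured, notes)
    (1, 10, 4.2, "room temperature 21 C"),
    (2, 20, 7.9, "room temperature 21 C"),
    (3, 30, 12.1, "slight draft from the window"),
]

with open("logbook.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "trial", "independent", "dependent", "notes"])
    for trial, changed, measured, notes in measurements:
        writer.writerow([date.today().isoformat(), trial, changed, measured, notes])
```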
• ANALYZE YOUR RESULTS:
When you complete your experiments, examine and
organize your findings. Use appropriate graphs
to help visualize your data (one possible plotting
sketch follows the questions below). Identify patterns
from the graphs. This will help you answer your testable question:
1. Did your experiments give you the expected results? Why or why not?
2. Was your experiment performed with the exact same steps each time? Are there other explanations that you had not considered or observed?
3. Were there experimental errors in your data taking, experimental design or observations?
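As referenced above, here is one minimal way to plot your results and look for a
pattern. It assumes a single independent and a single dependent variable, uses the
third-party matplotlib library, and the numbers are invented placeholders rather
than real measurements.

```python
# A minimal plotting sketch (requires matplotlib: pip install matplotlib).
# The data points are placeholders -- substitute your own measurements.
import matplotlib.pyplot as plt

independent = [10, 20, 30, 40, 50]         # the variable you changed
dependent = [4.2, 7.9, 12.1, 15.8, 20.3]   # the averaged results you measured

plt.scatter(independent, dependent, label="trial averages")
plt.xlabel("Independent variable (units)")
plt.ylabel("Dependent variable (units)")
plt.title("Results of experiment")
plt.legend()
plt.savefig("results.png")   # include the graph on your display board
```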
Remember that understanding errors is a key skill
scientists must develop. In addition, reporting
that a suspected variable did not change the results
can still be valuable information. That is just as
important a “discovery” as if there were some
change due to the variable. Also, statistically
analyze your data using statistical tests that you
understand (Student's T-Test or Chi-Squared, etc.)
and be able to explain their meaning, especially to the judges!
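As one concrete illustration of the statistics mentioned above, the sketch below
runs a Student's t-test using the third-party SciPy library. The two groups and
their values are invented placeholders; depending on your data, a different test
(for example Chi-Squared) may be more appropriate, so treat this only as a starting point.

```python
# A minimal sketch of a Student's t-test comparing a control group with an
# experimental group. The numbers are made-up placeholders -- replace them
# with your own measurements. Requires SciPy (pip install scipy).
from scipy import stats

control = [4.1, 3.9, 4.3, 4.0, 4.2]        # measurements with no variable change
experimental = [5.0, 4.8, 5.3, 4.9, 5.1]   # measurements with the variable changed

t_stat, p_value = stats.ttest_ind(control, experimental)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A common rule of thumb: if p < 0.05, the difference between the groups is
# unlikely to be due to chance alone -- but be ready to explain this to the judges.
```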
• DRAWING CONCLUSIONS:
1. Did the variable(s) tested cause a change when compared to the standard you are using?
2. What patterns do you see from your graph analysis that demonstrate a relationship between your variables? Which variables are important?
3. Did you collect enough data? Testing once or twice is not enough to get a valid result or to ensure that your findings are statistically significant.
4. Do you need to conduct more experimentation? Keep an open mind – never alter results to fit a hypothesis.
5. If your results do not support your hypothesis, that’s OK and in some cases good! Try to explain why you obtained different results than your literature research predicted for you.
6. Were there sources of error that may have caused these differences? If so,
identify them. These may be factors you didn’t consider before the experiment,
or limitations to your ability to measure the results. Even if the results do
differ, you have still accomplished successful scientific research because you
have taken a question and attempted to discover the answer through quantitative
testing. This is the way knowledge is obtained in the world of science. Think
about how your project could be used in the real world. Finally, explain how
you would improve the experiment and what you would do differently. | http://www.sussexcountysciencefair.org/completing.htm | 13 |
10 | Chapter 29 vocabularyyy
|reflection||the bouncing back of a particle or wave that strikes the boundary between two media.|
|normal||a line perpendicular to a surface|
|angle of incidence||angle between an incident ray and the normal to a surface|
|angle of reflection||angle between a reflected ray and the normal to a surface|
|law of reflection||the angle of incidence for a wave that strikes a surface is equal to the angle of reflection|
|virtual image||an image formed through reflection or refraction that can be seen by an observer but cannot be projected on a screen because light from the object does not actually come to a focus|
|diffuse reflection||the reflection of waves in many directions from a rough surface|
|reverberations||Persistence of a sound, as in an echo, due to multiple reflections.|
|refraction||The change in direction of a wave as it crosses the boundary between two media in which the wave travels at different speeds|
|wave fronts||The crest, trough, or any continuous portion of a 2-D or 3-D wave in which the vibrations are all the same way at the same time.|
|mirage||A floating image that appears in the distance and is due to the refraction of light in Earth's atmosphere|
|dispersion||the separation of light into colors arranged according to their frequency|
|critical angle||the minimum angle of incidence for which a light ray is totally reflected within a medium.|
|total internal reflection||the 100% reflection (with no transmission) of light that strikes the boundary between two media at an angle greater than the critical angle|
|optical fibers||A transparent fiber, usually of glass or plastic, that can transmit light down its length by means of total internal reflection.|
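Several of the terms above (critical angle, total internal reflection, optical fibers) are tied together by Snell's law: sin(theta_c) = n2 / n1 for light going from a denser to a less dense medium. The short sketch below works the numbers for a typical glass-to-air boundary; the index values are common textbook figures assumed for illustration, not part of this vocabulary set.

```python
# A small sketch of the critical-angle idea behind total internal reflection,
# using Snell's law: sin(theta_c) = n2 / n1 (valid when n1 > n2). The indices
# below (glass ~1.5, air 1.0) are typical textbook values, not measurements.
import math

n_glass = 1.5   # refractive index of the denser medium (e.g., a glass fiber core)
n_air = 1.0     # refractive index of the less dense medium

theta_c = math.degrees(math.asin(n_air / n_glass))
print(f"Critical angle: {theta_c:.1f} degrees")

# Light striking the glass-air boundary at more than about 41.8 degrees from
# the normal is totally internally reflected -- the principle optical fibers
# use to carry light down their length.
```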
| http://quizlet.com/4484528/chapter-29-vocabularyyy-flash-cards/ | 13 |
11 | Lynching in the United States
Lynching, the practice of killing people by extrajudicial mob action, occurred in the United States chiefly from the late 18th century through the 1960s. Lynchings took place most frequently in the Southern United States from 1890 to the 1920s, with a peak in the annual toll in 1892. However, lynchings were also very common in the Old West.
It is associated with re-imposition of white supremacy in the South after the Civil War. The granting of U.S. Constitutional rights to freedmen in the Reconstruction era (1865–77) aroused anxieties among American citizens, who came to blame African Americans for their own wartime hardship, economic loss, and forfeiture of social privilege. Black Americans, and Whites active in the pursuit of integration rights, were sometimes lynched in the South during Reconstruction. Lynchings reached a peak in the late 19th and early 20th centuries, when Southern states changed their constitutions and electoral rules to disfranchise blacks and, having regained political power, enacted a series of segregation and Jim Crow laws to reestablish White supremacy. Notable lynchings of integration rights workers during the 1960s in Mississippi contributed to galvanizing public support for the Civil Rights Movement and civil rights legislation.
The Tuskegee Institute has recorded that 3,446 blacks and 1,297 whites were lynched between 1882 and 1968. Southern states created new constitutions between 1890 and 1910, with provisions that effectively disfranchised most blacks, as well as many poor whites. People who did not vote were excluded from serving on juries, and most blacks were shut out of the official political system.
African Americans mounted resistance to lynchings in numerous ways. Intellectuals and journalists encouraged public education, actively protesting and lobbying against lynch mob violence and government complicity in that violence. The National Association for the Advancement of Colored People (NAACP), as well as numerous other organizations, organized support from white and black Americans alike and conducted a national campaign to get a federal anti-lynching law passed; in 1920 the Republican Party promised at its national convention to support passage of such a law. In 1921 Leonidas C. Dyer sponsored an anti-lynching bill; it was passed in January 1922 in the United States House of Representatives, but a Senate filibuster by Southern white Democrats defeated it in December 1922. With the NAACP, Representative Dyer spoke across the country in support of his bill in 1923, but Southern Democrats again filibustered it in the Senate. He tried once more but was again unsuccessful.
African-American women's clubs, such as the Association of Southern Women for the Prevention of Lynching, raised funds to support the work of public campaigns, including anti-lynching plays. Their petition drives, letter campaigns, meetings and demonstrations helped to highlight the issues and combat lynching. From 1882 to 1968, "...nearly 200 anti-lynching bills were introduced in Congress, and three passed the House. Seven presidents between 1890 and 1952 petitioned Congress to pass a federal law." No bill was approved by the Senate because of the powerful opposition of the Southern Democratic voting block. In the Great Migration, extending in two waves from 1910 to 1970, 6.5 million African Americans left the South, primarily for northern and mid-western cities, for jobs and to escape the risk of lynchings.
Name origin
The term "Lynch's Law" – subsequently "lynch law" and "lynching" – apparently originated during the American Revolution when Charles Lynch, a Virginia justice of the peace, ordered extralegal punishment for Loyalists. In the South, members of the abolitionist movement and other people opposing slavery were also targets of lynch mob violence before the Civil War.
Social characteristics
One motive for lynchings, particularly in the South, was the enforcement of social conventions – punishing perceived violations of customs, later institutionalized as Jim Crow laws, mandating segregation of whites and blacks.
Financial gain and the ability to establish political and economic control provided another motive. For example, after the lynching of an African-American farmer or an immigrant merchant, the victim's property would often become available to whites. In much of the Deep South, lynchings peaked in the late 19th and early 20th centuries, as white racists turned to terrorism to dissuade blacks from voting. In the Mississippi Delta, lynchings of blacks increased beginning in the late 19th century, as white planters tried to control former slaves who had become landowners or sharecroppers. Lynchings were more frequent at the end of the year, when accounts had to be settled.
Lynchings would also occur in frontier areas where legal recourse was distant. In the West, cattle barons took the law into their own hands by hanging those whom they perceived as cattle and horse thieves.
African-American journalist and anti-lynching crusader Ida B. Wells wrote in the 1890s that black lynching victims were accused of rape or attempted rape only about one-third of the time. The most prevalent accusation was murder or attempted murder, followed by a list of infractions that included verbal and physical aggression, spirited business competition and independence of mind. White lynch mobs formed to restore the perceived social order. Lynch mob "policing" usually led to murder of the victims by white mobs. Law-enforcement authorities sometimes participated directly or held suspects in jail until a mob formed to carry out the murder.
Frontier and Civil War
There is much debate over the violent history of lynchings on the frontier, obscured by the mythology of the American Old West. In unorganized territories or sparsely settled states, security was often provided only by a U.S. Marshal who might, despite the appointment of deputies, be hours, or even days, away by horseback.
Lynchings in the Old West were often carried out against accused criminals in custody. Lynching did not so much substitute for an absent legal system as provide an alternative system that favored a particular social class or racial group. Historian Michael J. Pfeifer writes, "Contrary to the popular understanding, early territorial lynching did not flow from an absence or distance of law enforcement but rather from the social instability of early communities and their contest for property, status, and the definition of social order."
The San Francisco Vigilance Movement, for example, has traditionally been portrayed as a positive response to government corruption and rampant crime, though revisionists have argued that it created more lawlessness than it eliminated. It also had a strongly nativist tinge, initially focused against the Irish and later evolving into mob violence against Chinese and Mexican immigrants. In 1871, at least 18 Chinese-Americans were killed by a mob rampaging through Old Chinatown in Los Angeles, after a white businessman was inadvertently caught in the crossfire of a tong battle.
During the California Gold Rush, at least 25,000 Mexicans were longtime residents of California. The Treaty of Guadalupe Hidalgo (1848), which ended the Mexican-American War, expanded American territory by one-third: Mexico ceded all or parts of Arizona, California, Colorado, Kansas, New Mexico, Nevada, Oklahoma, Texas, Utah and Wyoming to the United States. In 1850, California became a state within the United States.
Many of the Mexicans who were native to what would become a state within the United States were experienced miners and had had great success mining gold in California. Their success aroused animosity by white prospectors who intimidated Mexican miners with the threat of violence and committed violence against some. Between 1848 and 1860, at least 163 Mexicans were lynched in California alone. One particularly infamous lynching occurred on July 5, 1851, when a Mexican woman named Juana Loaiza was lynched by a mob in Downieville, California. She was accused of killing a white man who had attempted to assault her after breaking into her home.
Another well-documented episode in the history of the American West is the Johnson County War, a dispute over land use in Wyoming in the 1890s. Large-scale ranchers, with the complicity of local and federal Republican politicians, hired mercenaries and assassins to lynch the small ranchers, mostly Democrats, who were their economic competitors and whom they portrayed as "cattle rustlers".
During the Civil War, Southern Home Guard units sometimes lynched white Southerners whom they suspected of being Unionists or deserters. One example of this was the hanging of Methodist minister Bill Sketoe in the southern Alabama town of Newton in December 1864. Other (fictional) examples of extrajudicial murder are portrayed in Charles Frazier's novel Cold Mountain.
Reconstruction (1865–1877)
The first heavy period of violence in the South was between 1868 and 1871. White Democrats attacked black and white Republicans. This was less the result of mob violence characteristic of later lynchings, however, than of insurgent secret vigilante actions by groups such as the Ku Klux Klan. To prevent ratification of new constitutions formed during Reconstruction, the opposition used various means to harass potential voters. Terrorist attacks and massacres marked the 1868 elections, with insurgents systematically murdering about 1,300 voters across southern states ranging from South Carolina to Arkansas.
After this partisan political violence had ended, lynchings in the South focused more on race than on partisan politics. They could be seen as a latter-day expression of the slave patrols, the bands of poor whites who policed the slaves and pursued escapees. The lynchers sometimes murdered their victims but sometimes whipped them to remind them of their former status as slaves. White terrorists often made nighttime raids on African-American homes in order to confiscate firearms. Lynchings to prevent freedmen and their allies from voting and bearing arms can be seen as extralegal ways of enforcing the Black Codes and the previous system of social dominance. The Freedmen's Bureau and the later Reconstruction Amendments overrode the Slave Codes.
Although some states took action against the Klan, the South needed federal help to deal with the escalating violence. President Ulysses S. Grant and Congress passed the Force Acts of 1870 and the Civil Rights Act of 1871, also known as the Ku Klux Klan Act, because it was passed to suppress the vigilante violence of the Klan. This enabled federal prosecution of crimes committed by groups such as the Ku Klux Klan, as well as use of federal troops to control violence. The administration began holding grand juries and prosecuting Klan members. In addition, it used martial law in some counties in South Carolina, where the Klan was the strongest. Under attack, the Klan dissipated. Vigorous federal action and the disappearance of the Klan had a strong effect in reducing the numbers of murders.
From the mid-1870s on in the Deep South, violence rose. In Mississippi, Louisiana, the Carolinas and Florida especially, the Democratic Party relied on paramilitary "White Line" groups such as the Knights of the White Camelia to terrorize, intimidate and assassinate African American and white Republicans in an organized drive to regain power. In Mississippi it was the Red Shirts, and in Louisiana the White League, that carried out the Democratic Party's goal of suppressing black voting. Insurgents targeted politically active African Americans and unleashed violence for general community intimidation. Grant's desire to keep Ohio in the Republican column and his attorney general's maneuvering led to a failure to support the Mississippi governor with Federal troops. The campaign of terror worked. In Yazoo County, for instance, with a black population of 12,000, only seven votes were cast for Republicans. In 1875, Democrats swept into power in the state legislature.
Once Democrats regained power in Mississippi, Democrats in other states adopted the Mississippi Plan to control the election of 1876, using informal armed militias to assassinate political leaders, hunt down community members, intimidate and turn away voters, effectively suppressing African American suffrage and civil rights. In state after state, Democrats swept back to power. From 1868 to 1876, most years had 50–100 lynchings.
White Democrats passed laws and constitutional amendments making voter registration more complicated, to further exclude black voters from the polls.
Disfranchisement (1877–1917)
Following white Democrats' regaining political power in the late 1870s, legislators gradually increased restrictions on voting, chiefly through statute. From 1890 to 1908, most of the Southern states, starting with Mississippi, created new constitutions with further provisions, including poll taxes, literacy and understanding tests, and increased residency requirements, that effectively disfranchised most blacks and many poor whites. Forcing them off voter registration lists also prevented them from serving on juries, whose members were limited to voters. Although challenges to such constitutions made their way to the Supreme Court in Williams v. Mississippi (1898) and Giles v. Harris (1903), the states' provisions were upheld.
Most lynchings from the late 19th through the early 20th century were of African Americans in the South, with other victims including white immigrants, and, in the southwest, Latinos. Of the 468 victims in Texas between 1885 and 1942, 339 were black, 77 white, 53 Hispanic, and 1 Indian. They reflected the tensions of labor and social changes, as the whites imposed Jim Crow rules, legal segregation and white supremacy. The lynchings were also an indicator of long economic stress due to falling cotton prices through much of the 19th century, as well as financial depression in the 1890s. In the Mississippi bottomlands, for instance, lynchings rose when crops and accounts were supposed to be settled.
The late 19th and early 20th century history of the Mississippi Delta showed both frontier influence and actions directed at repressing African Americans. After the Civil War, 90% of the Delta was still undeveloped. Both whites and African Americans migrated there for a chance to buy land in the backcountry. It was frontier wilderness, heavily forested and without roads for years. Before the start of the 20th century, lynchings often took the form of frontier justice directed at transient workers as well as residents. Thousands of workers were brought in to do lumbering and work on levees. Whites were lynched at a rate 35.5% higher than their proportion in the population, most often accused of crimes against property (chiefly theft). During the Delta's frontier era, blacks were lynched at a rate lower than their proportion in the population, unlike the rest of the South. They were accused of murder or attempted murder in half the cases, and of rape in 15%.
There was a clear seasonal pattern to the lynchings, with the colder months being the deadliest. As noted, cotton prices fell during the 1880s and 1890s, increasing economic pressures. "From September through December, the cotton was picked, debts were revealed, and profits (or losses) realized... Whether concluding old contracts or discussing new arrangements, [landlords and tenants] frequently came into conflict in these months and sometimes fell to blows." During the winter, murder was most cited as a cause for lynching. After 1901, as economics shifted and more blacks became renters and sharecroppers in the Delta, with few exceptions, only African-Americans were lynched. The frequency increased from 1901 to 1908, after African-Americans were disenfranchised. "In the twentieth century Delta vigilantism finally became predictably joined to white supremacy."
After their increased immigration to the US in the late 19th century, Italian Americans also became lynching targets, chiefly in the South, where they were recruited for laboring jobs. On March 14, 1891, eleven Italian Americans were lynched in New Orleans after a jury acquitted them in the murder of David Hennessy, an ethnic Irish New Orleans police chief. The eleven were falsely accused of being associated with the Mafia. This incident was one of the largest mass lynchings in U.S. history. A total of twenty Italians were lynched in the 1890s. Although most lynchings of Italian Americans occurred in the South, Italians had not immigrated there in great numbers. Isolated lynchings of Italians also occurred in New York, Pennsylvania, and Colorado.
Particularly in the West, Chinese immigrants, East Indians, Native Americans and Mexicans were also lynching victims. The lynching of Mexicans and Mexican Americans in the Southwest was long overlooked in American history, attention being chiefly focused on the South. The Tuskegee Institute, which kept the most complete records, noted the victims as simply black or white. Mexican, Chinese, and Native American lynching victims were recorded as white.
Researchers estimate 597 Mexicans were lynched between 1848 and 1928. Mexicans were lynched at a rate of 27.4 per 100,000 of population between 1880 and 1930. This statistic was second only to that of the African American community, which endured an average of 37.1 per 100,000 of population during that period. Between 1848 and 1879, Mexicans were lynched at an unprecedented rate of 473 per 100,000 of population.
Henry Smith, a troubled ex-slave accused of murdering a policeman's daughter, was one of the most famous lynched African-Americans. He was lynched at Paris, Texas, in 1893 for killing Myrtle Vance, the three-year-old daughter of a Texas policeman, after the policeman had assaulted Smith. Smith was not tried in a court of law. A large crowd followed the lynching, as was common then, in the style of public executions. Henry Smith was fastened to a wooden platform, tortured for 50 minutes by red-hot iron brands, then finally burned alive while over 10,000 spectators cheered.
Enforcing Jim Crow
After 1876, the frequency of lynching decreased somewhat as white Democrats regained political power throughout the South. The threat of lynching was used to terrorize freedmen and whites alike in order to maintain the re-asserted dominance of whites. Southern Republicans in Congress sought to protect black voting rights by using Federal troops for enforcement. A congressional deal to elect Ohio Republican Rutherford B. Hayes as President in 1876 (in spite of his losing the popular vote to New York Democrat Samuel J. Tilden) included a pledge to end Reconstruction in the South. The Redeemers, white Democrats who often included members of paramilitary groups such as the White Cappers, the Knights of the White Camelia, the White League and the Red Shirts, had used terrorist violence and assassinations to reduce the political power that black and white Republicans had gained during Reconstruction.
Lynchings both supported the power reversal and were public demonstrations of white power. Racial tensions had an economic base. In attempting to reconstruct the plantation economy, planters were anxious to control labor. In addition, agricultural depression was widespread and the price of cotton kept falling after the Civil War into the 1890s. There was a labor shortage in many parts of the Deep South, especially in the developing Mississippi Delta. Southern attempts to encourage immigrant labor were unsuccessful, as immigrants would quickly leave field labor. Lynchings erupted when farmers tried to terrorize the laborers, especially at settlement time, when employers who were unable to pay wages nevertheless tried to keep laborers from leaving.
More than 85 percent of the estimated 5,000 lynchings in the post-Civil War period occurred in the Southern states. 1892 was a peak year when 161 African Americans were lynched. The creation of the Jim Crow laws, beginning in the 1890s, completed the revival of white supremacy in the South. Terror and lynching were used to enforce both these formal laws and a variety of unwritten rules of conduct meant to assert white domination. In most years from 1889 to 1923, there were 50–100 lynchings annually across the South.
The ideology behind lynching, directly connected with denial of political and social equality, was stated forthrightly by Benjamin Tillman, a South Carolina governor and senator, speaking on the floor of the U.S. Senate in 1900:
We of the South have never recognized the right of the negro to govern white men, and we never will. We have never believed him to be the equal of the white man, and we will not submit to his gratifying his lust on our wives and daughters without lynching him.
Often victims were lynched by a small group of white vigilantes late at night. Sometimes, however, lynchings became mass spectacles with a circus-like atmosphere because they were intended to emphasize majority power. Children often attended these public lynchings. A large lynching might be announced beforehand in the newspaper. There were cases in which a lynching was timed so that a newspaper reporter could make his deadline. Photographers sold photos for postcards to make extra money. The event was publicized so that the intended audience, African Americans and whites who might challenge the society, was warned to stay in their places.
Fewer than one percent of lynch mob participants were ever convicted by local courts. By the late 19th century, trial juries in most of the southern United States were all white because African Americans had been disfranchised, and only registered voters could serve as jurors. Often juries never let the matter go past the inquest.
Such cases happened in the North as well. In 1892, a police officer in Port Jervis, New York, tried to stop the lynching of a black man who had been wrongfully accused of assaulting a white woman. The mob responded by putting the noose around the officer's neck as a way of scaring him. Although at the inquest the officer identified eight people who had participated in the lynching, including the former chief of police, the jury determined that the murder had been carried out "by person or persons unknown."
In Duluth, Minnesota, on June 15, 1920, three young African American traveling circus workers were lynched after having been jailed and accused of having raped a white woman. A physician's examination subsequently found no evidence of rape or assault. The alleged "motive" and action by a mob were consistent with the "community policing" model. A book titled The Lynchings in Duluth documented the events.
Although the rhetoric surrounding lynchings included justifications about protecting white women, the actions fundamentally grew out of attempts to maintain domination in a rapidly changing society and out of fears of social change. Victims were scapegoats for people's attempts to control agriculture, labor and education, as well as for disasters such as the boll weevil.
According to an April 2, 2002, article in Time:
- "There were lynchings in the Midwestern and Western states, mostly of Asians, Mexicans, and Native Americans. But it was in the South that lynching evolved into a semiofficial institution of racial terror against blacks. All across the former Confederacy, blacks who were suspected of crimes against whites—or even "offenses" no greater than failing to step aside for a white man's car or protesting a lynching—were tortured, hanged and burned to death by the thousands. In a prefatory essay in Without Sanctuary, historian Leon F. Litwack writes that between 1882 and 1968, at least 4,742 African Americans were murdered that way.
At the start of the 20th century in the United States, lynching was photographic sport. People sent picture postcards of lynchings they had witnessed. The practice was so base, a writer for Time noted in 2000, "Even the Nazis did not stoop to selling souvenirs of Auschwitz, but lynching scenes became a burgeoning subdepartment of the postcard industry. By 1908, the trade had grown so large, and the practice of sending postcards featuring the victims of mob murderers had become so repugnant, that the U.S. Postmaster General banned the cards from the mails."
- "The photographs stretch our credulity, even numb our minds and senses to the full extent of the horror, but they must be examined if we are to understand how normal men and women could live with, participate in, and defend such atrocities, even reinterpret them so they would not see themselves or be perceived as less than civilized. The men and women who tortured, dismembered, and murdered in this fashion understood perfectly well what they were doing and thought of themselves as perfectly normal human beings. Few had any ethical qualms about their actions. This was not the outburst of crazed men or uncontrolled barbarians but the triumph of a belief system that defined one people as less human than another. For the men and women who composed these mobs, as for those who remained silent and indifferent or who provided scholarly or scientific explanations, this was the highest idealism in the service of their race. One has only to view the self-satisfied expressions on their faces as they posed beneath black people hanging from a rope or next to the charred remains of a Negro who had been burned to death. What is most disturbing about these scenes is the discovery that the perpetrators of the crimes were ordinary people, not so different from ourselves – merchants, farmers, laborers, machine operators, teachers, doctors, lawyers, policemen, students; they were family men and women, good churchgoing folk who came to believe that keeping black people in their place was nothing less than pest control, a way of combating an epidemic or virus that if not checked would be detrimental to the health and security of the community."
African Americans emerged from the Civil War with the political experience and stature to resist attacks, but disenfranchisement and the decrease in their civil rights restricted their power to do much more than react after the fact by compiling statistics and publicizing the atrocities. From the early 1880s, the Chicago Tribune reprinted accounts of lynchings from the newspapers with which it exchanged copy, and began to publish annual statistics. These provided the main source for the compilations by the Tuskegee Institute to document lynchings, a practice it continued until 1968.
In 1892 journalist Ida B. Wells-Barnett was shocked when three friends in Memphis, Tennessee were lynched because their grocery store competed successfully with a white-owned store. Outraged, Wells-Barnett began a global anti-lynching campaign that raised awareness of the social injustice. As a result of her efforts, black women in the US became active in the anti-lynching crusade, often in the form of clubs which raised money to publicize the abuses. When the National Association for the Advancement of Colored People (NAACP) was formed in 1909, Wells became part of its multi-racial leadership and continued to be active against lynching.
In 1903 leading writer Charles Waddell Chesnutt published his article "The Disfranchisement of the Negro", detailing civil rights abuses and need for change in the South. Numerous writers appealed to the literate public.
In 1904 Mary Church Terrell, the first president of the National Association of Colored Women, published an article in the influential magazine North American Review to respond to Southerner Thomas Nelson Page. She took apart and refuted his attempted justification of lynching as a response to assaults on white women. Terrell showed how apologists like Page had tried to rationalize what were violent mob actions that were seldom based on true assaults.
Great Migration
In what has been viewed as multiple acts of resistance, tens of thousands of African Americans left the South annually, especially from 1910–1940, seeking jobs and better lives in industrial cities of the North and Midwest, in a movement that was called the Great Migration. More than 1.5 million people went North during this phase of the Great Migration. They refused to live under the rules of segregation and continual threat of violence, and many secured better educations and futures for themselves and their children, while adapting to the drastically different requirements of industrial cities. Northern industries such as the Pennsylvania Railroad and others, and stockyards and meatpacking plants in Chicago and Omaha, vigorously recruited southern workers. For instance, 10,000 men were hired from Florida and Georgia by 1923 by the Pennsylvania Railroad to work at their expanding yards and tracks.
Federal action limited by Solid South
President Theodore Roosevelt made public statements against lynching in 1903, following George White's murder in Delaware, and in his sixth annual State of the Union message on December 4, 1906. When Roosevelt suggested that lynching was taking place in the Philippines, southern senators (all white Democrats) demonstrated power by a filibuster in 1902 during review of the "Philippines Bill". In 1903 Roosevelt refrained from commenting on lynching during his Southern political campaigns.
Despite concerns expressed by some northern Congressmen, Congress had not moved quickly enough to strip the South of seats as the states disfranchised black voters. The result was a "Solid South" with the number of representatives (apportionment) based on its total population, but with only whites represented in Congress, essentially doubling the power of white southern Democrats.
In 1903, Roosevelt wrote to Governor Winfield T. Durbin of Indiana, commending his action against a lynch mob:
My Dear Governor Durbin, ... permit me to thank you as an American citizen for the admirable way in which you have vindicated the majesty of the law by your recent action in reference to lynching... All thoughtful men... must feel the gravest alarm over the growth of lynching in this country, and especially over the peculiarly hideous forms so often taken by mob violence when colored men are the victims – on which occasions the mob seems to lay more weight, not on the crime but on the color of the criminal.... There are certain hideous sights which when once seen can never be wholly erased from the mental retina. The mere fact of having seen them implies degradation.... Whoever in any part of our country has ever taken part in lawlessly putting to death a criminal by the dreadful torture of fire must forever after have the awful spectacle of his own handiwork seared into his brain and soul. He can never again be the same man.
Durbin had successfully used the National Guard to disperse the lynchers. Durbin publicly declared that the accused murderer—an African American man—was entitled to a fair trial. Roosevelt's efforts cost him political support among white people, especially in the South. In addition, threats against him increased so that the Secret Service increased the size of his detail.
World War I to World War II
African-American writers used their talents in numerous ways to publicize and protest against lynching. In 1914, Angelina Weld Grimké had already written her play Rachel to address racial violence. It was produced in 1916. In 1915, W. E. B. Du Bois, noted scholar and head of the recently formed NAACP, called for more black-authored plays.
African-American women playwrights responded strongly, writing ten of the fourteen anti-lynching plays produced between 1916 and 1935. The NAACP set up a Drama Committee to encourage such work. In addition, Howard University, the leading historically black college, established a theater department in 1920 to encourage African-American dramatists. Starting in 1924, the NAACP's major publications Crisis and Opportunity sponsored contests to encourage black literary production.
New Klan
The Klan revived and grew because of white people's anxieties and fear over the rapid pace of change. Both white and black rural migrants were moving into rapidly industrializing cities of the South. Many Southern white and African-American migrants also moved north in the Great Migration, adding to greatly increased immigration from southern and eastern Europe in major industrial cities of the Midwest and West. The Klan grew rapidly and became most successful and strongest in those cities that had a rapid pace of growth from 1910 to 1930, such as Atlanta, Birmingham, Dallas, Detroit, Indianapolis, Chicago, Portland, Oregon; and Denver, Colorado. It reached a peak of membership and influence about 1925. In some cities, leaders' actions to publish names of Klan members provided enough publicity to sharply reduce membership.
The 1915 murder near Atlanta, Georgia, of factory manager Leo Frank, an American Jew, was a notorious lynching. Sensationalist newspaper accounts stirred up anger against Frank, who was accused of murdering Mary Phagan, a girl employed by his factory. He was convicted after a flawed trial in Georgia, and his appeals failed. Supreme Court Justice Oliver Wendell Holmes's dissent condemned the intimidation of the jury as a failure to provide due process of law. After the governor commuted Frank's sentence to life imprisonment, a mob calling itself the Knights of Mary Phagan kidnapped Frank from the prison farm at Milledgeville and lynched him.
Georgia politician and publisher Tom Watson used sensational coverage of the Frank trial to create power for himself. By playing on people's anxieties, he also built support for revival of the Ku Klux Klan. The new Klan was inaugurated in 1915 at a mountaintop meeting near Atlanta, and was composed mostly of members of the Knights of Mary Phagan. D. W. Griffith's 1915 film The Birth of a Nation glorified the original Klan and garnered much publicity.
Continuing resistance
The NAACP mounted a strong nationwide campaign of protests and public education against the movie The Birth of a Nation. As a result, some city governments prohibited release of the film. In addition, the NAACP publicized the production of and helped create audiences for the 1919 releases The Birth of a Race and Within Our Gates, African-American directed films that presented more positive images of blacks.
On April 1, 1918, Missouri Rep. Leonidas C. Dyer of St. Louis introduced the Dyer Anti-Lynching Bill in the House. Rep. Dyer was concerned over increased lynching and mob violence and its disregard for the "rule of law" in the South. The bill made lynching a federal crime, so that those who participated in lynching would be prosecuted by the federal government.
In 1920 the black community succeeded in getting its most important priority into the Republican Party's platform at the National Convention: support for an anti-lynching bill. The black community supported Warren G. Harding in that election, but was disappointed as his administration moved slowly on a bill.
Dyer revised his bill and re-introduced it to the House in 1921. It passed the House on January 22, 1922, due to "insistent country-wide demand", and was favorably reported out by the Senate Judiciary Committee. Action in the Senate was delayed, and ultimately the Democratic Solid South filibuster defeated the bill in the Senate in December. In 1923, Dyer went on a midwestern and western state tour promoting the anti-lynching bill; he praised the NAACP's work for continuing to publicize lynching in the South and for supporting the federal bill. Dyer's anti-lynching motto was "We have just begun to fight," and he helped generate additional national support. His bill was twice more defeated in the Senate by Southern Democratic filibuster. The Republicans were unable to pass a bill in the 1920s.
African-American resistance to lynching carried substantial risks. In 1921 in Tulsa, Oklahoma, a group of African American citizens attempted to stop a lynch mob from taking 19-year-old assault suspect Dick Rowland out of jail. In a scuffle between a white man and an armed African-American veteran, the white man was killed. Whites retaliated by rioting, during which they burned 1,256 homes and as many as 200 businesses in the segregated Greenwood district, destroying what had been a thriving area. Confirmed dead were 39 people: 26 African Americans and 13 whites. Recent investigations suggest the number of African-American deaths may have been much higher. Rowland was saved, however, and was later exonerated.
The growing networks of African-American women's club groups were instrumental in raising funds to support the NAACP public education and lobbying campaigns. They also built community organizations. In 1922 Mary Talbert headed the Anti-Lynching Crusade, to create an integrated women's movement against lynching. It was affiliated with the NAACP, which mounted a multi-faceted campaign. For years the NAACP used petition drives, letters to newspapers, articles, posters, lobbying Congress, and marches to protest the abuses in the South and keep the issue before the public.
While the second KKK grew rapidly in cities undergoing major change and achieved some political power, many state and city leaders, including white religious leaders such as Reinhold Niebuhr in Detroit, acted strongly and spoke out publicly against the organization. Some anti-Klan groups published members' names, which quickly drained energy from the Klan's efforts. As a result, in most areas, after 1925 KKK membership and organizations rapidly declined. Cities passed laws against wearing of masks, and otherwise acted against the KKK.
In 1930, Southern white women responded in large numbers to the leadership of Jessie Daniel Ames in forming the Association of Southern Women for the Prevention of Lynching. She and her co-founders obtained the signatures of 40,000 women to their pledge against lynching and for a change in the South. The pledge included the statement:
In light of the facts we dare no longer to... allow those bent upon personal revenge and savagery to commit acts of violence and lawlessness in the name of women.
Despite physical threats and hostile opposition, the women leaders persisted with petition drives, letter campaigns, meetings and demonstrations to highlight the issues. By the 1930s, the number of lynchings had dropped to about ten per year in Southern states.
In the 1930s, communist organizations, including a legal defense organization called the International Labor Defense (ILD), organized support to stop lynching. (See The Communist Party USA and African-Americans). The ILD defended the Scottsboro Boys, as well as three black men accused of rape in Tuscaloosa in 1933. In the Tuscaloosa case, two defendants were lynched under circumstances that suggested police complicity. The ILD lawyers narrowly escaped lynching. Many Southerners resented them for their perceived "interference" in local affairs. In a remark to an investigator, a white Tuscaloosan said, "For New York Jews to butt in and spread communistic ideas is too much."
Federal action and southern resistance
Anti-lynching advocates such as Mary McLeod Bethune and Walter Francis White campaigned for presidential candidate Franklin D. Roosevelt in 1932. They hoped he would lend public support to their efforts against lynching. Senators Robert F. Wagner and Edward P. Costigan drafted the Costigan-Wagner bill in 1934 to require local authorities to protect prisoners from lynch mobs. Like the Dyer bill, it made lynching a Federal crime in order to take it out of state administration.
Southern Senators continued to hold a hammerlock on Congress. Because of the Southern Democrats' disfranchisement of African Americans in Southern states at the start of the 20th century, Southern whites for decades had nearly double the representation in Congress that their own numbers would have warranted. Southern states had Congressional representation based on total population, but essentially only whites could vote and only their issues were supported. Due to seniority achieved through one-party Democratic rule in their region, Southern Democrats controlled many important committees in both houses. Southern Democrats consistently opposed any legislation that would put lynching under Federal oversight. As a result, Southern white Democrats were a formidable power in Congress until the 1960s.
In the 1930s, virtually all Southern senators blocked the proposed Wagner-Costigan bill. Southern senators used a filibuster to prevent a vote on the bill. Some Republican senators, such as the conservative William Borah from Idaho, opposed the bill for constitutional reasons. He felt it encroached on state sovereignty and, by the 1930s, thought that social conditions had changed so that the bill was less needed. He spoke at length in opposition to the bill in 1935 and 1938. There were 15 lynchings of blacks in 1934 and 21 in 1935, but that number fell to eight in 1936, and to two in 1939.
A lynching in Fort Lauderdale, Florida, changed the political climate in Washington. On July 19, 1935, Rubin Stacy, a homeless African-American tenant farmer, knocked on doors begging for food. After resident complaints, deputies took Stacy into custody. While he was in custody, a lynch mob took Stacy from the deputies and murdered him. Although the faces of his murderers could be seen in a photo taken at the lynching site, the state did not prosecute the murder of Rubin Stacy.
Stacy's murder galvanized anti-lynching activists, but President Roosevelt did not support the federal anti-lynching bill. He feared that support would cost him Southern votes in the 1936 election. He believed that he could accomplish more for more people by getting re-elected.
World War II to present
Second Great Migration
The industrial buildup to World War II acted as a "pull" factor in the Second Great Migration, which started in 1940 and lasted until 1970. Altogether, from 1910 to 1970, 6.5 million African Americans migrated from the South to leave lynchings and segregation behind, improve their lives and get better educations for their children. Unlike the first wave, composed chiefly of rural farm workers, the second wave included more educated workers and their families who were already living in southern cities and towns. In this migration, many moved west from Louisiana, Mississippi and Texas to California in addition to northern and midwestern cities, as defense industries recruited thousands to higher-paying, skilled jobs. They settled in Los Angeles, San Francisco and Oakland.
Federal action
In 1946, the Civil Rights Section of the Justice Department gained its first conviction under federal civil rights laws against a lyncher. Florida constable Tom Crews was sentenced to a $1,000 fine and one year in prison for civil rights violations in the killing of an African-American farm worker.
In 1946, a mob of white men shot and killed two young African-American couples near Moore's Ford Bridge in Walton County, Georgia 60 miles east of Atlanta. This lynching of four young sharecroppers, one a World War II veteran, shocked the nation. The attack was a key factor in President Harry S. Truman's making civil rights a priority of his administration. Although the Federal Bureau of Investigation (FBI) investigated the crime, they were unable to prosecute. It was the last documented lynching of so many people.
In 1947, the Truman Administration published a report titled To Secure These Rights, which advocated making lynching a federal crime, abolishing poll taxes, and other civil rights reforms. The Southern Democratic bloc of senators and congressmen continued to obstruct attempts at federal legislation.
In the 1940s, the Klan openly criticized Truman for his efforts to promote civil rights. Later historians documented that Truman had briefly made an attempt to join the Klan as a young man in 1924, when it was near its peak of social influence in promoting itself as a fraternal organization. When a Klan officer demanded that Truman pledge not to hire any Catholics if he were reelected as county judge, Truman refused; he personally knew the worth of Catholic soldiers from his World War I service. His membership fee was returned and he never joined the KKK.
Lynching and the Cold War
With the beginning of the Cold War after World War II, the Soviet Union criticized the United States for the frequency of lynchings of black people. In a meeting with President Harry Truman in 1946, Paul Robeson urged him to take action against lynching. In 1951, Paul Robeson and the Civil Rights Congress made a presentation entitled "We Charge Genocide" to the United Nations. They argued that the US government was guilty of genocide under Article II of the UN Genocide Convention because it failed to act against lynchings. The UN took no action.
In the postwar years of the Cold War, the FBI was worried more about possible Communist connections among anti-lynching groups than about the lynching crimes. For instance, the FBI branded Albert Einstein a communist sympathizer for joining Robeson's American Crusade Against Lynching. J. Edgar Hoover, head of the FBI for decades, was particularly fearful of the effects of Communism in the US. He directed more attention to investigations of civil rights groups for communist connections than to Ku Klux Klan activities against the groups' members and other innocent blacks.
Civil Rights Movement
By the 1950s, the Civil Rights Movement was gaining momentum. Membership in the NAACP increased in states across the country. The NAACP achieved a significant US Supreme Court victory in 1954 with the ruling that segregated public education was unconstitutional. A 1955 lynching that sparked public outrage about injustice was that of Emmett Till, a 14-year-old boy from Chicago. Spending the summer with relatives in Money, Mississippi, Till was killed for allegedly having wolf-whistled at a white woman. Till had been badly beaten, one of his eyes was gouged out, and he was shot in the head before being thrown into the Tallahatchie River, his body weighed down with a 70-pound (32 kg) cotton gin fan tied around his neck with barbed wire. His mother insisted on a public funeral with an open casket, to show people how badly Till's body had been disfigured. News photographs circulated around the country and drew intense public reaction. People across the nation were horrified that a boy could have been killed for such an incident. The state of Mississippi tried two defendants, but they were speedily acquitted.
In the 1960s the Civil Rights Movement attracted students to the South from all over the country to work on voter registration and other issues. The intervention of people from outside the communities and threat of social change aroused fear and resentment among many whites. In June 1964, three civil rights workers disappeared in Neshoba County, Mississippi. They had been investigating the arson of a black church being used as a "Freedom School". Six weeks later, their bodies were found in a partially constructed dam near Philadelphia, Mississippi. James Chaney of Meridian, Mississippi, and Michael Schwerner and Andrew Goodman of New York had been members of the Congress of Racial Equality. They had been dedicated to non-violent direct action against racial discrimination.
Using 19th-century Federal civil rights law, the US prosecuted eighteen men for a Ku Klux Klan conspiracy to deprive the victims of their civil rights, which allowed the crime to be tried in Federal court. Seven men were convicted but received light sentences, two were released because of a deadlocked jury, and the remainder were acquitted. In 2005, 80-year-old Edgar Ray Killen, one of the men who had earlier gone free, was retried by the state of Mississippi, convicted of three counts of manslaughter in a new trial, and sentenced to 60 years in prison.
Because of J. Edgar Hoover's and others' hostility to the Civil Rights Movement, agents of the FBI resorted to outright lying to smear civil rights workers and other opponents of lynching. For example, the FBI disseminated false information in the press about the lynching victim Viola Liuzzo, who was murdered in 1965 in Alabama. The FBI said Liuzzo had been a member of the Communist Party USA, had abandoned her five children, and was involved in sexual relationships with African Americans in the movement.
After the Civil Rights Movement
Although lynchings have become rare following the civil rights movement and changing social mores, some have occurred. In 1981, two KKK members in Alabama randomly selected a 19-year-old black man, Michael Donald, and murdered him, to retaliate for a jury's acquittal of a black man accused of murdering a police officer. The Klansmen were caught, prosecuted, and convicted. A $7 million judgment in a civil suit against the Klan bankrupted the local subgroup, the United Klans of America.
In 1998, Shawn Allen Berry, Lawrence Russell Brewer, and ex-convict John William King murdered James Byrd, Jr. in Jasper, Texas. Byrd was a 49-year-old father of three, who had accepted an early-morning ride home with the three men. They attacked him and dragged him to his death behind their truck. The three men dumped their victim's mutilated remains in the town's segregated African-American cemetery and then went to a barbecue. Local authorities immediately treated the murder as a hate crime and requested FBI assistance. The murderers (two of whom turned out to be members of a white supremacist prison gang) were caught and stood trial. Brewer and King were sentenced to death; Berry was sentenced to life in prison.
On June 13, 2005, the United States Senate formally apologized for its failure in the early 20th century, "when it was most needed", to enact a Federal anti-lynching law. Anti-lynching bills that passed the House were defeated by filibusters by powerful Southern Democratic senators. Prior to the vote, Louisiana Senator Mary Landrieu noted, "There may be no other injustice in American history for which the Senate so uniquely bears responsibility." The resolution was passed on a voice vote with 80 senators cosponsoring. The resolution expressed "the deepest sympathies and most solemn regrets of the Senate to the descendants of victims of lynching, the ancestors of whom were deprived of life, human dignity and the constitutional protections accorded all citizens of the United States".
There are three primary sources for lynching statistics, none of which cover the entire time period of lynching in the United States. Before 1882, no reliable statistics are available. In 1882, the Chicago Tribune began to systematically record lynchings. Then, in 1892, Tuskegee Institute began a systematic collection and tabulation of lynching statistics, primarily from newspaper reports. Finally, in 1912, the National Association for the Advancement of Colored People started an independent record of lynchings. The numbers of lynchings from each source vary slightly, with the Tuskegee Institute's figures being considered "conservative" by some historians.
Tuskegee Institute, now Tuskegee University, has defined conditions that constitute a recognized lynching:
- "There must be legal evidence that a person was killed. That person must have met death illegally. A group of three or more persons must have participated in the killing. The group must have acted under the pretext of service to Justice, Race, or Tradition."
Tuskegee remains the single most complete source of statistics and records on this crime since 1882. As of 1959, which was the last time that their annual Lynch Report was published, a total of 4,733 persons had died as a result of lynching since 1882. To quote the report,
- "Except for 1955, when three lynchings were reported in Mississippi, none has been recorded at Tuskegee since 1951. In 1945, 1947, and 1951, only one case per year was reported. The most recent case reported by the institute as a lynching was that of Emmett Till, 14, a Negro who was beaten, shot to death, and thrown into a river at Greenwood, Mississippi on August 28, 1955... For a period of 65 years ending in 1947, at least one lynching was reported each year. The most for any year was 231 in 1892. From 1882 to 1901, lynchings averaged more than 150 a year. Since 1924, lynchings have been in a marked decline, never more than 30 cases, which occurred in 1926...."
Opponents of anti-lynching legislation often claimed that lynchings prevented murder and rape. As documented by Ida B. Wells, rape charges or rumors were present in less than one-third of the lynchings; such charges were often pretexts for lynching blacks who violated Jim Crow etiquette or engaged in economic competition with whites. Other common reasons given included arson, theft, assault, and robbery; sexual transgressions (miscegenation, adultery, cohabitation); "race prejudice", "race hatred", "racial disturbance"; informing on others; "threats against whites"; and violations of the color line ("attending white girl", "proposals to white woman").
Tuskegee's method of categorizing most lynching victims as either black or white in publications and data summaries meant that the mistreatment of some minority and immigrant groups was obscured. In the West, for instance, Mexicans, Native Americans, and Chinese were more frequent targets of lynchings than African Americans, but their deaths were included among those of whites. Similarly, although Italian immigrants were the focus of violence in Louisiana when they started arriving in greater numbers, their deaths were not identified separately. In earlier years, whites who were subject to lynching were often targeted because of suspected political activities or support of freedmen, but they were generally considered members of the community in a way new immigrants were not.
Popular culture
Famous fictional treatments
- Owen Wister's The Virginian, a 1902 seminal novel that helped create the genre of Western novels in the U.S., dealt with a fictional treatment of the Johnson County War and frontier lynchings in the West.
- Angelina Weld Grimké's Rachel was the first play about the toll of racial violence in African-American families, written in 1914 and produced in 1916.
- Following the commercial and critical success of the film Birth of a Nation, African-American director and writer Oscar Micheaux responded in 1919 with the film Within Our Gates. The climax of the film is the lynching of a black family after one member of the family is wrongly accused of murder. While the film was a commercial failure at the time, it is considered historically significant and was selected for preservation in the National Film Registry.
- Regina M. Anderson's Climbing Jacob's Ladder was a play about a lynching performed by the Krigwa Players (later called the Negro Experimental Theater), a Harlem theater company.
- Lynd Ward's 1932 book Wild Pilgrimage (printed in woodblock prints, with no text) includes three prints of the lynching of several black men.
- In Irving Berlin's 1933 musical As Thousands Cheer, a ballad about lynching, "Supper Time" was introduced by Ethel Waters. Waters wrote in her 1951 autobiography, His Eye Was on the Sparrow, "if one song could tell the story of an entire race, that was it."
- Murder in Harlem (1935), by director Oscar Micheaux, was one of three films Micheaux made based on events in the Leo Frank trial. He portrayed the character analogous to Frank as guilty and set the film in New York, removing sectional conflict as one of the cultural forces in the trial. Micheaux's first version was the silent film The Gunsaulus Mystery (1921); Lem Hawkins' Confession (1935) was also related to the Leo Frank trial.
- The film They Won't Forget (1937) was inspired by the Frank case, with the Leo Frank character portrayed as a Christian.
- In Fury (1936), the German expatriate Fritz Lang depicts a lynch mob burning down a jail in which Joe Wilson (played by Spencer Tracy) was held as a suspect in a kidnapping, a crime for which Wilson was soon after cleared. The story was modeled on a 1933 lynching in San Jose, California, which was captured on newsreel footage and in which Governor of California James Rolph refused to intervene.
- In Walter Van Tilburg Clark's 1940 novel The Ox-Bow Incident, two drifters are drawn into a posse formed to find the murderer of a local man. After suspicion centers on three innocent men accused of cattle rustling, they are lynched, an event that deeply affects the drifters. The novel was filmed in 1943 as a wartime defense of United States values against the characterization of Nazi Germany as mob rule.
- In Harper Lee's 1960 novel To Kill a Mockingbird, Tom Robinson, a black man wrongfully accused of rape, narrowly escapes lynching. Robinson is later killed while attempting to escape from prison, after having been wrongfully convicted. A movie was made in 1962.
- The 1988 film Mississippi Burning includes an accurate depiction of a man being lynched.
- Peter Matthiessen depicted several lynchings in his Killing Mr. Watson trilogy (first volume published in 1990).
- Vendetta, a 1999 HBO film starring Christopher Walken and directed by Nicholas Meyer, is based on events that took place in New Orleans in 1891. The acquittal of 18 Italian-American men falsely accused of the murder of police chief David Hennessy led to 11 of them being shot or hanged in one of the largest mass lynchings in American history.
"Strange Fruit"
Southern trees bear a strange fruit,
Blood on the leaves and blood at the root,
Black bodies swinging in the Southern breeze,
Strange fruit hanging from the poplar trees.
Pastoral scene of the gallant South,
The bulging eyes and the twisted mouth,
Scent of magnolia, sweet and fresh,
Then the sudden smell of burning flesh.
Here is a fruit for the crows to pluck,
For the rain to gather, for the wind to suck,
For the sun to rot, for the tree to drop,
Here is a strange and bitter crop.
Although Billie Holiday's regular label, Columbia, declined to record the song, Holiday recorded it with Commodore in 1939. The song became identified with her and was one of her most popular ones. It became an anthem for the anti-lynching movement and contributed to the activism of the American civil rights movement. A documentary about the song and the effects of protest songs and art, entitled Strange Fruit (2002) and produced by the Public Broadcasting Service, was aired on U.S. television.
For most of the history of the United States, lynching was rarely prosecuted, as the people who would have had to prosecute were generally on the side of the lynchers. When it was prosecuted, it was under state murder statutes. In one example, in 1907–09 the U.S. Supreme Court tried its only criminal case in history, United States v. Shipp, 203 U.S. 563. Sheriff Shipp was found guilty of criminal contempt for doing nothing to stop the mob in Chattanooga, Tennessee, that lynched Ed Johnson, who was in jail for rape. In the South, blacks generally were not able to serve on juries, as they could not vote, having been disfranchised by discriminatory voter registration and electoral rules passed by majority-white legislatures in the late 19th century, a time coinciding with their imposition of Jim Crow laws.
Starting in 1909, federal legislators introduced more than 200 bills in Congress to make lynching a Federal crime, but they failed to pass, chiefly because of Southern legislators' opposition. Because Southern states had effectively disfranchised African Americans at the start of the 20th century, the white Southern Democrats controlled all the seats of the South, nearly double the Congressional representation that white residents alone would have been entitled to. They were a powerful voting bloc for decades. The Senate Democrats formed a bloc that filibustered for a week in December 1922, holding up all national business, to defeat the Dyer Anti-Lynching Bill. It had passed the House in January 1922 with broad support except for the South. Rep. Leonidas C. Dyer, the chief sponsor, undertook a national speaking tour in support of the bill in 1923, but the Southern Senators defeated it twice more in the next two sessions.
Under the Franklin D. Roosevelt Administration, the Civil Rights Section of the Justice Department tried, but failed, to prosecute lynchers under Reconstruction-era civil rights laws. The first successful Federal prosecution of a lyncher for a civil rights violation was in 1946. By that time, the era of lynchings as a common occurrence had ended. Adam Clayton Powell, Jr. succeeded in gaining House passage of an anti-lynching bill, but it was defeated in the Senate.
Many states have passed anti-lynching statutes. California defines lynching, punishable by 2–4 years in prison, as "the taking by means of a riot of any person from the lawful custody of any peace officer", with the crime of "riot" defined as two or more people using violence or the threat of violence. A lyncher could thus be prosecuted for several crimes arising from the same action, e.g., riot, lynching, and murder. Although lynching in the historic sense is virtually nonexistent today, the lynching statutes are sometimes used in cases where several people try to wrest a suspect from the hands of police in order to help him escape, as alleged in a July 9, 2005, violent attack on a police officer in San Francisco.
South Carolina law defines second-degree lynching as "any act of violence inflicted by a mob upon the body of another person and from which death does not result shall constitute the crime of lynching in the second degree and shall be a felony. Any person found guilty of lynching in the second degree shall be confined at hard labor in the State Penitentiary for a term not exceeding twenty years nor less than three years, at the discretion of the presiding judge." In 2006, five white teenagers were given various sentences for second-degree lynching in a non-lethal attack of a young black man in South Carolina.
From 1882 to 1968, "...nearly 200 anti-lynching bills were introduced in Congress, and three passed the House. Seven presidents between 1890 and 1952 petitioned Congress to pass a federal law." In 2005 by a resolution sponsored by senators Mary Landrieu of Louisiana and George Allen of Virginia, and passed by voice vote, the Senate made a formal apology for its failure to pass an anti-lynching law "when it was most needed."
See also
- "And you are lynching Negroes", Soviet Union response to United States' allegations of human-rights violations in the Soviet Union
- Domestic terrorism
- East St. Louis Riot of 1917
- Hanging judge
- Hate crime laws in the United States
- Mass racial violence in the United States
- New York Draft Riots of 1863
- Omaha Race Riot of 1919
- Red Summer of 1919
- Rosewood, Florida, race riot of 1923
- Tarring and feathering
- "Lynchings: By State and Race, 1882–1968". University of Missouri-Kansas City School of Law. Retrieved 2010-07-26. "Statistics provided by the Archives at Tuskegee Institute."
- Davis, Angela Y. (1983). Women, Race & Class. New York: Vintage Books, pp. 194–195
- Associated Press, "Senate Apologizes for Not Passing Anti-Lynching Laws", Fox News
- Lynching an Abolitionist in Mississippi. New York Times. September 18, 1857. Retrieved on 2011-11-08.
- Nell Painter Articles – Who Was Lynched?. Nellpainter.com (1991-11-11). Retrieved on 2011-11-08.
- Pfeifer, Michael J. Rough Justice: Lynching and American Society, 1874–1947, Chicago: University of Illinois Press, 2004
- Carrigan, William D. "The lynching of persons of Mexican origin or descent in the United States, 1848 to 1928". Retrieved 2011-11-07.
- Latinas: Area Studies Collections. Memory.loc.gov. Retrieved on 2011-11-08.
- Budiansky, 2008, passim
- Dray, Philip. At the Hands of Persons Unknown: The Lynching of Black America, New York: Random House, 2002
- Lemann, 2005, pp. 135–154.
- Lemann, 2005, p. 180.
- "Lynchings: By Year and Race". University of Missouri-Kansas City School of Law. Retrieved 2010-07-26. "Statistics provided by the Archives at Tuskegee Institute."
- Ross, John R. "Lynching". Handbook of Texas Online. Texas State Historical Association. Retrieved 2011-11-07.
- Willis, 2000, pp. 154–155
- Willis, 2000, p. 157.
- "Chief of Police David C. Hennessy". The Officer Down Memorial Page, Inc. Retrieved 2011-11-07.
- "Under Attack". American Memory, Library of Congress, Retrieved February 26, 2010
- Carrigan, William D. "The lynching of persons of Mexican origin or descent in the United States, 1848 to 1928". Retrieved 2011-11-07.
- Carrigan, William D. "The lynching of persons of Mexican origin or descent in the United States, 1848 to 1928". Retrieved 2011-11-07.
- Davis, Gode (2005-09). "American Lynching: A Documentary Feature". Retrieved 2011-11-07.
- Burned at the Stake: A Black Man Pays for a Town’s Outrage. Historymatters.gmu.edu. Retrieved on 2011-11-08.
- "Deputy Sheriff George H. Loney". The Officer Down Memorial Page, Inc. Retrieved 2011-11-07.
- "Shaped by Site: Three Communities' Dialogues on the Legacies of Lynching." National Park Service. Accessed October 29, 2008.
- Herbert, Bob (January 22, 2008). "The Blight That Is Still With Us". The New York Times. Retrieved January 22, 2008.
- Pfeifer, 2004, p. 35.
- Fedo, Michael, The Lynchings in Duluth. St. Paul, Minnesota: Minnesota Historical Society Press, 2000. ISBN 0-87351-386-X
- Robert A. Gibson. "The Negro Holocaust: Lynching and Race Riots in the United States,1880–1950". Yale-New Haven Teachers Institute. Retrieved 2010-07-26.
- Richard Lacayo, "Blood At The Root", Time, April 2, 2000
- Wexler, Laura (2005-06-19). "A Sorry History: Why an Apology From the Senate Can't Make Amends". Washington Post. pp. B1. Retrieved 2011-11-07.
- SallyAnn H. Ferguson, ed., Charles W. Chesnutt: Selected Writings. Boston: Houghton Mifflin Company, 2001, pp. 65–81
- Davis, Angela Y. (1983). Women, Race & Class. New York: Vintage Books. p. 193
- Maxine D. Rogers, Larry E. Rivers, David R. Colburn, R. Tom Dye, and William W. Rogers, Documented History of the Incident Which Occurred at Rosewood, Florida in January 1923, December 1993, accessed 28 March 2008
- Morris, 2001, pp. 110–11, 246–49, 250, 258–59, 261–62, 472.
- McCaskill; with Gebard, 2006, pp. 210–212.
- Jackson, 1967, p. 241.
- Ernest Harvier, "Political Effect of the Dyer Bill: Delay in Enacting Anti-Lynching Law Diverted Thousands of Negro Votes", New York Times, 9 July 1922, accessed 26 July 2011
- "Filibuster Kills Anti-Lynching Bil", New York Times, 3 December 1922, accessed 20 July 2011
- Rucker; with Upton and Howard, 2007, pp. 182–183.
- Jackson, 1992, ?
- "Proceedings of the U.S. Senate on June 13, 2005 regarding the "Senate Apology" as Reported in the 'Congressional Record'", "Part 3, Mr. Craig", at African American Studies, University of Buffalo, accessed 26 July 2011
- Wood, Amy Louise. Lynching and Spectacle: Witnessing Racial Violence in America, 1890-1940. University of North Carolina Press. p. 196.
- Rubin Stacy. Ft. Lauderdale, Florida. July 19, 1935. strangefruit.org (1935-07-19). Retrieved on 2011-11-08.
- Wexler, Laura. Fire in a Canebrake: The Last Mass Lynching in America, New York: Scribner, 2003
- "To Secure These Rights: The Report of the President's Committee on Civil Rights". Harry S. Truman Library and Museum. Retrieved 2010-07-26.
- Wade, 1987, p. 196, gave a similar account, but suggested that the meeting was a regular Klan one. An interview with Truman's friend Hinde at the Truman Library's web site (http://www.trumanlibrary.org/oralhist/hindeeg.htm, retrieved June 26, 2005) portrayed the meeting as one-on-one at the Hotel Baltimore with a Klan organizer named Jones. Truman's biography, written by his daughter Margaret (Truman, 1973), agreed with Hinde's version but did not mention the $10 initiation fee. The biography included a copy of a telegram from O.L. Chrisman stating that reporters from the Hearst Corporation papers had questioned him about Truman's past with the Klan. He said he had seen Truman at a Klan meeting, but that "if he ever became a member of the Klan I did not know it."
- Fred Jerome, The Einstein File, St. Martin's Press, 2000; foia.fbi.gov/foiaindex/einstein.htm
- Detroit News, September 30, 2004; [dead link]
- "Ku Klux Klan", Spartacus Educational, retrieved June 26, 2005.
- "Closing arguments today in Texas dragging-death trial", CNN, February 22, 1999
- "The murder of James Byrd, Jr.", The Texas Observer, September 17, 1999
- Thomas-Lester, Avis (June 14, 2005), "A Senate Apology for History on Lynching", The Washington Post, p. A12, retrieved June 26, 2005.
- "1959 Tuskegee Institute Lynch Report", Montgomery Advertiser; April 26, 1959, re-printed in 100 Years Of Lynching by Ralph Ginzburg (1962, 1988).
- Ida B. Wells, Southern Horrors, 1892.
- The lynching of persons of Mexican origin or descent in the United States, 1848 to 1928. Journal of Social History. Findarticles.com. Retrieved on 2011-11-08.
- Matthew Bernstein (2004). "Oscar Micheaux and Leo Frank: Cinematic Justice Across the Color Line". Film Quarterly 57 (4): 8.
- "Killing Mr. Watson", New York ''Times'' review. Retrieved on 2011-11-08.
- "Strange Fruit". PBS. Retrieved August 23, 2011.
- Linder, Douglas O., US Supreme Court opinion in United States vs. Shipp, University of Missouri-Kansas City School of Law
- South Carolina Code of Laws section 16-3-220 Lynching in the second degree
- Guilty: Teens enter pleas in lynching case. The Gaffney Ledger. 2006-01-11. retrieved June 29, 2007.
Books and references
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). Encyclopædia Britannica (11th ed.). Cambridge University Press.
- Allen, James, Hilton Als, John Lewis, and Leon F. Litwack, Without Sanctuary: Lynching Photography in America (Twin Palms Publishers: 2000) ISBN 978-0-944092-69-9
- Brundage, William Fitzhugh, Lynching in the New South: Georgia and Virginia, 1880–1930. Urbana, Illinois: University of Illinois Press, 1993.
- Budiansky, Steven (2008). The Bloody Shirt: Terror After the Civil War. New York: Plume. ISBN 978-0-452-29016-7
- Curriden, Mark and Leroy Phillips, Contempt of Court: The Turn-of-the-Century Lynching That Launched a Hundred Years of Federalism, ISBN 978-0-385-72082-3
- Ginzburg, Ralph. 100 Years Of Lynching, Baltimore: Black Classic Press, 1962, 1988.
- Howard, Marilyn K.; with Rucker, Walter C., Upton, James N. (2007). Encyclopedia of American race riots. Westport, CT: Greenwood Press. ISBN 0-313-33301-7
- Hill, Karlos K. "Black Vigilantism: African American Lynch Mob Activity in the Mississippi and Arkansas Deltas, 1883-1923," Journal of African American History 95 no. 1 (Winter 2010): 26-43.
- Jackson, Kenneth T. (1992). The Ku Klux Klan in the City, 1915–1930. New York: Oxford University Press. ISBN 0-929587-82-0
- Lemann, Nicholas (2005). Redemption: The Last Battle of the Civil War. New York: Farrar, Strauss and Giroux. ISBN 0-374-24855-9
- McCaskill, Barbara; with Gebard, Caroline, ed. (2006). Post-Bellum, Pre-Harlem: African American Literature and Culture, 1877–1919. New York: New York University Press. ISBN 978-0-8147-3167-3
- Markovitz, Jonathan, Legacies of Lynching: Racial Violence and Memory, Minneapolis: University of Minnesota Press, 2004 ISBN 0-8166-3994-9.
- Morris, Edmund (2001). Theodore Rex. New York: Random House. ISBN 978-0-394-55509-6
- Newton, Michael and Judy Ann Newton, Racial and Religious Violence in America: A Chronology. N.Y.: Garland Publishing, Inc., 1991
- Pfeifer, Michael J. Rough Justice: Lynching and American Society, 1874–1947. Urbana: University of Illinois Press, 2004 ISBN 0-252-02917-8.
- Smith, Tom. The Crescent City Lynchings: The Murder of Chief Hennessy, the New Orleans "Mafia" Trials, and the Parish Prison Mob
- Thirty Years of Lynching in the United States, 1889–1918 New York City: Arno Press, 1969.
- Thompson, E.P. Customs in Common: Studies in Traditional Popular Culture. New York: The New Press, 1993.
- Tolnay, Stewart E., and Beck, E.M. A Festival of Violence: An Analysis of Southern Lynchings, 1882–1930, Urbana and Chicago: University of Illinois Press, 1992 ISBN 0-252-06413-5.
- Truman, Margaret. Harry S. Truman. New York: William Morrow and Co., 1973.
- Wade, Wyn Craig. The Fiery Cross: The Ku Klux Klan in America. New York: Simon and Schuster, 1987.
- Willis, John C. (2000). Forgotten Time: The Yazoo-Mississippi Delta after the Civil War. Charlottesville: University of Virginia Press. ISBN 0-374-24855-9
- Wright, George C. Racial Violence in Kentucky 1865–1940 by George C. Wright. Baton Rouge: Louisiana State University Press, 1990 ISBN 0-8071-2073-1.
- Wyatt-Brown, Bertram. Southern Honor: Ethics & Behavior in the Old South. New York: Oxford University Press, 1982.
- Zinn, Howard. Voices of a People's History of the United States. New York: Seven Stories Press, 2004 ISBN 1-58322-628-1.
Further reading
- Lynching calendar 1865–1965
- Origin of the word Lynch
- Lynchings in the State of Kansas
- Houghton Mifflin: The Reader's Companion to American History – Lynching (password protected)
- Lynching of John Heath
- "Lynch Law"—An American Community Enigma, Henry A. Rhodes
- Intimations of Citizenship: Repressions and Expressions of Equal Citizenship in the Era of Jim Crow, James W. Fox, Jr., Howard Law Journal, Volume 50, Issue 1, Fall 2006
- Lynching of Detective Carl Etherington in 1910
- American Lynching – web site for a documentary; links, bibliographical information, images
- The 1856 Committee of Vigilance – A treatment of the San Francisco vigilante movement, sympathetic to the vigilantes.
- "The Last Lynching in Athens."
- "A Civil War Lynching in Athens."
- "'Bloody Injuries:' Lynchings in Oconee County, 1905–1921."
- Mark Auslander "Holding on to Those Who Can't be Held": Reenacting a Lynching at Moore's Ford, Georgia" Southern Spaces, November 8, 2010.
- Young adult website
- Lynching in the United States (photos)
- Without Sanctuary: Lynching Photography in America
- A review of Without Sanctuary: Lynching Photography in America, James Allen et al.
- Lynching, Orange Texas
- Lynching of WW I veteran Manuel Cabeza in 1921
- 1868 Lynching of Steve Long and Moyer brothers Laramie City, Wyoming
- Lynching of Will Brown in Omaha Race Riot of 1919. (Graphic) | http://en.wikipedia.org/wiki/Lynching_in_the_United_States | 13 |
22 | Facilitator/Educator Guide: Coin Toss-up
If you toss a coin, there is a fifty-fifty chance it will land tails-side up. But what if you toss it five times? Can you predict how often you'll get one tails and four heads versus three tails and two heads? In this activity, use a coin and some graph paper to explore how the accuracy of predictions is influenced by sample size.
|Activity's uses:||Classroom demo or small group exploration|
|Area(s) of science:||Math|
|Prep time:||<10 minutes|
|Activity time:||30-45 minutes|
|Key terms:||Probability, statistics, independent trials, random process, histogram|
If we are lucky, the questions we have to answer on a daily basis have only one of two outcomes. Will the grocery store be open or will it be closed? Will my football team win or lose? How probable is it that either of these outcomes will happen, and how can we calculate the answer? The solution is to use the rules of probability, the branch of mathematics that deals with calculating the likelihood of any single event happening.
Flipping a coin is an easy way to demonstrate the concepts of probability. The probability of flipping heads is 0.5, or 50%, and the probability of flipping tails is also 50%. Each flip is also independent of the flip prior to it: the next flip of the coin doesn't depend on the result of the previous flip. But if you flip a coin a number of times, is it equally likely to come up all tails compared to a mix of heads and tails? The answer is no, and there are ways to calculate the probability of each combination. Coin flipping is still a random process - you cannot predict the outcome of any particular series in advance - but the possible combinations of heads and tails are not all equally likely.
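For reference, the chance of each heads/tails combination in a series of five flips can be worked out from the binomial formula, P(k heads) = C(5, k) x 0.5^5. The short Python sketch below is not part of the original activity (the function name is ours); it simply prints the theoretical values the histograms should approach:

```python
from math import comb

def heads_probability(n, k):
    """Probability of exactly k heads in n flips of a fair coin."""
    return comb(n, k) * 0.5 ** n

for k in range(5, -1, -1):
    print(f"{k} heads, {5 - k} tails: {heads_probability(5, k):.4f}")

# 5H/0T and 0H/5T each come out near 0.031; 4H/1T and 1H/4T near 0.156;
# 3H/2T and 2H/3T near 0.313 -- the bell shape the histograms should approach.
```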
Is it possible to predict accurately a combination of flips based on a prior series of flips? In this math activity, the students will investigate the probabilities associated with flipping coins. Each of the student groups will start by flipping a coin five times, noting the results, and repeating this process nine more times. Then they will count and plot the number of times every combination of heads and tails occurs. The students will then execute a series of five coin flips ten more times, for a total of 20 series of coin flips, and determine if they can accurately predict probability based on a small number of series.
This math activity can be used as a starting point for a variety of science and math discussions. Here are a few sample questions that can be used to start a discussion:
- What is the definition of mathematical probability?
- What are some examples where you made a decision based on probability?
- Who is Jacob Bernoulli and what is a Bernoulli trial?
- A few coins for each student group
- Data tables from the student activity directions for each student group to record their data
- Graph paper (1 package)
What to Do
Prepare Ahead (<10 minutes)
- Print a copy of the student activity directions for each demo or small group. Each group will be able to record their data in the tables provided in the student directions.
- Allot each group a few coins, the data tables, two sheets of graph paper, and a few pencils
Science Activity (30-45 minutes)
- Each student group should have a few coins, a notebook, and a few pencils. Have the students practice flipping several coins for a few minutes, then ask them to choose the one that they think works best. Once the group chooses a coin, they should stick to that coin for the duration of the activity.
|Table 1: Coin flip data (one line per series of five flips)|
- Have students in each group flip a coin five times and write down the results of each flip in the first line of Table 1. For example: heads, tails, tails, heads, tails could be noted as HTTHT.
- Each group should repeat step 2 nine more times, making sure that all of the results for each series of flips are recorded in Table 1 on the corresponding line.
- Once all the groups have finished ten series of flips, count the number of times each of six possible combinations of heads and tails occurs. Note that HTTHT and TTTHH would both be counted as two heads, three tails. Record the data in Table 2 under column 1 ("Count for Series #1-10").
|Combination|Count for Series #1-10|Count for Series #11-20|Count for Series #1-20 (total)|
|5 Heads, 0 Tails| | | |
|4 Heads, 1 Tails| | | |
|3 Heads, 2 Tails| | | |
|2 Heads, 3 Tails| | | |
|1 Heads, 4 Tails| | | |
|0 Heads, 5 Tails| | | |
|Table 2: Count data for each combination of heads and tails|
- Now have each group graph the count data for each combination (the result is called a histogram) using a sheet of graph paper.
- Repeat steps 2 and 3 another ten times for a second series of flips (#11 through #20). Record the data in the second column of results on Table 1.
- The groups should then count the number of times each combination occurs in the #11 through #20 coin flip series and add that information to Table 2 under the second column. Then, add the first and second columns in Table 2 and place the sum in the third column of Table 2. This is the count data for all 20 series.
- Each group should graph the data for the 20 series on a second histogram using another sheet of graph paper.
- Compare the two histograms. Is there a difference? What does this tell you about making conclusions based on a small number of series?
Each group should have a plot approaching a binomial distribution, or bell-shaped curve, for the 20 series of flips. But the plot for the series of ten may not have the typical bell-shaped feature. The difference in the histograms illustrates the concept that it is not accurate to base predictions on a small amount of data.
For Further Exploration
This science activity can be expanded or modified in a number of ways. Here are a few options:
- Continue for 30 series and then plot the data. What does the plot look like and how does it compare to the plot for the 20 series?
- Redo the activity with seven coin flips in each series and a total of 20 series. Is the resulting histogram similar to the one with 20 series of five flips per series?
- Write a computer program to generate the data rather than flipping a real coin.
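The last suggestion above can be sketched in a few lines of Python. This is one possible version, with the number of series as a parameter; the function and variable names are ours, not Science Buddies':

```python
import random
from collections import Counter

def run_series(num_series, flips_per_series=5):
    """Count how often each heads/tails combination appears."""
    counts = Counter()
    for _ in range(num_series):
        heads = sum(random.choice("HT") == "H" for _ in range(flips_per_series))
        counts[heads] += 1
    return counts

for n in (10, 20, 1000):
    counts = run_series(n)
    print(f"\n{n} series of 5 flips:")
    for heads in range(5, -1, -1):
        print(f"  {heads} heads, {5 - heads} tails: {counts.get(heads, 0)}")

# With 10 or 20 series the histogram is ragged; with 1,000 it settles
# toward the bell-shaped binomial pattern described in the activity.
```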
Downloads and Links
Michelle Maranowski, PhD, Science Buddies | http://www.sciencebuddies.org/science-fair-projects/Classroom_Activity_Teacher_CoinTossup.shtml | 13 |
14 | Fractions for Firefighter Exam Study Guide
Problems involving fractions may be straightforward calculation questions, or they may be word problems. Typically, they ask you to add, subtract, multiply, divide, or compare fractions.
Working with Fractions
A fraction is a part of something.
Example: Let's say that a pizza was cut into 8 equal slices and you ate 3 of them. The fraction 3/8 tells you what part of the pizza you ate. The pizza below shows this: 3 of the 8 pieces (the ones you ate) are shaded.
Three Kinds of Fractions
|Proper fraction:||The top number is less than the bottom number. The value of a proper fraction is less than 1.|
|Improper fraction:||The top number is greater than or equal to the bottom number. The value of an improper fraction is 1 or more.|
|Mixed number:||A fraction is written to the right of a whole number. The value of a mixed number is more than 1: it is the sum of the whole number plus the fraction.|
Changing Improper Fractions into Mixed or Whole Numbers
It's easier to add and subtract fractions that are mixed numbers rather than improper fractions. To change an improper fraction, say 13/2, into a mixed number, follow these steps:
- Divide the bottom number (2) into the top number (13) to get the whole number portion (6) of the mixed number:
- Write the remainder of the division (1) over the old bottom number (2):
- Check: Change the mixed number back into an improper fraction (see the following steps).
Changing Mixed Numbers into Improper Fractions
It's easier to multiply and divide fractions when you are working with improper fractions rather than mixed numbers. To change a mixed number, say 2 3/4, into an improper fraction, follow these steps:
|1. Multiply the whole number (2) by the bottom number (4).||2 × 4 = 8|
|2. Add the result (8) to the top number (3).||8 + 3 = 11|
|3. Put the total (11) over the bottom number (4).|
|4. Check: Reverse the process by changing the improper fraction into a mixed number. If you get back the number you started with, your answer is right.|
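The two conversions described above are easy to check with a short program. Here is a minimal Python sketch (the function names are ours, not part of the study guide):

```python
def improper_to_mixed(top, bottom):
    """13/2 -> (6, 1, 2), read as the mixed number 6 1/2."""
    whole, remainder = divmod(top, bottom)
    return whole, remainder, bottom

def mixed_to_improper(whole, top, bottom):
    """2 3/4 -> (11, 4), read as the improper fraction 11/4."""
    return whole * bottom + top, bottom

print(improper_to_mixed(13, 2))    # (6, 1, 2)
print(mixed_to_improper(2, 3, 4))  # (11, 4)
```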
| http://www.education.com/reference/article/fraction-review/ | 13
12 | Basic Machines 1: Lever, Pulley, Gear, and Cam
|There are a number of mechanisms common to mechanical systems of all sorts that are useful to know about when you want to control motion of any sort. While this page is by no means comprehensive, it will help get you started on understanding a few basics of how to move things.
Before we go much further, there is one important thing you need to know. All of these machines are used to do work of some kind. Work, in this case, is defined as a force applied over a certain distance. Machines are designed to allow you to vary the ratio of force to distance in order to get the job done. Read on, this will make more sense as you see the examples.
The lever is about the simplest of machines, one we're used to dealing with every day, from can openers to car jacks and millions of other everyday devices. A lever is used to change the direction of a force, and to trade force for distance. Every lever has a fulcrum point, about which the lever pivots. The two arms of a lever need not be equal. A lever lets you move a heavy weight with a small force, by taking advantage of this inequality. By applying half the force on the longer side of the lever, but moving it twice as far, we move the weight on the short side of the lever. In the lever below, we can move a 1 kilogram weight half a meter by putting a half kilogram weight on the long arm of the lever, and letting it push down one meter:
The ratio of the long arm to the short arm tells us the mechanical advantage of the lever, which is the ratio of the force delivered to the load to the force we apply (ideally, the work we put into the system equals the work done on the load). If the long arm is twice as long as the short arm, the ratio is 2:1, and the mechanical advantage is 2. So we could move the weight with half the force, if we're willing to move our end twice as far. This comes in handy when you've got a motor that can apply only so much force.
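To make the trade-off concrete, here is a small Python sketch of the lever arithmetic described above; the function and parameter names are ours, and the load is expressed as a force in newtons:

```python
def lever(load_newtons, long_arm, short_arm, load_travel):
    """Effort needed on the long arm and how far it must move."""
    advantage = long_arm / short_arm           # mechanical advantage
    effort = load_newtons / advantage          # less force is needed...
    effort_travel = load_travel * advantage    # ...but it moves farther
    return effort, effort_travel

# A 1 kg weight (about 9.8 N) on a 2:1 lever, lifted half a meter:
print(lever(9.8, long_arm=1.0, short_arm=0.5, load_travel=0.5))
# -> (4.9, 1.0): half the force, applied over twice the distance
```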
Pulleys are series of moving wheels and ropes, chains, or wires used to gain mechanical advantage. In the pulley system below, the mechanical advantage is 2. Note that we have to pull our rope up two meters to move the weight 1 meter. What we lost in distance, we gained in force, because it takes only half the force of gravity acting on the weight to pull it up.
The more pulleys we add, the greater the mechanical advantage we get. At the same time, we increase the distance we have to pull in order to move our weight the same amount. In general, the mechanical advantage is equal to the number of ropes needed to lift the load. Below are a few pulley arrangements and their mechanical advantages:
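The same arithmetic applies to pulleys, with the mechanical advantage equal to the number of supporting ropes. A minimal sketch along the same lines:

```python
def pulley(load_newtons, num_ropes, lift_height):
    """Effort required and length of rope to pull for a given lift."""
    return load_newtons / num_ropes, lift_height * num_ropes

# A 100 N load on a 2-rope pulley system, lifted 1 meter:
print(pulley(100, 2, 1.0))  # -> (50.0, 2.0)
```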
|These notes are heavily indebted to a number of sources:
Physics : Principles with Applications, Douglas C. Giancoli, ©1990-1998, Prentice Hall
Flying Pig's mechanics section
Animatronics: A Guide to Animated Holiday Displays, Edwin Wise, ©2000, Prompt Publications
And others I am doubtless forgetting.
Gears are used to convert rotational motion, both by changing direction and by trading speed for torque. The gear ratio corresponds to the mechanical advantage of the gear, and is determined by measuring the ratio of the distances from the center of the gears to the point of contact between them. In the gear system below, the smaller gear is half the size of the larger, so the gear ratio is 2:1. The larger gear will move half the speed of the smaller, but will provide twice the torque.
Note also in the gears above that the direction is reversed. So if the smaller gear were attached to our motor, the larger gear would move at half the speed of the motor and provide twice the torque in the opposite direction of rotation.
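A short sketch of the speed/torque trade-off for a simple gear pair follows; tooth counts (or radii) set the ratio, and the sign flip marks the reversed direction of rotation. The function name is ours:

```python
def gear_output(input_rpm, input_torque, driver_teeth, driven_teeth):
    """Speed and torque at the driven gear; the sign flip marks reversal."""
    ratio = driven_teeth / driver_teeth     # e.g. 2.0 for a 2:1 pair
    output_rpm = -input_rpm / ratio         # driven gear turns slower, reversed
    output_torque = input_torque * ratio    # but delivers more torque
    return output_rpm, output_torque

# A motor at 100 rpm and 1 N-m driving a gear with twice as many teeth:
print(gear_output(100, 1.0, driver_teeth=10, driven_teeth=20))
# -> (-50.0, 2.0): half the speed, twice the torque, opposite rotation
```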
Certain gears, such as bevel gears or helical gears, can be used to change the axis of motion as well. Bevel gears have their teeth mounted at an angle to the axis of the gear, so that the mating gear does not have to be mounted at the same angle. Helical gears have their teeth cut at an angle to the face of the gear, to provide greater mating efficiency.
Worm gear mechanisms combine a helical gear and a screw gear. They generally have very high gear ratios, and change the axis of motion. Worm gears look like this:
Rack and pinion gears are used to convert rotary motion to linear motion. In a rack and pinion, teeth are mounted along a linear track (the rack) which moves by being run against a normal gear (the pinion), as follows:
|For more on gears, see HowStuffWorks' gear notes, or Boston Gear's Gearology Guidebook.
Jen Lewin also has an excellent set of notes on gears.
The cam is a wheel mounted eccentrically (off-center) on a shaft. It seems so simple, but serves a number of purposes. The simplest use of a cam is as a vibrator. By spinning the motor, the weight of the cam causes the motor's axis to shift. If the motor is attached to some solid surface, say, a pager or cell phone body, the surface vibrates.
Cams can also be used to create oscillating motion from rotary motion. By placing a shaft against the edge of a cam, the shaft will move up and down as the cam rotates eccentrically on the motor. The image below shows a cam in motion, rotating on an axis. Notice how, as the cam goes through successive rotations, the shaft pushing against it follows a curve:
The curve is roughly a sine-wave oscillation. The cam has converted the rotary motion of the motor into a sinusoidal oscillating motion.
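For a circular cam mounted off-center by a distance e, a flat follower resting on it rises and falls as r + e*cos(theta), which is exactly the sine-like oscillation described above. A rough sketch, assuming that simple cam geometry:

```python
import math

def follower_height(theta, cam_radius, offset):
    """Height of a flat follower riding an eccentric circular cam.

    theta is the cam rotation angle in radians; the result swings between
    cam_radius - offset and cam_radius + offset, a sine-like oscillation.
    """
    return cam_radius + offset * math.cos(theta)

# One revolution of a 2 cm cam mounted 0.5 cm off-center:
for degrees in range(0, 361, 45):
    h = follower_height(math.radians(degrees), 2.0, 0.5)
    print(f"{degrees:3d} deg -> follower at {h:.2f} cm")
```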
For more irregular motion, cams with irregular shapes can be used. For example, a cam like the following would produce smooth motion with two sudden jumps:
Another useful form of cam is the camshaft, in which a rotating shaft has bends in it to produce several cam-like protrusions all on the same axis. Camshafts are used in car engines to move the valves. A typical camshaft would look like this (this one's a little irregular, as all the cams are different offsets. It wouldn't work well in a car):
Camshafts are often used in mechanical automata, where one crank may turn several dancing figurines at a time. Check out some of the models from Cabaret Mechanical Theatre for examples.
|Flying Pig has some cams in motion on their site.|
|Basic machines 2: Joints and linkages| | http://www.tigoe.net/pcomp/machines.shtml | 13 |
26 | This is the fourth of six chapters with attention directed to the physical geography of Mars and its two satellites - the scars from Astra's fragmentation - the evidence. This topic encompasses chapters 1 through 6.
Nothing would set off a cycle of magnetic surges and an orchestra of volcanic activity more quickly than would a close planetary flyby. A sudden, invading gravity such as the Earth's would disturb and disrupt the molecules of the magma within Mars. How much would the magma be disturbed and relocated? For rotating planets, tidal forces respond according to the inverse cube law: tides vary according to the inverse of the distance cubed.
For planets like Mars and the Earth, this means that as the distance between the two is halved, the tidal surges therein, created by each in the other planet, are increased eight fold.
If the distance is halved twice, tidal volumes proportionately increase 64-fold. If it is halved three times, as from 240,000 miles to 120,000 to 60,000 to 30,000, tidal volumes respond by increasing (8 x 8 x 8) 512-fold. From 240,000 miles, the distance of the Moon, down to 15,000 miles, tidal forces would have increased 4,096-fold (512 x 8). These conditions would produce massive magnetic surges.
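The 8-, 64-, 512-, and 4,096-fold figures above follow directly from the inverse-cube relation. Here is a brief check of that arithmetic (a sketch only, using the Moon's distance as the reference point, as the text does):

```python
def tidal_amplification(reference_miles, new_miles):
    """Relative tidal force when the separation shrinks (inverse cube law)."""
    return (reference_miles / new_miles) ** 3

for d in (120_000, 60_000, 30_000, 15_000):
    print(f"{d:>7} miles: {tidal_amplification(240_000, d):,.0f}-fold")
# -> 8, 64, 512, and 4,096 relative to the 240,000-mile baseline
```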
The Location Of The Volcanoes Of Mars
The momentary subject is magnetic tides within Mars, not within the Earth. Mars has an estimated crustal thickness of about 20 miles, twice the thickness of the Earth. Also its radius of curvature is greater, making its crust more rigid.
The crust of Mars has very little flex, or elasticity in contrast to the Earth's crust. Therefore, for Mars, with no crustal elasticity, its primary mechanism was to relieve internal distress through volcanism. This, plus the probability that Mars suffered several hundred Earth and Venus flybys, is why the volcanoes of Mars are so gigantic.
By the consensus of geologists, there have been 170 paleomagnetic polarity reversals recorded in Earth lava flows. A foundation will be laid that each flyby produced a geomagnetic field polarity reversal. Therefore there is some evidence that Mars-Earth flybys repeated, and recurred perhaps 150 to 200 times. There is reason to assume that Mars-Venus flybys were similarly numerous. Therefore there might have been 350 planetary flybys involving Mars all together.
On Mars, its gigantic volcanoes are all located in its Serene Hemisphere, just like the two bulges, the Tharsis Bulge, the Elysium Bulge and gigantic rift system, the Valles Marineris. These are all indications that the Martian crust is somewhat thinner in its Serene Hemisphere. On Mars, 90% of the volcanoes are located on the huge Tharsis Bulge, and 10% are located on the Elysium Bulge.
Basaltic traps occur when there are one or two vast bleedings of lava on a planet's crust. Basaltic traps are generally flat, and like lakes, sometimes are very broad.
On the other hand, volcanic cones occur when there have been many dozens, perhaps hundreds of successive lava outflows out of one caldera, or volcanic crater. They build up by repeated eruptions. Volcanic cones typically have gradients of 5%, 6% or even 7%.
In the Serene Hemisphere of Mars, gigantic volcano cones occur, whereas only comparatively small ones are found on the Earth. The size of the Martian volcanoes is a reflection of how many and how close its ancient flybys of the Earth and Venus were.
The Sizes Of The Volcanoes On Mars
Sizes Of The Calderas (Ejecta Craters)
In the Serene Hemisphere of Mars are found the loftiest, broadest dozen volcanoes in this Solar System. They are few in number, and that is a reflection of how thick the crust of Mars is. But their gigantic sizes are a reflection of how many spasms of catastrophism the red planet experienced.
Vast volcanism is a reflection on which planet experienced the most internal distress, and which planet suffered the greatest number of crustal/magma squeezes. During those repeated squeezes, magma gushed out of its vents or calderas in vigorous volumes of white hot gushing lava.
BASALTIC TRAPS. The Columbia Plateau of Washington-Oregon-Idaho is such a basaltic plateau, covering 150,000 sq. miles of surface, including minor parts of Southern British Columbia and Northern Utah in addition. The vents were many and the bleeding was simultaneous. The basaltic Deccan Plateau of India is an even bigger example, and its genesis will be discussed in Volume III. Vents such as this are found in Arizona also. A basaltic trap, formed by one vast bleeding of magma, is not to be confused with a volcanic cone.
VOLCANIC CONES. In contrast, single, individual volcanoes are built up slowly and regularly by numerous layers from repeated eruptions of lava and ejecta from the same caldera. Volcanoes require multiple dozens, if not hundreds of eruptions and outflows of magma, to gradually build up their cones.
Two or three eruptions do not produce sufficient ejecta for a volcanic cone. Two or three dozen eruptions might create a small volcanic cone. A hundred eruptions from the same crater will build a volcano on the Earth; it was the same on Mars. As was mentioned earlier, volcanoes have steep gradients, with slopes having angles typically ranging around 6%. On the other hand, basaltic plateaus are flat, like lakes.
Volcanoes have deep vents, where lava flows out at temperatures ranging between 2,000 and 3,000° F. In time, the magma cools, hardens and turns black. Volcanoes build up slowly, by repeated spasms, and by repeated upthrust flows through unplugged vents. When a vent becomes plugged with cold, solidified lava, it is called a “pipe.” Sometimes volcanic pipes contain a sprinkling of diamonds.
CALDERAS (BLOW-HOLES). On the Earth, calderas, craters in the cone of a volcano, are usually but a fraction of a mile or two in diameter, though some have diameters up to three miles. Crater diameters are one indication of how much ejecta was expelled through its vent (or vents). Nothing on the Earth's surface compares to the caldera size of Martian volcanoes.
The caldera of Olympus Mons is measured at 50 miles in diameter. The caldera of another huge giant, Arsia Mons, is even wider at 65 miles. The caldera diameter of Pavonis Mons is estimated at 20 miles. That of Ascraeus Mons is estimated at 24 miles. The caldera of Ulysses Patera is about 20 miles in diameter, as is the caldera of Biblis Patera. Albor Tholus has a crater diameter estimated at 15 miles. Tharsis Tholus is estimated at 12 miles wide. By Earth standards, all of these calderas are immense vents, indications of vast volumes of lava outflows.
One could stuff all of metropolitan Los Angeles into the caldera of Arsia Mons. New York City and most of its suburbs could be stuffed into the caldera of Olympus Mons. Chicago, including all of Lake County and Du Page County also could be stuffed into the caldera of Olympus Mons. Two Philadelphia’s or ten San Francisco’s could be stuffed into the caldera of Pavonis Mons.
By contrast, on the Earth, the Mauna Loa caldera on Hawaii averages only three miles in diameter, and this is the Earth's largest volcano, rising from 20,000 feet below the surface of the Pacific Ocean to 13,700 feet above sea level.
The caldera of the famed Kilimanjaro is 1.5 miles in diameter. This volcano is only 13,000 feet high, rising from a base on a 6,000-foot plateau to 19,324 feet. The caldera of the famed Ararat is one mile in diameter. Ararat rises only 12,000 feet above the highlands of Eastern Turkey.
Compared to the volcanoes of Mars, volcanoes such as Kilimanjaro, Ararat, Etna, Popocatapetl, Shasta and Rainier are just five widely scattered pimples on different continents. The vast volcanoes of Mars, on the other hand, are much larger and are clustered into only two regions.
The Estimated Elevations Of The Volcanoes Of Mars
ELEVATIONS ON THE EARTH. Huge volcanoes on the Earth typically rise 6,000 to 10,000 feet above the surrounding terrain. Examples are the fore mentioned Kilimanjaro and Ararat. Other well-known examples include Rainier, Popocatepetl, Etna, Shasta, Hood, Baker and Cotopaxi. Mauna Loa-Mauna Kea on Hawaii, rising some 30,000 feet from an oceanic floor, is not typical.
ON MARS. On Mars, the highest volcano is Olympus Mons. Its elevation is estimated at 82,500 feet, or 15.6 miles. The height of Olympus Mons is measured from the surrounding plain; there is no such thing as mean sea level on Mars.
Ascraeus Mons is the second highest volcano in our Solar System, with a crater rim about 50,000 feet above the surrounding plain. The slopes of Martian volcanoes have grades of 5% to 7%. Arsia Mons is estimated at 40,000 feet, as is Pavonis Mons. They are the third and fourth highest and widest in the Solar System.
The Volumes Of The Vast Volcanoes Of Mars
CONIC BASES. Olympus Mons has a conic base with a diameter of 325 miles, and an area of 101,000 sq. miles. Its volcano base area compares to the area of such states as Arizona, or Colorado or New Mexico. On the basis of conic volumes, Olympus Mons contains 525,000 cubic miles of ejecta materials, largely lava.
Ascraeus Mons has a volcanic base of 50,000 sq. miles, more area than the state of Ohio. Its area also compares well with the areas of Pennsylvania or Bulgaria. Ascraeus Mons, at 50,000 ft., has a volume of nearly 150,000 cu. miles.
The cone of Arsia Mons has an area of some 29,000 sq. miles, and is 40,000 ft. high. Its volcanic base area is about equal to that of South Carolina. Arsia's volume is estimated at 73,000 cu. miles.
Pavonis Mons has a volcanic cone base area of 38,000 sq. miles, much like the areas of Hungary or Indiana. At 40,000 ft. high, its volume is 76,000 cu. mi.
The volume of these four giants totals some 825,000 cu. mi. If this volume of basalt were used to pave the state of Illinois, from Chicago and the Wisconsin line down to Cairo, the entire state of Illinois would be paved 14.5 miles high.
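The cubic-mile figures quoted above are consistent with the standard cone-volume formula, V = (1/3) x base area x height. A short check, with distances in miles (the inputs are the chapter's own estimates):

```python
def cone_volume(base_area_sq_mi, height_mi):
    """Volume of a cone: one third of the base area times the height."""
    return base_area_sq_mi * height_mi / 3.0

# Olympus Mons: ~101,000 sq. mi. base, ~15.6 mi. (82,500 ft) high
print(cone_volume(101_000, 15.6))            # ~525,000 cubic miles
# Ascraeus Mons: ~50,000 sq. mi. base, 50,000 ft high
print(cone_volume(50_000, 50_000 / 5_280))   # ~158,000 cu. mi., near the
                                             # chapter's "nearly 150,000"
```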
Figure 7 - Olympus Mons
Figure 7 is a photo of Olympus Mons, in the “Serene” Hemisphere of Mars. It is the serene side of Mars only for craters, but not for rifting or for volcanism. For comparisons in both height and breadth, Figure 8 compares the sizes of Olympus Mons on Mars, Mt. Kilimanjaro in Tanzania, and Mt. Ararat in Turkey.
Figure 8 - A Comparison of Olympus Mons
to Volcanic Mts. Ararat and Kilimanjaro
By contrast, the original chapter includes a table ("Best Known Volcanoes on the Earth") comparing the volumes of well-known terrestrial volcanoes; the table itself is not reproduced here.
These four largest volcanoes on Mars spewed out gases, rocks and lava with a total volume of well over 800,000 cu. mi. This volume of basalt would pave all 48 continental states evenly to a depth of 1,350 feet.
Elysium Mons, in the other bulge on Mars, covers as much area as Switzerland or Belgium. Added to this total are the lesser lava outflows from Alba Patera, Albor Tholus, Biblis Patera, Hecates Tholus, Tharsis Tholus, Ulysses Patera, Uranius Patera, and Uranius Tholus. Thus the total volume of lava flows on Mars exceeds 900,000 cubic miles from its various volcanoes. The surfaces of the Earth and Venus have nothing to compare to this scope of volcanism.
The Viewing Of The Eruptions On Mars During Flybys
Ancient Greeks reported sighting, and perhaps also timing the orbits of, tiny Deimos and Phobos. The three diameters of the fragment Phobos average 14.7 miles. The three diameters of even littler Deimos average 7.90 miles. Tiny or otherwise, the ancient Greeks did report seeing them, and apparently they (or more likely their Sumerian predecessors) named them. Whether or not someone in ancient times observed and sketched their orbits is a topic for Chapter 11.
Those furious, fiery eruptions on Mars, glowing in the dark, were perhaps a mile or two wide and were scores of miles long; some were over 100 miles long. The flowing rivers of lava on Mars were longer if not wider than Deimos and Phobos. And they started out white hot, and changed to yellow, orange, red and reddish black. Deimos and Phobos are low in albedo (reflectivity). Those rivers of lava probably glowed brighter than did the two satellites of Mars.
The caldera of Olympus Mons is 50 miles in diameter, and almost 2,000 sq. miles in area. The diameters of Deimos and Phobos are roughly 10 and 15 miles, so the caldera's width is more than three times the diameter of Phobos and five times that of Deimos. As the rivers of lava were longer than Deimos and Phobos, the calderas of the big four on Mars were wider also. If Deimos and Phobos could be seen by Greek eyes, so could they.
On Olympus Mons, those glowing rivers of lava began at the caldera, the crater. Rivers of hot, incandescent lava flowed for up to 180 miles apparently in all directions down its sides and onto the plain beyond. If they were seen by ancient Greek eyes, were they also reported?
Hesiod On The Appearance Of Ares In 701 B.C.E.
Hesiod saw the final flyby in 701 B.C.E. from Greece. He saw it in its celestial splendor, and in its frightful context also. He penned the following:
There grew a hundred snake heads, those of a dreaded dragon,
and the heads licked with dark tongues, and from the eyes on
the inhuman heads fire glittered from under the eyelids;
from all his heads fire flared from his eyes' glancing; [n1]
On manslaughtering Ares, as he came onward, keeping his dread eyes upon him,
Like a lion that has come on a victim, and with his strong claws, violently tears up the hide. [n2]
In Theogony, another work of Hesiod, he endeavored to put down the cosmic history of Mars scenes for perhaps 1,500 years. The following passage is about Thyphoeus (Typhoon), an archetype of Mars. This is the origin of our word “typhoon.”
The hands and arms of him are mighty, and have work in them,
and the feet of the powerful god were tireless, and up from his shoulders,
there grew a hundred snake heads, those of a dreaded dragon,
and the heads licked with dark tongues, and from the eyes on
the inhuman heads fire glittered from under the eyelids;
from all his heads fire flared from his eyes' glancing;
and inside each one of these horrible heads there were voices
that threw out every sort of horrible sound, … [n3]
At least seven times in Theogony, Hesiod referred back to the earliest times in Greek collective memory, which were the times of one “Iapetus”. Iapetus has long been identified as the Hebrew Japheth of Genesis 8, a grandson of Noah. One of Japheth's sons was Javan, from which is derived “Ionian.” And one of Javan's sons was Elishah, or Hellas. Greeks prefer to call themselves “Hellenes”, after Hellas, a great grandson of Noah. “Hellas” is also the name of the largest asteroid crater in the Solar System.
Other families related to the Hellenes include Tarshish (Trojans), Kittim (Cypriots) and Dodanim (Dodecanese). Clearly, Hesiod assessed a historical era for the Greeks that was parallel to that of the Hebrews as occurs in early Genesis.
Job On The Celestial Scenery During A Mars Flyby
Job lived during a serious Mars flyby era; he, too, probably saw the rivers of lava flow from the calderas of Olympus Mons, Arsia, Ascraeus and/or Pavonis Mons during the devastating flyby of his day, so damaging to his land of Uz. The Book of Job describes an October-case Mars flyby, during which Mars was the celestial dragon of the cosmos, going by the nickname of Leviathan, the serpent of the cosmos. It was the October Mars flyby of the 18th century B.C.E.
They are joined one to another, they stick together, that they cannot be sundered.
By his sneezings a light doth shine, and his eyes are like the eyelids of the morning.
Out of his mouth go burning lamps, and sparks of fire leap out.
Out of his nostrils goeth smoke, as out of a seething pot or cauldron.
His breath kindleth coals, and a flame goeth out of his mouth. Job 41:17-21
Apparently Job saw the volcanoes of Mars "awaken," like eyelids in the morning. When viewed from the Earth, through the Earth's ashy, hazy, smoky atmosphere, those flows of hot lava may have taken on a reddish hue, due to the vast volume of contaminants beclouding our planet's atmosphere. People who have fought forest fires have observed at noontime a blood-red Sun and/or Moon.
Average Volcanic Flow Volumes Olympus Mons
Out of the caldera of Olympus Mons, over its active life, bled an estimated 525,000 cubic miles of lava and ejecta. Geologists estimate 170 paleomagnetic polarity reversals - their estimate may or may not be correct.
Geologists, gradualists in principle, despite all evidence to the contrary, do not realize that it was Mars flybys that caused paleomagnetic polarity reversals.
Even more important, they have failed to realize the collective Mars flybys were the ancient generator (dynamo) of the Earth's geomagnetic field, a field now dead. As a result of no more flybys, the Earth's geomagnetic field is dead with a residual decay rate of a half life of 1,350 years. A foundation for this knowledge will be laid in Volume IV.
Perhaps there were 350 Mars flybys all together, some very close and some not as close, of the Earth and Venus. A “close” flyby is considered to have been under 75,000 miles, planet center to center. To be specific, 20% of the ancient Mars flybys were “mega-catastrophes”, but the majority, 80%, were in the 35,000 to 65,000 mile range. The position and resulting influence of Saturn, for instance, could readjust how close Mars came from flyby to flyby. A foundation for this conclusion will be laid in Volume III, entitled The Flood of Noah.
This estimate indicates that there were 350 flybys. Divide 525,000 cubic miles of lava by 350. Thus, during an “average” flyby of Mars past either Venus or the Earth, 1,500 CUBIC MILES OF LAVA flowed out of the caldera of Olympus Mons alone. Flows from Ascraeus Mons, Arsia Mons, Pavonis Mons, etc. were in addition. What Job saw was an average flyby; what Hesiod saw was a mega-catastrophe.
The streams of lava on Olympus Mons may have gushed 150 miles before they began to cool down and ceased to flow during those frigid Mars nights, with normal temperatures below -150° F. Beginning as white hot, cooling while flowing, they glowed successively from white to yellow, orange, red, dark red and black.
Consider what kind of a river of lava would be made by a 1,500 cubic mile flow, or even 15 cubic mile flow. Consider what an impressive visual a reproduction of a flyby scene would make. If Phobos and Deimos were seen by ancient Greeks and their predecessors, it is probable that those streams of glowing, reddish, orange lava also were seen. Seen also were the eruptions preceding the lava flows. They were seen flowing down the sides of Olympus Mons, Arsia, Ascraeus, Pavonis and other Martian volcanoes. Lava must have flowed down the sides of Olympus Mons at velocities of 20, 30 and even 35 mph.
What does the Book of Job mean when it describes Leviathan: "Out of his nostrils goeth smoke, as out of a seething pot or caldron. His breath kindleth coals, and a flame goeth out of his mouth" (Job 41:21), and "out of his mouth go burning lamps, and sparks of fire leap out"? Translators and Bible readers have puzzled over this for a couple of millennia.
The distance between Mars and the Earth for the Final Flyby that Hesiod reported was much closer than that of the average flyby. In Chapters 9 and 10, an estimate is made of the Final Flyby: 27,000 miles from the Earth's center, and 21,000 miles from closest surface to closest surface. There, a method is developed for estimating the distances of the final flybys by Mars of both Venus and the Earth.
Volcanic outflows comprise a measure of how much internal distress Mars experienced from flybys of the Earth and Venus. Paleomagnetic polarity reversals and the 108-year cyclicism suggest how intensely and how often Mars was squeezed. The astronomical history of Mars has been one of repeated, massive cosmic squeezes.
Perhaps the best comparison is the repeated firing of blast furnaces, daily producing fiery iron ingots. The difference between our blast furnaces, pouring fiery iron, and these blast furnaces of Mars is that on Mars the magma was squeezed out of huge vents into huge rivers flowing up to 150 miles long across its cold surface. At blast furnaces, liquid iron is poured into small heat-resistant molds, a little at a time.
The vast, vicious, violent, voracious volcanic flows from Mars (Olympus Mons, etc.) are testimony as to how energetic and also how numerous those planet skirmishes were. The dating of the literature of Abraham and Job is testimony as to how repeated they were. A close analysis of chronological dates and flyby scenes can yield a 108-year cyclicism. Testimony by Hesiod and Isaiah indicates how remote they were in time: billions of minutes ago, 14.2 billion of them to be more or less precise.
Earlier, a foundation was laid to understand that the crust of Mars is somewhat thicker than the Earth's crust, being colder and being made of lighter materials. It is more rigid, less elastic. During Mars flybys, the Earth's crust could absorb some of its internal distress merely by elastic flexing of its crust (earthquakes). But the primary mechanism of relief of internal distress was different for Mars; it was volcanism.
The sizes of the volcanoes on Mars indicate how dreadful was the internal distress which Mars suffered in the Catastrophic Age. Job, Hesiod and other ancients merely caught glimpses of Mar’s volcanism as it was erupting.
Story 8 of the catastrophic skyscraper is that during the Mars-Earth Wars, SUDDEN, MASSIVE INTERNAL DISTRESS WITHIN MARS WAS RELIEVED LARGELY THROUGH VOLCANISM. Olympus Mons was the primary vent. Ascraeus Mons, Arsia Mons and Pavonis Mons also were contributing vents, as were a dozen lesser volcanoes. Once again to paraphrase Guthrie, the volcanoes of Mars are definitely high, wide and handsome. In comparison to them, Kilimanjaro and Popocatepetl, Etna and Ararat, are next to nothing.
There are numerous aspects to scenes painted by the ancients who saw the flybys and left records of their views. Words were used describing aspects of planetary catastrophism. For instance, there are the eyes of Ares, red and bloodshot, flowing and glowing in the dark, spewing out clouds of smoke and streams of fiery fluid. Those red “eyes,” glowing in the dark, probably were the calderas of Olympus, Arsia, Ascraeus, Pavonis, etc.
A review of Greek cosmo-mythology provides the one-eyed Cyclops, the evil Medusa, the speedy Perseus, the ugly Gorgons, the devastating Typhon, the swift Pegasus, the bloody Chrysaor, etc. All were among the Greek archetypes of Mars. The Greeks had many nicknames for Ares for its various flybys.
The genesis of the massive volcanoes of Mars were planetary flybys of the Earth and Venus. These ordeals of Mars were repeating, and as shall be demonstrated in Volume III, were cyclic in 108-year cycles. It was the long series of squeeze plays put on Mars by its two neighbors, the Earth and Venus.
Story 9 is that, just like Deimos and Phobos, while these Martian volcanoes were near the Earth, erupting, THOSE ERUPTIONS WERE VISIBLE TO THE ANCIENTS. They were described, albeit in horrible, ugly terms, in the literatures of the ancients. Those scenes they described were not very pretty, but they were painted fairly accurately - often by eye witnesses.
With story 9, the reader is now 37% of the way to the penthouse of planetary | http://www.creationism.org/patten/PattenMarsEarthWars/PattenMEW04.htm | 13 |
31 | The energy sector is the biggest contributor to man-made climate change. Energy use is responsible for about three-quarters of mankind's carbon dioxide (CO2) emissions, one-fifth of our methane (CH4), and a significant quantity of our nitrous oxide (N2O). It also produces nitrogen oxides (NOx), hydrocarbons (HCs), and carbon monoxide (CO), which, though not greenhouse gases (GHGs) themselves, influence chemical cycles in the atmosphere that produce or destroy GHGs, such as tropospheric ozone.

Most GHGs are released during the burning of fossil fuels. Oil, coal, and natural gas supply the energy needed to run automobiles, heat houses, and power factories. In addition to energy, however, these fuels also produce various by-products. Carbon and hydrogen in the burning fuel combine with oxygen (O2) in the atmosphere to yield heat (which can be converted into other forms of useful energy) as well as water vapor and carbon dioxide. If the fuel burned completely, the only by-product containing carbon would be carbon dioxide. However, since combustion is often incomplete, other carbon-containing gases are also produced, including carbon monoxide, methane, and other hydrocarbons. In addition, nitrous oxide and other nitrogen oxides are produced as by-products when fuel combustion causes nitrogen from the fuel or the air to combine with oxygen from the air. Increases in tropospheric ozone are indirectly caused by fuel combustion as a result of reactions between pollutants caused by combustion and other gases in the atmosphere.

Extracting, processing, transporting, and distributing fossil fuels can also release greenhouse gases. These releases can be deliberate, as when natural gas is flared or vented from oil wells, emitting mostly methane and carbon dioxide, respectively. Releases can also result from accidents, poor maintenance, or small leaks in well heads and pipe fittings. Methane, which appears naturally in coal seams as pockets of gas or "dissolved" in the coal itself, is released when coal is mined or pulverized. Methane, hydrocarbons, and nitrogen oxides are emitted when oil and natural gas are refined into end products and when coal is processed (which involves crushing and washing) to remove ash, sulfur, and other impurities. Methane and smaller quantities of carbon dioxide and hydrocarbons are released from leaks in natural gas pipelines. Hydrocarbons are also released during the transport and distribution of liquid fuels in the form of oil spills from tanker ships, small losses during the routine fueling of motor vehicles, and so on.

Some fuels produce more carbon dioxide per unit of energy than do others. The amount of carbon dioxide emitted per unit of energy depends on the fuel's carbon and energy content. The figures below give representative values for coal, refined oil products, natural gas, and wood. Figure A shows for each fuel the percentage by weight that is elemental carbon. Figure B shows how many gigajoules (GJ) of energy are released when a tonne of fuel is burned. Figure C indicates how many kilograms of carbon are created (in the form of carbon dioxide) when each fuel is burned to yield a gigajoule of energy. According to Figure C, coal emits around 1.7 times as much carbon per unit of energy when burned as does natural gas and 1.25 times as much as oil. Although it produces a large amount of carbon dioxide, burning wood (and other biomass) contributes less to climate change than does burning fossil fuel.

[Figures A-C not reproduced: carbon content by weight, energy released per tonne of fuel, and kilograms of carbon emitted per gigajoule for coal, refined oil products, natural gas, and wood.]
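To illustrate how such emission coefficients are used, the sketch below multiplies energy use by an approximate carbon coefficient and converts carbon mass to CO2 mass. The coefficient values are assumed round numbers chosen to match the ratios quoted above; they are not taken from the fact sheet's own figures:

```python
# Illustrative carbon emission coefficients, kg of carbon per GJ of fuel energy.
# Assumed values only, roughly consistent with the ~1.7x coal/gas and
# ~1.25x coal/oil ratios mentioned in the text.
CARBON_PER_GJ = {"coal": 25.0, "oil": 20.0, "natural gas": 15.0}

CO2_PER_CARBON = 44.0 / 12.0  # each kg of carbon becomes 44/12 kg of CO2

def co2_from_energy(fuel, gigajoules):
    """Rough CO2 emissions, in kg, from burning `gigajoules` of a fuel."""
    return CARBON_PER_GJ[fuel] * gigajoules * CO2_PER_CARBON

for fuel in CARBON_PER_GJ:
    print(f"{fuel:12s}: {co2_from_energy(fuel, 100):7,.0f} kg CO2 per 100 GJ")
```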
In Figure C, wood appears to have the highest emission coefficient. However, while the carbon contained in fossil fuels has been stored in the earth for hundreds of millions of years and is now being rapidly released over mere decades, this is not the case with plants. When plants are burned as fuel, their carbon is recycled back into the atmosphere at roughly the same rate at which it was removed, and thus makes no net contribution to the pool of carbon dioxide in the air. Of course, when biomass is removed but is not allowed to grow back - as in the case of massive deforestation - the use of biomass fuels can yield net carbon dioxide emissions.

It is difficult to make precise calculations of the energy sector's greenhouse gas emissions. Estimates of greenhouse gas emissions depend on the accuracy of the available energy statistics and on estimates of "emission factors", which attempt to describe how much of a gas is emitted per unit of fuel burned. Emission factors for carbon dioxide are well known, and the level of uncertainty in national CO2 emissions estimates is thus fairly low, probably around 10 percent. For the other gases, however, the emission factors are not so well understood, and estimates of national emissions may deviate from reality by a factor of two or more. Estimates of emissions from extracting, processing, transport, and so on are similarly uncertain.

See also Fact Sheet 240: "Reducing greenhouse gas emissions from the energy sector".

For further reading:
Grubb, M., 1989. "On Coefficients for Determining Greenhouse Gas Emissions from Fossil Fuel Production and Consumption". P. 537 in Energy Technologies for Reducing Emissions of Greenhouse Gases. Proceedings of an Experts' Seminar, Volume 1, OECD, Paris, 1989.
ORNL, 1989. Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing. Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data. G. Marland et al., Oak Ridge National Laboratory, May 1989. ORNL/CDIAC-25. This is a useful source for data.
Most of the combustible fuels in common use contain carbon. Coal, oil, natural gas, and biomassfuels such as wood are all ultimately derived from the biological carbon cycle (see diagram below). The exceptions are hydrogen gas (H2), which is currently in limited use as a fuel, and exotic fuels (such as hydrazine, which contains only nitrogen and hydrogen) used for aerospace and other special purposes. Burning these carbon-based fuels to release useful energy also yields carbon dioxide (the most important greenhouse gas) as a by-product. The carbon contained in the fuel combines with oxygen (O2) in the air to yield heat, water vapor (H2O), and CO2. This reaction is described in chemical terms as:CH2 + 3O2 -> heat + 2H2O + CO2, where "CH2" represents about one carbon unit in the fossil fuel. Other by-products, such as methane (CH4), can also result when fuels are not completely burned.Carbon cycles back and forth between the atmosphere and the earth (the oceans also play a critical role in the carbon cycle). Plants absorb CO2 from the air and from water and use it to create plant cells, or biomass. This reaction is powered by sunlight and is often characterized in a simplified manner as:CO2 + (solar energy) + H2O -> O2 + CH2O,where "CH2O" roughly represents one new carbon unit in the biomass. Plants then release carbon back into the atmosphere when they are burned in fires or as fuel or when they die and decompose naturally. The carbon absorbed by plants is also returned to the air via animals, which exhale carbon dioxide whenthey breath and release it when they decompose. This biological carbon cycle has a natural balance, so that over time there is no net contribution to the "pool" of CO2 present in the atmosphere. One way of visualizing this process is to consider a hectare of sugar cane plants that is harvested to make ethanol fuel. The production and combustion of the ethanol temporarily transfers CO2 from the terrestrial carbon pool (carbon present in various forms on and under the earth’s surface) to the atmospheric pool. A year or two later, as the hectare of cane grows back to maturity, the CO2 emitted earlier is recaptured in plant biomass. Mankind’s reliance on fossil fuels has upset the natural balance of the carbon cycle. Biomass sometimes becomes buried in ocean sediment, swamps, or bogs and thus escapes the usual process of decomposition. Buried for hundreds of millions of years - typically at high temperatures and intense pressures - this dead organic matter sometimes turns into coal, oil, or natural gas. The store of carbon is gradually liberated by natural processes such as rock weathering, which keeps the carbon cycle in balance. By extracting and burning these stores of fossil fuel at a rapid pace, however, humans have accelerated the release of the buried carbon. We are returning hundreds of millions of years worth of accumulated CO2 to the atmosphere within the space of a half-dozen generations. The difference between the rate at which carbon is stored in new fossil reserves and the rate at which it is released from old reserves has created an imbalance in the carbon cycle and caused carbon in the form of CO2 to accumulate in the atmosphere. With careful management, biomass fuels can be used without contributing to net CO2 emissions. This can only occur when the rate at which trees and other biomass are harvested for fuel is balanced by the rate at which new biomass is created. 
This is one reason why planting forests is often advocated as an important policy for addressing the problem of mankind’s emissions of greenhouse gases. In cases where biomass is removed but does not (or is not allowed to) grow back, the use of biomass fuels is likely to yield net CO2 emissions just as the use of fossil fuels does. This occurs in instances where fuel-wood is consumed faster than forests can grow back, or where carbon in the soil is depleted by sub-optimal forestry or agricultural practices.
For further reading:
Ehrlich, P.R., A.H. Ehrlich, and J.P. Holdren, 1977, ECOSCIENCE. W.H. Freeman and Co., San Francisco.
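The combustion reaction written earlier in this section implies a fixed mass relationship between the carbon in a fuel and the CO2 it produces: each carbon atom (molar mass 12) ends up in one CO2 molecule (molar mass 44). A small worked sketch of that arithmetic follows; the fuel composition used is an assumed value for illustration.

```python
# Worked sketch: mass of CO2 produced when the carbon in a fuel is fully oxidized.
# Molar masses: C = 12, O = 16, so CO2 = 44; the mass ratio CO2/C is 44/12.

CO2_PER_KG_CARBON = 44.0 / 12.0   # about 3.67 kg of CO2 per kg of carbon burned

def co2_from_fuel(fuel_mass_kg, carbon_fraction):
    """CO2 (kg) from complete combustion of a fuel with the given carbon fraction."""
    return fuel_mass_kg * carbon_fraction * CO2_PER_KG_CARBON

# Example: one tonne of a fuel assumed to be 85% carbon by weight.
print(co2_from_fuel(1000.0, 0.85))   # roughly 3117 kg of CO2
```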
Carbon dioxide and methane - the two most important greenhouse gases - are emitted duringthe extraction and distribution of fossil fuels. Fossil fuels surrender most of their carbon when burned,but GHGs are also emitted when coal is dug out of mines and when oil is pumped up from wells. Additional quantities escape into the atmosphere when fuel is transported, as in gas pipelines. Together, these activities account for about one percent of total annual man-made carbon dioxide emissions (CO2) and about one-quarter of methane emissions (CH4).CO2 is released into the atmosphere when natural gas is "flared" from petroleum reservoirs. Natural gas and oil often occur together in deposits. Oil drillers sometimes simply flare, or burn off, the gas or release it directly into the atmosphere, particularly if the well is too far from gas pipelines or potential gas users. Global emissions of CO2 from gas flaring reached a peak during the mid-1970s and have declined since. Gas that previously was flared is now increasingly captured for use as fuel due to higher prices and demand for gas, as well as improvements in production equipment. Current (1989) global emissions of CO2 from this source are estimated at 202 million tonnes, about 0.8 percent of total man-made CO2 emissions. Most emissions from gas flaring take place in the oil-producing countries of Africa and Asia, as well as in the former USSR. Methane is released when natural gas escapes from oil and gas wells and pipe fittings. Natural gas is typically 85 to 95 percent methane. Transporting this gas from underground reservoirs to end-users via pipes and containers leads to routine and unavoidable leaks. Accidents and poor maintenance andequipment operation cause additional leaks. Newer, well-sealed pipeline systems can have leakage rates of less than 0.1 percent, while very old and leaky systems may lose as much as 5% of the gas passing through. Few measurements have been made, but present estimates are that leaks from equipment at oil and gas wells total about 10 million tonnes of methane a year. Annual emissions from pipelines are thought to be about 10-20 million tonnes, representing some 2-5% of total man-made methane emissions.Methane is also released when coal is mined and processed. This accounts for most of the methane emitted during fossil fuel extraction. Coal seams contain pockets of methane gas, and methane molecules also become attached through pressure and chemical attraction to the microscopic internal surfaces of the coal itself. The methane is released into the atmosphere when coal miners break open gas pockets in the coal and in coal-bearing rock. (Coal miners once used canaries as indicators of the presence of the colorless, odorless gas; if the birds died, methane concentrations in the mine were at dangerous levels.) Crushing and pulverizing the coal also breaks open tiny methane gas pockets and liberates the methane adsorbed in the coal. It can take days or even months for this absorbed methane to escape from the mined coal. The amount of methane released per unit of coal depends on the type of coal and how it is mined. Some coal seams contain more methane per unit of coal than do others. In general, lower quality coals, such as "brown" coal or lignite, have lower methane contents than higher quality coals such as bituminous and anthracite coal. In addition, coal that is surface-mined releases on average just 10% as much methane per unit mined as does coal removed from underground mines. 
Not only is coal buried under high pressure deep in the earth able to hold more methane, but underground mining techniques allow additional methane to escape from both the coal that is not removed and from the coal-bearing rock. The table below shows methane emissions from underground and surface mining for the ten countries with the highest emissions. These ten countries produce over 90 percent of both the world’s coal and coal-related methane emissions. Three countries - China, the (former) Soviet Union, and the United States - together produce two-thirds of the world’s methane emissions from coal. For further reading:Marland, G., T.A. Boden, R.C. Griffin, S.F. Huang, P. Kanciruk, and T.R. Nelson, 1989. Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing, Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, May 1989. Report # ORNL/CDIAC-25. Estimates of CO2 from gas flaring were taken from this source. United States Environmental Protection Agency (US/EPA), 1990. Methane Emissions from Coal Mining: Issues and Opportunities for Reduction, prepared by ICF Resources, Inc., for the Office of Air and Radiation of the US/EPA, Washington D.C.. US/EPA Report # EPA/400/9-90/008. Data from this source were used for the table in the text.Intergovernmental Panel on Climate Change (IPCC) , 1990. Methane Emissions and Opportunities for Control. Published by the US Environmental Protection Agency, US/EPA Report # 400/9-90/007
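The dependencies described above (emissions that scale with the amount of coal mined, the coal's methane content, and whether it comes from surface or underground mines, with surface-mined coal releasing roughly 10 percent as much per unit) can be combined in a simple estimator. The emission factor below is an invented placeholder, not a value taken from the table.

```python
# Rough estimator for coal-mining methane. The underground emission factor is a
# placeholder, expressed as tonnes of CH4 per thousand tonnes of coal mined.

UNDERGROUND_FACTOR = 10.0                    # assumed t CH4 per kt of underground-mined coal
SURFACE_FACTOR = 0.1 * UNDERGROUND_FACTOR    # surface mines: about 10% per unit (per the text)

def mine_methane(underground_kt, surface_kt):
    """Total CH4 (tonnes) from a given split of underground and surface production."""
    return underground_kt * UNDERGROUND_FACTOR + surface_kt * SURFACE_FACTOR

# Example: a country mining 300,000 kt underground and 500,000 kt at the surface.
print(mine_methane(underground_kt=300_000, surface_kt=500_000))
```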
4. Global energy use during the Industrial Age
Almost all of mankind’s fossil-fuel emissions of carbon dioxide have occurred over the last century. It was not until the 1800s that coal, oil, and natural gas were unearthed and burned in large quantities in the newly-invented factories and machines of the Industrial Revolution. Industrialization brought about profound changes in human well-being, particularly in Europe, North America, and Japan. It also created or worsened many environmental problems, including climate change. Fossil fuel use currently accounts for about three-quarters of mankind’s emissions of so-called greenhouse gases.Coal dominated the energy scene in Europe and North America during the 19th and early 20th centuries. Coal was found in large deposits near the early industrial centres of Europe and North America. Figure A below shows the trend of global fossil-fuel carbon dioxide (CO2) emissions over the last 130 years (note the "valley" around 1935 when the Great Depression lowered energy use, and the plateau around 1980 caused by higher international oil prices). In industrialized countries the fuel mix has now shifted towards oil, gas, and other energy sources. Although large petroleum deposits were located early in the 20th century, oil use did not expand greatly until the post-World War II economic take-off. Natural gas, in limited use since the 1800s, started to supply an increasing share of the world’s energy by the 1970s (see Figure B). Among non-fossil energy sources, hydroelectric power has been exploited for about 100 years, and nuclear power was introduced in the 1950s; together they now supply about 15 percent of the global demand for internationally traded energy. Solar and wind power are used in both traditional applications (such as wind-assisted pumping) and high-tech ones (solar photovoltaics), but they satisfy only a small fraction of overall fuel needs. The fuel mix in developing countries includes a higher percentage of biomass fuels and, in some cases, coal. Biomass fuels continue to be widely used in many countries, particularly in homes. As India, China, and other developing countries have industrialized over the past decade, coal’s share of global CO2 emissions has increased somewhat, reversing the pattern of previous years. Countries such as China and Mexico have benefited from large domestic supplies of coal or oil, but most other developing countries have had to turn to imported fuels, typically oil, to power their industries.Although CO2 emissions have generally followed an upward trend, the rate of increase has fluctuated. Changes in overall carbon dioxide emissions reflect population and economic growth rates, per-capita energy use, and changes in fuel quality and fuel mix. During the last four decades of the 1800s, fuel consumption rose six times faster than population growth as fossil fuels were substituted for traditional fuels (see Figure C). From 1900 to 1930 total fuel use expanded more slowly, but - driven by increased fuel use per person - it still grew faster than the rate of population growth. CO2 emissions rose only 1.5 times as fast as population between 1930 and 1950 due to the impact of the Great Depression and World War II on industrial production. The post-war period of 1950 to 1970 saw a rapid expansion of both population and total fuel emissions, with emissions growing more than twice as fast as population. Here again, increases in per-capita energy consumption made the difference. 
Since 1970, higher fuel prices, new technologies, and a shift to natural gas (which has a lower carbon content than oil and coal) have reduced the growth in emissions relative to population. During the 1980s, in fact, growth in population exceeded growth in emissions, meaning that average emissions per capita actually declined. Regional patterns of per-capita energy use will continue to change. Over the last 40 years, the strongest absolute growth in per-capita carbon dioxide emissions has been in the industrialized countries, while the developing countries have provided (and continue to provide) most of the world’s population increase. Large increases in per-capita CO2 emissions occurred between 1950 and 1970 in Eastern Europe and the (former) USSR, North America, Japan, Australia, and Western Europe. During the 1980s, however, per-capita emissions in these regions have grown relatively little or even declined. The strongest growth in per-capita emissions since 1980 has been in Centrally-Planned Asia (principally China), south and east Asia, and the Middle East. See also Fact Sheet 240: "Reducing greenhouse gas emissions from the energy sector"For further reading:Ehrlich, P.R, A.H. Ehrlich, and J.P. Holdren, 1977, ECOSCIENCE. W.H. Freeman and Co., SanFrancisco.Marland, G., T.A. Boden, R.C. Griffin, S.F. Huang, P. Kanciruk, and T.R. Nelson, 1989. Estimates of CO2Emissions from Fossil Fuel Burning and Cement Manufacturing, Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, May 1989. Report # ORNL/CDIAC-25.Data from this source were used for Figures A and C.Ogawa, Yoshiki, "Economic Activity ad the Greenhouse Effect", in "Energy Journal", Vol. 1, no. 1. (Jan.1991), pp. 23-26.United States Environmental Protection Agency (US/EPA), 1990. Policy Options For Stabilizing Global Climate, edited by D.A. Lashof and D. Tirpak. Report # 21P-2003.1, December, 1991. US/EPA Office of Planning and Evaluation, Washington D.C.. Data from this source were used for Figures A and B.
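The comparisons in the preceding paragraphs, in which emissions grow so many times "as fast as" population, amount to comparing average compound growth rates over a period. A minimal sketch follows; the start and end values are invented for illustration, chosen to resemble the 1950-1970 pattern described above.

```python
# Compare average annual growth rates of emissions and population over a period.
# Start and end values are illustrative, not the source's data.

def annual_growth_rate(start, end, years):
    """Average compound growth rate per year."""
    return (end / start) ** (1.0 / years) - 1.0

emissions_rate  = annual_growth_rate(start=1600.0, end=4000.0, years=20)   # e.g. Mt of carbon
population_rate = annual_growth_rate(start=2.5e9,  end=3.7e9,  years=20)

print(f"emissions grew {emissions_rate:.1%} per year")
print(f"population grew {population_rate:.1%} per year")
print(f"ratio: {emissions_rate / population_rate:.1f}x as fast")
```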
Cement manufacturing is the third largest cause of man-made carbon dioxide emissions. While fossil fuel combustion and deforestation each produce significantly more carbon dioxide (CO2), cement-making is responsible for approximately 2.5% of total worldwide emissions from industrial sources (energy plus manufacturing sectors). Cement is a major industrial commodity. Manufactured commercially in at least 120 countries, it is mixed with sand and gravel to make concrete. Concrete is used in the construction of buildings, roads, and other structures, as well as in other products and applications. Its use as a residential building material is particularly important in countries where wood is not traditionally used for building or is in short supply.Annual CO2 emissions from cement production in nine major regions of the world are shown in Figure A below.Large quantities of CO2 are emitted during the production of lime, the key ingredient in cement. Lime, or calcium oxide (CaO), is created by heating calcium carbonate (CaCO3) in large furnaces called kilns. Calcium carbonate is derived from limestone, chalk, and other calcium-rich materials. The process of heating calcium carbonate to yield lime is called calcination or calcining and is written chemically as: CaCO3 + Heat -> CaO + CO2Lime combines with other minerals in the hot kiln to form cement’s "active ingredients". Like the CO2 emitted during the combustion of coal, oil, and gas, the carbon dioxide released during cement production is of fossil origin. The limestone and other calcium-carbonate-containing minerals used in cement production were created ages ago primarily by the burial in ocean sediments of biomass (such as sea shells, which have a high calcium carbonate content). Liberation of this store of carbon is normally very slow, but it has been accelerated many times over by the use of carbonate minerals in cement manufacturing.The lime content of cement does not vary much. Most of the structural cement currently produced is of the "Portland" cement type, which contains 60 to 67 percent lime by weight. There are specialty cements that contain less lime, but they are typically used in small quantities. While research is underway into suitable cement mixtures that have less lime than does Portland cement, options for significantly reducing CO2 emissions from cement are currently limited. Carbon dioxide emissions from cement production are estimated at 560 million tonnes per year.This estimate is based on the amount of cement that is produced, multiplied by an average emission factor. By assuming that the average lime content of cement is 63.5%, researchers have calculated an emission factor of 0.498 tonnes of CO2 to one tonne of cement.1CO2 emissions from cement production have increased about eight-fold in the last 40 years.Figure B below shows the estimated global emissions from this source since the 1950s. Note that these figures do not include the CO2 emissions from fuels used in the manufacturing process. Cement production and related emissions of CO2 have risen at roughly three times the rate of population growth over the entire period, and at twice the rate of population growth since 1970. Cement-related CO2 emissions by region are shown in the right-hand figure. Emissions from the industrialized world and China dominate, but emissions from all regions are significant, reflecting the global nature of cement production.For further reading:Marland, G., T.A. Boden, R.C. Griffin, S.F. Huang, P. Kanciruk, and T.R. Nelson, 1989. 
Estimates of CO2 Emissions from Fossil Fuel Burning and Cement Manufacturing, Based on the United Nations Energy Statistics and the U.S. Bureau of Mines Cement Manufacturing Data, Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee, May 1989. Report # ORNL/CDIAC-25. Data from this source were used in preparing Figures A and B.
Tresouthick, S.W., and A. Mishulovich, 1990. "Energy and Environment Considerations for the Cement Industry", pp. B-110 to B-123 in Energy and Environment in the 21st Century, proceedings of a conference held March 26-28, 1990 at Massachusetts Institute of Technology, Cambridge, Massachusetts.
U.S. Department of the Interior, Bureau of Mines, 1992. Cement: Annual Report 1990, authored by Wilton Johnson. United States Department of the Interior, Washington D.C. The cement production figures in Figure A were derived from this source.
Notes:
1 Marland, et al.
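The emission factor quoted above (0.498 tonnes of CO2 per tonne of cement, assuming 63.5% lime content) follows directly from the calcination reaction CaCO3 -> CaO + CO2 and the molar masses involved. A short check of that arithmetic; the annual production figure at the end is an illustrative assumption.

```python
# Reproduce the cement emission factor from the calcination stoichiometry.
# CaCO3 -> CaO + CO2, molar masses: CaO = 56, CO2 = 44 (g/mol).

LIME_FRACTION = 0.635             # average lime (CaO) content of cement, per the text
CO2_PER_TONNE_CAO = 44.0 / 56.0   # each tonne of CaO implies ~0.786 t of CO2 released

emission_factor = LIME_FRACTION * CO2_PER_TONNE_CAO
print(f"{emission_factor:.3f} t CO2 per t cement")   # ~0.499, matching the quoted 0.498

# Applying it to an assumed annual production of ~1.1 billion tonnes of cement:
print(f"{emission_factor * 1.1e9 / 1e6:.0f} million t CO2 per year")
```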
Chlorofluorocarbons (CFCs) and other halocarbons are extremely potent greenhouse gases . . .They are released in relatively small quantities, but one kilogram of the most commonly used CFCs may have a direct effect on climate thousands of times larger than that of one kilogram of carbon dioxide. In addition, over the last two decades the percentage increase in CFCs in the atmosphere has been higher than that of other greenhouse gases (GHG); by 1990 concentrations of the different varieties of CFC were increasing by 4-12 percent per year. but because CFCs also destroy ozone -- itself a greenhouse gas -- their net effect on the climate is unclear. The strength of this "indirect" effect of ozone depletion depends on variables such as the temperature of the upper atmosphere and cannot yet be measured with any confidence. According to new research, however, it is possible that the indirect effect of CFCs cancels out some or all of the direct effect of their being powerful GHGs.CFCs are a family of man-made gases used for various industrial purposes. First developed in the 1920s in the United States, CFCs have been used in large quantities only since about 1950. The industrialized countries still account for well over 80 percent of CFC use, although newly-industrializing and developing countries are rapidly increasing their consumption levels. CFC-11 is used principally as a propellant in aerosol cans, although this use has been phased out in many countries, and in the manufacture of plastic foams for cushions and other products. CFC-12 is also used for foam manufacturing as well as in the cooling coils of refrigerators and air conditioners. HCFC-22 was recently introduced as a replacement for CFC-12 because it has a shorter lifetime in the atmosphere and is thus a much less powerful ozone-depleting agent. Halons (or bromofluorocarbons) are used as fire extinguishing materials.CFC-113, methyl chloroform, and carbon tetrachloride are used as solvents for cleaning (carbon tetrachloride is also a feed stock for the production of CFC-11 and CFC-12). There are other types of halocarbons, but they are used in small quantities.CFCs are generally colorless, odorless, and non-toxic. They also do not react chemically with other materials, and as a result they remain in the atmosphere for a long time -- often 50 to 100 years -- before they are destroyed by reactions catalyzed by sunlight. CFCs are composed of carbon, chlorine, and fluorine. Together with other manufactured gases that contain either fluorine or chlorine, and with the bromine-containing Halons, CFCs are referred to collectively as halogenated compounds, or halocarbons.There is often a significant lag time between the production of CFCs and their escape into the atmosphere. Some CFCs, such as those used in spray cans or as solvents for washing electronic parts,are emitted within just a few months or years of being produced. Others, such as those contained in durable equipment such as air conditioners, refrigerators, and fire extinguishers, may not be released for decades. Consequently, the annual use figures in the table do not, for many compounds, reflect annual emissions. So even if the manufacture of CFCs were to stop today, it would take many years for emissions to fall to zero, unless stringent measures were adopted for the recycling or capture of CFCs in old equipment.Although they are important greenhouse gases, CFCs are better known for their role in damaging the earth's ozone layer. 
CFCs first came to public attention in the mid-1980s after an "ozone hole" was discovered over the Antarctic. Scientists now know that a complex series of chemical reactions involving CFCs occurs during the Arctic and Antarctic springtimes and leads to the depletion of ozone (O3). Stratospheric ozone forms a shield that prevents most of the sun's ultraviolet (UV) light from reaching the earth's surface and causing skin cancer and other cell damage. In response to the weakening of this shield, most of the world's CFC users adopted the "Montreal Protocol" in 1987. This treaty commits signatory nations to phase out their use of CFCs and some other halocarbons by the year 2000. In November 1992, growing fears of ozone depletion led to the Copenhagen agreement, which commits governments to a total phase-out of the most destructive CFCs by the year 1996. This will help to protect the ozone layer and reduce the role of CFCs in climate change -- although the benefits of these agreements will not be felt for several years due to the long life-span of CFCs. Alternatives are being developed to replace CFCs. Some of these substitutes are halocarbons, such as the compound HCFC-22, which can replace CFC-12 in refrigeration and air conditioning systems. These substitute halocarbons are also greenhouse gases, but because they are shorter-lived than the CFCs used now they will have a more limited long-term effect on the climate. Other substitutes that are less harmful than HCFC-22 have been developed and tested and are now being rapidly introduced for various applications. Additional solutions involve changes in industrial processes to avoid the need for halocarbons entirely. For example, water-based cleaners are increasingly being substituted for CFCs in the electronics industry, and non-pressurized, or "pump", spray bottles are being sold instead of CFC-driven spray cans.
For further reading:
IPCC, "Scientific Assessment of Climate Change", Cambridge University Press, 1990.
IPCC, "The Supplementary Report to the IPCC Scientific Assessment", Cambridge University Press, 1992.
US Environmental Protection Agency, 1990, "Policy Options for Stabilizing Global Climate", eds. D.A. Lashof and D. Tirpak. Report no. 21P-2003.1, December 1991. EPA Office of Planning and Evaluation, Washington DC.
WMO/UNEP/NASA, "Scientific Assessment of Ozone Depletion", 1991.
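To make the statement that one kilogram of the most commonly used CFCs may have a direct effect thousands of times larger than one kilogram of CO2 concrete, emissions of different gases are often expressed on a common CO2-equivalent scale by multiplying each mass by a warming-potential weight. The weights and emission amounts below are round, assumed numbers chosen only to illustrate the calculation, not official global-warming-potential values.

```python
# Illustrative CO2-equivalent calculation. The per-kilogram weighting factors are
# assumed round numbers of the order implied by "thousands of times larger".

weights = {"CO2": 1, "CFC-11": 4000, "CFC-12": 8000}                # assumed weights per kg
emissions_kg = {"CO2": 1_000_000, "CFC-11": 100, "CFC-12": 100}     # invented example

co2_equivalent = sum(emissions_kg[g] * weights[g] for g in emissions_kg)
print(f"{co2_equivalent / 1000:.0f} tonnes CO2-equivalent")
```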
About one-quarter of the methane emissions caused by human activities comes from domesticated animals. The second-most important greenhouse gas after carbon dioxide, methane (CH4) is released by cattle, dairy cows, buffalo, goats, sheep, camels, pigs, and horses. It is also emitted by the wastes of these and other animals. Total annual methane emissions from domesticated animals are thought to be about 100 million tonnes. Animals produce methane through "enteric fermentation". In this process plant matter is converted by bacteria and other microbes in the animal's digestive tract into nutrients such as sugars and organic acids. These nutrients are used by the animal for energy and growth. A number of by-products, including methane, are also produced, but they are not used by the animal; some are released as gas into the atmosphere. (Although carbon dioxide (CO2) is produced in similar quantities as methane, it is derived from sustainably produced plant matter and thus makes no net addition to the atmosphere.) The carbon in the plant matter is converted into methane through this general, overall chemical reaction:
Organic Plant Matter + H2O --(microbial metabolism)--> CO2 + CH4 + (nutrients and other products).
The amount of methane that an individual animal produces depends on many factors. The key variables are the species, the animal's age and weight, its health and living conditions, and the type of feed it eats. Ruminant animals - such as cows, sheep, buffalo, and goats - have the highest methane emissions per unit of energy in their feed, but emissions from some non-ruminant animals, such as horses and pigs, are also significant. National differences in animal-farming are particularly important. Dairy cows in developing nations, for example, produce about 35 kg of methane per head per year, while those in industrialized nations, where cows are typically fed a richer diet and are physically confined, produce about 2.5 times as much per head. There is a strong link between human diet and methane emissions from livestock. Nations where beef forms a large part of the diet, for example, tend to have large herds of cattle. As beef consumption rises or falls, the number of livestock will, in general, also rise or fall, as will the related methane emissions. Similarly, the consumption of dairy goods, pork, mutton, and other meats, as well as non-food items such as wool and draft labor (by oxen, camels, and horses), also influences the size of herds and methane emissions. The figures below present recent estimates of methane emissions by type of animal and by region. Due to their large numbers, cattle and dairy cows produce the bulk of total emissions. In addition, certain regions - both developing and industrialized - produce significant percentages of the global total. Emissions in South and East Asia are high principally because of large human populations; emissions per capita are slightly lower than the world average. Latin America has the highest regional emissions per capita, due primarily to large cattle populations in the beef-exporting countries (notably Brazil and Argentina). Centrally-planned Asia (mainly China) has by far the lowest per-capita emissions due to a diet low in meat and dairy products. See also Fact Sheet 271: "Reducing methane emissions from livestock farming".
For further reading:
Intergovernmental Panel on Climate Change (IPCC), 1990. Greenhouse Gas Emissions from Agricultural Systems.
Proceedings of a workshop on greenhouse gas emissions from agricultural systems held in Washington D.C., December 12-14, 1989. Published as US/EPA report 20P-2005, September, 1990 (2 volumes).
United States Environmental Protection Agency (US/EPA), 1990. Policy Options For Stabilizing Global Climate, edited by D.A. Lashof and D. Tirpak. Report # 21P-2003.1, December, 1991. US/EPA Office of Planning and Evaluation, Washington D.C. Data from computer files used for this report were used to create the tables.
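The per-head figures quoted above (roughly 35 kg of methane per dairy cow per year in developing nations, and about 2.5 times that in industrialized ones) lend themselves to a simple herd-based estimate. The herd sizes below are invented for illustration.

```python
# Sketch: enteric methane from a herd = number of animals * per-head emission factor.
# Per-head factors follow the text; herd sizes are invented.

KG_PER_DAIRY_COW_DEVELOPING = 35.0
KG_PER_DAIRY_COW_INDUSTRIAL = 2.5 * KG_PER_DAIRY_COW_DEVELOPING   # about 88 kg/head/year

def herd_methane_tonnes(head_count, kg_per_head):
    """Annual CH4 (tonnes) from a herd with a given per-head emission factor."""
    return head_count * kg_per_head / 1000.0

print(herd_methane_tonnes(10_000_000, KG_PER_DAIRY_COW_DEVELOPING))  # ~350,000 t CH4
print(herd_methane_tonnes(10_000_000, KG_PER_DAIRY_COW_INDUSTRIAL))  # ~875,000 t CH4
```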
About one-quarter of the total methane emissions caused by human activities comes from domesticated animals. The animals that emit this methane (CH4) include cattle, dairy cows, buffalo, goats, sheep, camels, pigs, and horses. Most livestock-related methane is produced by "enteric fermentation" of food in the animals’ digestive tracts. About one-quarter to one-third of it, however, or a total of 25 million tonnes per year, is released later from decomposing manure. Decomposition occurs as organic wastes in moist, oxygen-free (anaerobic) environments are broken down by bacteria and other microbes into methane, carbon dioxide, and trace amounts of small organic molecules, nitrogen compounds, and other products. The amount of methane released from animal manure in a particular region depends on many variables. The key variables are the number and types of animals present, the amount of manure produced by each animal, the amount of moisture and fiber in the animals’ wastes, the waste management system used, and the local climate. Eastern Europe, Western Europe, and North America have the largest emissions, primarily due to their use of liquid waste storage systems and anaerobic lagoons for treating cattle and swine wastes. The figure below shows regional methane emissions from livestock wastes as a percentage of global emissions and on a per-person basis. Emissions per person in the industrialized regions are two to eight times those in developing regions. Larger animals, not surprisingly, produce more manure per individual. Most domestic animals produce between 7 and 11 kilograms of "volatile solids" (VS) per tonne of animal per day. VS is the amount of organic matter present in the manure after it has dried. Each type of animal waste has its characteristic content of degradable organic matter (material that can be readily decomposed), moisture, nitrogen, and other compounds. As a consequence, the maximum methane-producing potential of the different manures varies both across species and, in instances where feeding practices vary, within a single species.Dairy and non-dairy cattle account for the largest part of global methane emission from livestock manures. After cattle, swine wastes make the second largest contribution. Waste disposal methods help to determine how much methane is emitted. If manure is left to decompose on dry soil, as typically happens with free-roaming animals in developing countries, it will be exposed to oxygen in the atmosphere and probably decompose aerobically. Relatively little methane will be produced, perhaps just 5-10 percent of the maximum possible. (This is particularly true in dry climates, where the manure dries out before extensive methane production can take place; very low temperatures also inhibit fermentation and methane production.) However, when animal wastes are collected and dumped into artificial or natural lagoons or ponds - a common practice in developed countries - they lose contact with the air because, as wastes decompose in these small bodies of water, the oxygen is quickly depleted. As a result, most of the waste is likely to decompose anaerobically, producing a substantial amount of methane - as much as 90 percent of the theoretical maximum. Other waste disposal practices fall in between these two extremes.See also Fact Sheet 271: "Reducing methane emissions from livestock farming".For further reading:L.M. Safley, et. al, 1992. Global Methane Emissions from Livestock and Poultry. 
United States Environmental Protection Agency (US/EPA) Report # EPA/400/1-91/048, February, 1992. US/EPA, Washington, D.C.
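The variables listed in this fact sheet (manure, or volatile solids, production per animal, the manure's maximum methane-producing potential, and the fraction of that potential actually realized under a given waste-management system and climate) can be multiplied together for a rough emission estimate. The methane potential and the example herds below are assumptions for illustration; the volatile-solids rate and realized fractions follow the ranges given in the text.

```python
# Rough manure-methane estimate: animals * volatile solids produced * methane
# potential * fraction of the potential realized under the local management system.
# The methane potential (CH4 per kg of volatile solids) is an assumed value.

def manure_methane_tonnes(animals, animal_mass_t, vs_kg_per_t_per_day,
                          ch4_kg_per_kg_vs, fraction_realized, days=365):
    vs_per_year = animals * animal_mass_t * vs_kg_per_t_per_day * days   # kg of volatile solids
    return vs_per_year * ch4_kg_per_kg_vs * fraction_realized / 1000.0   # tonnes of CH4

# Free-roaming cattle, manure drying on soil: only ~5-10% of the potential is realized.
print(manure_methane_tonnes(1_000_000, 0.4, 9.0, 0.2, 0.07))
# Confined swine with an anaerobic lagoon: up to ~90% of the potential.
print(manure_methane_tonnes(1_000_000, 0.1, 9.0, 0.3, 0.90))
```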
Rice fields produce about 60 million tonnes of methane per year. This represents about 17% of total methane (CH4) emissions resulting from human activities. Virtually all of this methane comes from "wetland" rice farming. Rice can be produced either by wetland, paddy rice farming or by upland, dry rice farming. Wetland rice is grown in fields that are flooded for much of the growing season with natural flood- or tide-waters or through irrigation. Upland rice, which accounts for just 10 percent of global rice production, is not flooded, and it is not a significant source of methane.
Methane is produced when organic matter in the flooded rice paddy is decomposed by bacteria and other micro-organisms. When soil is covered by water, it becomes anaerobic, or lacking in oxygen. Under these conditions, methane-producing bacteria and other organisms decompose organic matter in or on the soil, including rice straw, the cells of dead algae and other plants that grow in the paddy, and perhaps organic fertilizers such as manure. The result of this reaction is methane, carbon dioxide (CO2 - but not in quantities significant for climate change), and other products:
Plant Organic Matter + H2O --(microbial metabolism)--> CO2 + CH4 + (other products).
Methane is transported from the paddy soil to the atmosphere in three different ways. The primary method is through the rice plant itself, with the stem and leaves of the plant acting rather like pipelines from the soil to the air. This mode of transport probably accounts for 90-95 percent of emissions from a typical field. Methane also bubbles up directly from the soil through the water or is released into the air after first becoming dissolved in the water. Calculating how much methane is released from a particular field or region is difficult. Important variables include the number of acres under cultivation, the number of days that the paddy is submerged under water each year, and the rate of methane emission per acre per day. The uncertainty is caused by this last variable, which is complex and poorly understood. The methane emission rate is determined by soil temperature, the type of rice grown, the soil type, the amount and type of fertilizer applied, the average depth of water in the paddy, and other site-specific variables. Measurements at a fairly limited number of paddy sites have yielded a wide range of methane production rates. As a result, estimates of global methane production from rice paddies are considered uncertain. One recent estimate gives a range of 20-150 million tonnes of methane per year.1
Asia produces most of the world's rice. Since rice is the staple food throughout much of Asia, nearly 90 percent of the world's paddy area is found there. China and India together have nearly half of the world's rice fields and probably contribute a similar fraction of the global methane emissions from rice production.
The options for reducing methane emissions from rice cultivation are limited. Reducing the area of rice under cultivation is unlikely to happen given the already tenuous food supply in many rice-dependent countries. Other options include replacing paddy rice with upland rice, developing strains of rice plant that need less time in flooded fields, and using different techniques for applying fertilizers. Each of these options will require much more research to become widely practical.
For further reading:
Intergovernmental Panel on Climate Change (IPCC), 1990. Greenhouse Gas Emissions from Agricultural Systems.
Proceedings of a workshop on greenhouse gas emissions from agricultural systems held in Washington D.C., December 12-14, 1989. Published as US/EPA report 20P-2005, September, 1990 (2 volumes).
The Organization for Economic Co-operation and Development (OECD), 1991. Estimation of Greenhouse Gas Emissions and Sinks. Final Report from the Experts' Meeting, 18-21 February, 1991. Prepared for the Intergovernmental Panel on Climate Change. OECD, Paris, 1991.
United States Environmental Protection Agency (US/EPA), 1990. Policy Options For Stabilizing Global Climate, Technical Appendices, edited by D.A. Lashof and D. Tirpak. Report # 21P-2003.3, December, 1991. US/EPA Office of Planning and Evaluation, Washington D.C.
Notes:
1 IPCC, 1992 Supplement.
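The calculation described in this fact sheet (area under cultivation, times days flooded, times an emission rate per unit area per day) is simple in form even though the last factor is highly uncertain. The area, flooding period, and daily rates below are placeholders, not measured values.

```python
# Paddy methane sketch: area * flooded days * daily emission rate per hectare.
# The daily rate is the poorly-known factor; all values here are placeholders.

def paddy_methane_tonnes(area_ha, flooded_days, kg_ch4_per_ha_per_day):
    return area_ha * flooded_days * kg_ch4_per_ha_per_day / 1000.0

area_ha, flooded_days = 40_000_000, 120   # invented example region
for rate in (1.0, 3.0, 6.0):              # low / central / high placeholder daily rates
    print(f"rate {rate} kg/ha/day -> "
          f"{paddy_methane_tonnes(area_ha, flooded_days, rate) / 1e6:.1f} Mt CH4")
```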
Climate change would strongly affect agriculture, but scientists still don’t know exactly how. Most agricultural impacts studies are based on the results of general circulation models (GCMs). These climate models indicate that rising levels of greenhouse gases are likely to increase the global average surface temperature by 1.5-4.5 C over the next 100 years, raise sea-levels (thus inundating farmland and making coastal groundwater saltier), amplify extreme weather events such as storms and hot spells, shift climate zones poleward, and reduce soil moisture. Impacts studies consider how these general trends would affect agricultural production in specific regions. To date, most studies have assumed that agricultural technology and management will not improve and adapt. New studies are becoming increasingly sophisticated, however, and "adjustments experiments" now incorporate assumptions about the human response to climate change.Increased concentrations of CO2 may boost crop productivity. In principle, higher levels of CO2 should stimulate photosynthesis in certain plants; a doubling of CO2 may increase photosynthesis rates by as much as 30-100%. Laboratory experiments confirm that when plants absorb more carbon they grow bigger and more quickly. This is particularly true for C3 plants (so called because the product of their first biochemical reactions during photosynthesis has three carbon atoms). Increased carbon dioxide tends to suppress photo-respiration in these plants, making them more water-efficient. C3 plants include such major mid-latitude food staples as wheat, rice, and soya bean. The response of C4 plants, on the other hand, would not be as dramatic (although at current CO2 levels these plants photosynthesize more efficiently than do C3 plants). C4 plants include such low-latitude crops as maize, sorghum, sugar-cane, and millet, plus many pasture and forage grasses.Climate and agricultural zones would tend to shift towards the poles. Because average temperatures are expected to increase more near the poles than near the equator, the shift in climate zones will be more pronounced in the higher latitudes. In the mid-latitude regions (45 to 60 latitude), the shift is expected to be about 200-300 kilometres for every degree Celsius of warming. Since today’s latitudinal climate belts are each optimal for particular crops, such shifts could have a powerful impact on agricultural and livestock production. Crops for which temperature is the limiting factor may experience longer growing seasons. For example, in the Canadian prairies the growing season might lengthen by 10 days for every 1 C increase in average annual temperature. While some species would benefit from higher temperatures, others might not. A warmer climate might, for example, interfere with germination or with other key stages in their life cycle. It might also reduce soil moisture; evaporation rates increase in mid-latitudes by about 5% for each 1 C rise in average annual temperature. Another potentially limiting factor is that soil types in a new climate zone may be unable to support intensive agriculture as practised today in the main producer countries. For example,even if sub-Arctic Canada experiences climatic conditions similar to those now existing in the country’s southern grain-producing regions, its poor soil may be unable to sustain crop growth. Mid-latitude yields may be reduced by 10-30% due to increased summer dryness. 
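Several of the per-degree rules of thumb quoted above scale linearly with the temperature increase: roughly 200-300 km of poleward shift of climate zones, about 10 extra growing days (in the Canadian prairie example), and about a 5% increase in mid-latitude evaporation for each degree Celsius of warming. A small sketch applying those scalings to an assumed warming; the 2.5 C value is an arbitrary choice for illustration.

```python
# Apply the linear rules of thumb quoted above to an assumed warming scenario.
# The warming value itself (2.5 C) is an arbitrary illustrative choice.

def mid_latitude_scalings(warming_c):
    return {
        "poleward shift of climate zones (km)": (200 * warming_c, 300 * warming_c),
        "extra growing days (Canadian prairies)": 10 * warming_c,
        "increase in evaporation (%)": 5 * warming_c,
    }

for effect, value in mid_latitude_scalings(2.5).items():
    print(effect, "->", value)
```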
Climate models suggest that today’s leading grain-producing areas - in particular the Great Plains of the US - may experience more frequent droughts and heat waves by the year 2030. Extended periods of extreme weather conditions would destroy certain crops, negating completely the potential for greater productivity through "CO2 fertilization". During the extended drought of 1988 in the US corn belt region, for example, corn yields dropped by 40% and, for the first time since 1930, US grain consumption exceeded production.The poleward edges of the mid-latitude agricultural zones - northern Canada, Scandinavia, Russia, and Japan in the northern hemisphere, and southern Chile and Argentina in the southern one - may benefit from the combined effects of higher temperatures and CO2 fertilization. But the problems of rugged terrain and poor soil suggest that this would not be enough to compensate for reduced yields in the more productive areas.The impact on yields of low-latitude crops is more difficult to predict. While scientists are relatively confident that climate change will lead to higher temperatures, they are less sure of how it will affect precipitation - the key constraint on low-latitude and tropical agriculture. Climate models do suggest, however, that the intertropical convergence zones may migrate poleward, bringing the monsoon rains with them. The greatest risks for low-latitude countries, then, are that reduced rainfall and soil moisture will damage crops in semi-arid regions, and that additional heat stress will damage crops and especially livestock in humid tropical regions. The impact on net global agricultural productivity is also difficult to assess. Higher yields in some areas may compensate for decreases in others - but again they may not, particularly if today’s major food exporters suffer serious losses. In addition, it is difficult to forecast to what extent farmers and governments will be able to adopt new techniques and management approaches to compensate for the negative impacts of climate change. It is also hard to predict how relationships between crops and pests will evolve.For further reading:Martin Parry, "Climate Change and World Agriculture", Earthscan Publications, 1990.Intergovernmental Panel on Climate Change, "The IPCC Scientific Assessment" and "The IPCC Impacts Assessment", WMO/IPCC, 1990. | http://www.usask.ca/agriculture/caedac/dbases/ghgprimer2b.html | 13 |
19 | Diffraction of Light
We classically think of light as always traveling in straight lines, but when light waves pass near a barrier they tend to bend around that barrier and become spread out. Diffraction of light occurs when a light wave passes by a corner or through an opening or slit that is physically about the same size as, or even smaller than, that light's wavelength.
A very simple demonstration of diffraction can be conducted by holding your hand in front of a light source and slowly closing two fingers while observing the light transmitted between them. As the fingers approach each other and come very close together, you begin to see a series of dark lines parallel to the fingers. The parallel lines are actually diffraction patterns. This phenomenon can also occur when light is "bent" around particles that are on the same order of magnitude as the wavelength of the light. A good example of this is the diffraction of sunlight by clouds that we often refer to as a silver lining, illustrated in Figure 1 with a beautiful sunset over the ocean.
We can often observe pastel shades of blue, pink, purple, and green in clouds that are generated when light is diffracted from water droplets in the clouds. The amount of diffraction depends on the wavelength of light, with shorter wavelengths being diffracted at a greater angle than longer ones (in effect, blue and violet light are diffracted at a higher angle than is red light). As a light wave traveling through the atmosphere encounters a droplet of water, as illustrated below, it is first refracted at the water:air interface, then it is reflected as it again encounters the interface. The beam, still traveling inside the water droplet, is once again refracted as it strikes the interface for a third time. This last interaction with the interface refracts the light back into the atmosphere, but it also diffracts a portion of the light as illustrated below. This diffraction element leads to a phenomenon known as Cellini's halo (also known as the Heiligenschein effect) where a bright ring of light surrounds the shadow of the observer's head.
The terms diffraction and scattering are often used interchangeably and are considered to be almost synonymous. Diffraction describes a specialized case of light scattering in which an object with regularly repeating features (such as a diffraction grating) produces an orderly diffraction of light in a diffraction pattern. In the real world most objects are very complex in shape and should be considered to be composed of many individual diffraction features that can collectively produce a random scattering of light.
One of the classic and most fundamental concepts involving diffraction is the single-slit optical diffraction experiment, first conducted in the early nineteenth century. When a light wave propagates through a slit (or aperture) the result depends upon the physical size of the aperture with respect to the wavelength of the incident beam. This is illustrated in Figure 3 assuming a coherent, monochromatic wave emitted from point source S, similar to light that would be produced by a laser, passes through aperture d and is diffracted, with the primary incident light beam landing at point P and the first secondary maxima occurring at point Q.
As shown in the left side of the figure, when the wavelength (λ) is much smaller than the aperture width (d), the wave simply travels onward in a straight line, just as it would if it were a particle or no aperture were present. However, when the wavelength exceeds the size of the aperture, we experience diffraction of the light according to the equation:
sin(θ) = λ/d

where θ is the angle between the incident central propagation direction and the first minimum of the diffraction pattern. The experiment produces a bright central maximum which is flanked on both sides by secondary maxima, with the intensity of each succeeding secondary maximum decreasing as the distance from the center increases. Figure 4 illustrates this point with a plot of beam intensity versus diffraction radius. Note that the minima occurring between secondary maxima are located in multiples of π.
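The intensity profile described above, a bright central maximum flanked by weaker secondary maxima with minima spaced at multiples of π, can be computed from the standard single-slit result I(θ) = I0·(sin β / β)² with β = π·d·sin θ / λ. This formula is not written out in the text, so treat the sketch below as the conventional textbook form rather than the article's own notation; the wavelength and slit width are assumed example values.

```python
# Single-slit diffraction intensity, I = I0 * (sin(beta)/beta)^2 with
# beta = pi * d * sin(theta) / lambda. Minima fall where beta is a nonzero
# multiple of pi, i.e. where sin(theta) = m * lambda / d.
import numpy as np

wavelength = 550e-9      # metres (green light), assumed
aperture   = 2e-6        # slit width in metres (a few wavelengths wide), assumed

theta = np.linspace(-0.6, 0.6, 2001)                  # viewing angles in radians
beta = np.pi * aperture * np.sin(theta) / wavelength
intensity = np.sinc(beta / np.pi) ** 2                # sin(beta)/beta squared, safe at beta = 0

first_minimum = np.arcsin(wavelength / aperture)      # sin(theta) = lambda / d
print(f"first minimum at about {np.degrees(first_minimum):.1f} degrees")
print(f"first secondary maximum ~ {intensity[theta > first_minimum].max():.3f} of the central peak")
```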
This experiment was first explained by Augustin Fresnel who, along with Thomas Young, produced important evidence confirming that light travels in waves. From the figures above, we see how a coherent, monochromatic light (in this example, laser illumination) emitted from point L is diffracted by aperture d. Fresnel assumed that the amplitude of the first order maxima at point Q (defined as εQ) would be given by the equation:
where A is the amplitude of the incident wave, r is the distance between d and Q, and f(χ) is a function of χ, an inclination factor introduced by Fresnel.
Diffraction of light plays a paramount role in limiting the resolving power of any optical instrument (for example: cameras, binoculars, telescopes, microscopes, and the eye). The resolving power is the optical instrument's ability to produce separate images of two adjacent points. This is often determined by the quality of the lenses and mirrors in the instrument as well as the properties of the surrounding medium (usually air). The wave-like nature of light forces an ultimate limit to the resolving power of all optical instruments.
Our discussions of diffraction have used a slit as the aperture through which light is diffracted. However, all optical instruments have circular apertures, for example the pupil of an eye or the circular diaphragm and lenses of a microscope. Circular apertures produce diffraction patterns similar to those described above, except the pattern naturally exhibits a circular symmetry. Mathematical analysis of the diffraction patterns produced by a circular aperture is described by the equation:
sin(θ(1)) = 1.22(λ/d)

where θ(1) is the angular position of the first order diffraction minima (the first dark ring), λ is the wavelength of the incident light, d is the diameter of the aperture, and 1.22 is a constant. Under most circumstances, the angle θ(1) is very small so the approximation that the sin and tan of the angle are almost equal yields:

θ(1) = 1.22(λ/d)

From these equations it becomes apparent that the central maximum is directly proportional to λ/d, making this maximum more spread out for longer wavelengths and for smaller apertures. The secondary minima of diffraction set a limit to the useful magnification of objective lenses in optical microscopy, due to inherent diffraction of light by these lenses. No matter how perfect the lens may be, the image of a point source of light produced by the lens is accompanied by secondary and higher order maxima. These could be eliminated only if the lens had an infinite diameter. Two objects separated by an angle less than θ(1) cannot be resolved, no matter how high the power of magnification. While these equations were derived for the image of a point source of light an infinite distance from the aperture, they are a reasonable approximation of the resolving power of a microscope when d is substituted for the diameter of the objective lens.
Thus, if two objects reside a distance D apart from each other and are at a distance L from an observer, the angle (expressed in radians) between them is:

θ = D/L

which leads us to condense the last two equations to yield:

D(0) = 1.22(λL/d)

where D(0) is the minimum separation distance between the objects that will allow them to be resolved. Using this equation, the human eye can resolve objects separated by a distance of 0.056 millimeters; however, the photoreceptors in the retina are not quite close enough together to permit this degree of resolution, and 0.1 millimeters is a more realistic number under normal circumstances.
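The 0.056 millimetre figure quoted above follows from D(0) = 1.22·λ·L/d once values for the wavelength, viewing distance, and pupil diameter are chosen; the text does not state which values it used, so the ones below (green light, a 25 cm viewing distance, a 3 mm pupil) are assumptions that happen to reproduce a similar number.

```python
# Rayleigh-criterion estimate of the smallest separation the eye can resolve:
# D0 = 1.22 * wavelength * distance / pupil_diameter.
# Wavelength, viewing distance, and pupil size are assumed values, not the article's.

def min_separation(wavelength_m, distance_m, aperture_m):
    return 1.22 * wavelength_m * distance_m / aperture_m

d0 = min_separation(wavelength_m=550e-9, distance_m=0.25, aperture_m=3e-3)
print(f"{d0 * 1000:.3f} mm")   # ~0.056 mm, close to the figure quoted in the text
```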
The resolving power of optical microscopes is determined by a number of factors including those discussed, but in the most ideal circumstances, this number is about 0.2 micrometers. This number must take into account optical alignment of the microscope, quality of the lenses, as well as the predominant wavelengths of light used to image the specimen. While it is often not necessary to calculate the exact resolving power of each objective (and would be a waste of time in most instances), it is important to understand the capabilities of the microscope lenses as they apply to the real world. | http://www.olympusmicro.com/primer/lightandcolor/diffraction.html | 13 |
10 | Birds of A Feather
Making Field Guides
Students are working in their field guides, filling in their data on birds. Questions include: bird habitat, size, color, and food. Making our own field guide is a good way to learn how to identify some of the more common birds we might see in our study. Prior to returning to the classroom, students were outside walking their bird path and collecting data. Students were given blank paper, pencils, and a clipboard to sketch the path and the physical features that surround it. Students were asked to note the ways land surrounding the path is used. Is the area built-up, natural, or somewhere in between?
Students are separated into preestablished cooperative groups. Field
guides are given to each cooperative group. Students have just returned
to the classroom from walking their bird path. Each group is using their
field guide to identify and list the major plant and bird species found
in their sampling site.
In addition to working in their field guides, students are asked to find their path on the topographic map. Questions asked are: Where is it situated in relation to your school? Your home? What types of land use/land cover are near your path? Houses? A forest? A river?
A teacher aide, Mrs. Green, is helping students with their field guides.
A Bird's Eye View
Students determine their path outside. Students from Mrs. Simmons' and Mrs. Miller's classes are using their binoculars to view birds. In order to best examine our corner of the world, both classes will establish their path in an area near our school that best represents our local environment. In order to enable different schools across the state to use our data and compare it to theirs, our path in this experiment will be 200 meters long. Questions for students: Why must everyone use the same length for their path for their data to be comparable? What would happen if one school used a 100-meter path and another a 50-meter path? How might this affect the data each school collects?
In order to set an accurate description of habitat, roles were assigned to each group member during this initial survey. For example, one student to serve as official recorder, two students to serve as botanists (responsible for collecting plant data), and two students to serve as biologists (responsible for collecting animal data). These roles will change for the actual data collection periods, where each student will be counting a specific type of bird.
Students from Mrs. Miller's class are standing by their bird project, displayed as a hallway exhibit.
Created for the Fermilab
LInC program sponsored by Fermi National
Accelerator Laboratory Education Office,
Friends of Fermilab, United States Department of Energy, Illinois State Board of Education, and
North Central Regional Technology in Education
Consortium which is operated by North
Central Regional Educational Laboratory (NCREL).
Author(s): Olivia Miller & Marva Simmons ([email protected])
School: Hatch Elementary School Oak Park Illinois
Created: October 18, 1997 - Updated: October 18, 1997 | http://ed.fnal.gov/lincon/f97/projects/omiller/student.html | 13 |
20 | A coronagraph is a telescopic attachment designed to block out the direct light from a star so that nearby objects – which otherwise would be hidden in the star's bright glare – can be resolved. Most coronagraphs are intended to view the corona of the Sun, but a new class of conceptually similar instruments (called stellar coronagraphs to distinguish them from solar coronagraphs) are being used to find extrasolar planets around nearby stars.
The coronagraph was introduced in 1930 by the French astronomer Bernard Lyot; since then, coronagraphs have been used at many solar observatories. Coronagraphs operating within Earth's atmosphere suffer from scattered light in the sky itself, due primarily to Rayleigh scattering of sunlight in the upper atmosphere. At view angles close to the Sun, the sky is much brighter than the background corona even at high altitude sites on clear, dry days. Ground based coronagraphs, such as the High Altitude Observatory's Mark IV Coronagraph on top of Mauna Loa, use polarization to distinguish sky brightness from the image of the corona: both coronal light and sky brightness are scattered sunlight and have similar spectral properties, but the coronal light is Thomson-scattered at nearly a right angle and therefore undergoes scattering polarization, while the superimposed light from the sky near the Sun is scattered at only a glancing angle and hence remains nearly unpolarized.
Coronagraph instruments are extreme examples of stray light rejection and precise photometry because the total brightness from the solar corona is less than one millionth (10−6) the brightness of the Sun. The apparent surface brightness is even fainter because, in addition to delivering less total light, the corona has a much greater apparent size than the Sun itself.
During a solar eclipse, the Moon acts as an occulting disk and any camera in the eclipse path may be operated as a coronagraph until the eclipse is over. More common is an arrangement where the sky is imaged onto an intermediate focal plane containing an opaque spot; this focal plane is reimaged onto a detector. Another arrangement is to image the sky onto a mirror with a small hole: the desired light is reflected and eventually reimaged, but the unwanted light from the star goes through the hole and does not reach the detector. Either way, the instrument design must take into account scattering and diffraction to make sure that as little unwanted light as possible reaches the final detector. Lyot's key invention was an arrangement of lenses with stops, known as Lyot stops, and baffles such that light scattered by diffraction was focused on the stops and baffles, where it could be absorbed, while light needed for a useful image missed them.
As an example, imaging instruments on the Hubble Space Telescope offer coronagraphic capability.
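Lyot's arrangement is easy to explore with a toy scalar Fourier-optics model: propagation between pupil and focal planes is represented by Fourier transforms, the occulting spot is a multiplication in the focal plane, and the Lyot stop is a multiplication in the reimaged pupil. The Python/NumPy sketch below is only an illustration; the grid size, spot radius and stop diameter are arbitrary assumed values, not the parameters of any real instrument.

```python
import numpy as np

N = 512                                   # grid size -- illustrative choice
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)
pupil = (R <= 0.5).astype(float)          # unobstructed circular entrance pupil

def to_focal(field):
    """Pupil plane -> focal plane (far field) via a centered FFT."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))

def to_pupil(field):
    """Focal plane -> next pupil plane via a centered inverse FFT."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field)))

# Focal-plane coordinates and an opaque occulting spot ~3 lambda/D in radius.
u = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))
U, V = np.meshgrid(u, u)
spot = (np.hypot(U, V) > 3.0).astype(float)

# Lyot stop: an undersized pupil that clips the diffracted ring of starlight.
lyot_stop = (R <= 0.4).astype(float)

def peak_image_intensity(use_coronagraph):
    field = to_focal(pupil)               # star's Airy pattern in the focal plane
    if use_coronagraph:
        field = field * spot              # block the bright core
    field = to_pupil(field)               # reimage the pupil
    if use_coronagraph:
        field = field * lyot_stop         # absorb the diffracted edge light
    image = to_focal(field)               # final image plane
    return np.abs(image).max() ** 2

raw = peak_image_intensity(False)
suppressed = peak_image_intensity(True)
print(f"on-axis stellar peak suppressed by a factor of ~{raw / suppressed:.0f}")
```

In a real design the spot size, the Lyot stop diameter and any apodization are optimized together; the toy model simply shows why an undersized Lyot stop removes most of the light diffracted by the occulting spot.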
Band-limited coronagraph
A band-limited coronagraph uses a special kind of mask called a band-limited mask. This mask is designed to block light and also manage diffraction effects caused by removal of the light. The band-limited coronagraph has served as the baseline design for the Terrestrial Planet Finder coronagraph. Band-limited masks will also be available on the James Webb Space Telescope.
Phase-mask coronagraph
A phase-mask coronagraph (such as the so-called four-quadrant phase-mask coronagraph) uses a transparent mask to shift the phase of the stellar light in order to create a self-destructive interference, rather than a simple opaque disc to block it.
Optical vortex coronagraph
An optical vortex coronagraph uses a phase-mask in which the phase-shift varies azimuthally around the center. Several varieties of optical vortex coronagraphs exist:
- the scalar optical vortex coronagraph based on a phase ramp directly etched in a dielectric material, like fused silica.
- the vector(ial) vortex coronagraph employs a mask that rotates the angle of polarization of photons, and ramping this angle of rotation has the same effect as ramping a phase-shift. A mask of this kind can be synthesized by various technologies, ranging from liquid crystal polymers (the same technology as in 3D television) to micro-structured surfaces (using microfabrication technologies from the microelectronics industry). Such a vector vortex coronagraph made out of liquid crystal polymers is currently in use at the 200-inch Hale telescope at the Palomar Observatory. It has recently been operated with adaptive optics to image extrasolar planets.
This works with stars other than the Sun because they are so far away that their light is, for this purpose, a spatially coherent plane wave. The coronagraph, using interference, masks out the light along the center axis of the telescope, but allows the light from off-axis objects through.
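The same toy model can illustrate the vortex idea. Replacing the opaque spot with a pure phase screen exp(i*l*theta) of even charge l ideally relocates the on-axis starlight outside the downstream pupil, where the Lyot stop removes it without absorbing any light from off-axis sources. The snippet below continues the earlier Lyot sketch (it reuses its arrays and helper functions); the charge and the sampling are illustrative, and finite sampling limits the contrast this toy model can show.

```python
# Continues the previous snippet: reuses pupil, lyot_stop, U, V,
# to_focal, to_pupil and the uncoronagraphed peak value `raw`.
theta = np.arctan2(V, U)               # azimuthal angle in the focal plane
vortex = np.exp(1j * 2 * theta)        # charge-2 vortex phase mask

field = to_focal(pupil) * vortex       # phase-only mask: no light is absorbed
field = to_pupil(field)                # on-axis starlight now lands mostly
                                       # outside the geometric pupil ...
field = field * lyot_stop              # ... so the Lyot stop rejects it
residual = np.abs(to_focal(field)).max() ** 2
print(f"toy-model on-axis suppression: ~{raw / residual:.0f}x")
```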
Satellite-based coronagraphs
Coronagraphs in outer space are much more effective than the same instruments would be if located on the ground. This is because the complete absence of atmospheric scattering eliminates the largest source of glare present in a terrestrial coronagraph. Several space missions such as NASA-ESA's SOHO, SPARTAN, and Skylab have used coronagraphs to study the outer reaches of the solar corona. The Hubble Space Telescope (HST) is able to perform coronagraphy using the Near Infrared Camera and Multi-Object Spectrometer (NICMOS), and there are plans to have this capability on the James Webb Space Telescope (JWST) using its Near Infrared Camera (NIRCam) and Mid Infrared Instrument (MIRI).
While space-based coronagraphs such as LASCO avoid the sky brightness problem, they face design challenges in stray light management under the stringent size and weight requirements of space flight. Any sharp edge (such as the edge of an occulting disk or optical aperture) causes Fresnel diffraction of incoming light around the edge, which means that the smaller instruments that one would want on a satellite unavoidably leak more light than larger ones would. The LASCO C-3 coronagraph uses both an external occulter (which casts shadow on the instrument) and an internal occulter (which blocks stray light that is Fresnel-diffracted around the external occulter) to reduce this "leakage", and a complicated system of baffles to eliminate stray light scattering off the internal surfaces of the instrument itself.
Extrasolar planets
The coronagraph has recently been adapted to the challenging task of finding planets around nearby stars. While stellar and solar coronagraphs are similar in concept, they are quite different in practice because the object to be occulted differs by a factor of a million in linear apparent size. (The Sun has an apparent size of about 1900 arcseconds, while a typical nearby star might have an apparent size between 0.0005 and 0.002 arcseconds.)
A stellar coronagraph concept was studied for flight on the canceled Terrestrial Planet Finder mission. On ground-based telescopes, a stellar coronagraph can be combined with adaptive optics to search for planets around nearby stars.
This link shows an HST image of a dust disk surrounding a bright star with the star hidden by the coronagraph.
In November 2008, NASA announced that a planet was directly observed orbiting the nearby star Fomalhaut. The planet could be seen clearly on images taken by Hubble's Advanced Camera for Surveys' coronagraph in 2004 and 2006. The dark area hidden by the coronagraph mask can be seen on the images, though a bright dot has been added to show where the star would have been.
Up until the year 2010, telescopes could only directly image exoplanets under exceptional circumstances. Specifically, it is easier to obtain images when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation. However, in 2010 a team from NASA's Jet Propulsion Laboratory demonstrated that a vector vortex coronagraph could enable small telescopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets using just a 1.5 m portion of the Hale Telescope.
See also
- NASA page with diagrams of coronagraphs
- Kuchner and Traub (2002). "A Coronagraph with a Band-limited Mask for Finding Terrestrial Planets". The Astrophysical Journal 570 (2): 900–908. arXiv:astro-ph/0203455. Bibcode:2002ApJ...570..900K. doi:10.1086/339625.
- optical vortex coronagraph
- NICMOS Coronagraphy
- New method could image Earth-like planets
- Overview of Technologies for Direct Optical Imaging of Exoplanets, Marie Levine, Rémi Soummer, 2009
- "Sun Gazer's Telescope." Popular Mechanics, February 1952, pp. 140-141. Cut-away drawing of first Coronagraph type used in 1952.
- Optical Vectorial Vortex Coronagraphs using Liquid Crystal Polymers: theory, manufacturing and laboratory demonstration Optics Infobase
- The Vector Vortex Coronagraph: Laboratory Results and First Light at Palomar Observatory, IOPscience
- Annular Groove Phase Mask Coronagraph IopScience | http://en.wikipedia.org/wiki/Coronagraph | 13 |
how to identify a value
how a value-system works
six relationships between (or among) values
three perspectives generated by value-systems
As defined here, values-analysis includes description of the relationship between two (or more) values as acted out in particular situations, and it includes explanation of the perspective that is acted out in particular situations.
1.1. Here are two things to remember about the plan:
In order to follow the plan, you must be able to use the definitions given below. Since some of the terms have other meanings than the ones stated in this document, it helps to treat each term as a "technical" term. For instance, we give a precise and limited definition of the term "perspective." If you don't use the term perspective in any other way in this course, it will help you keep the idea of perspective in your mind as we apply it to the analysis of values.
To make a start on the learning-curve for analyzing values, you might find it useful to restate each definition in your own words. Putting the definitions given here into your own words will help you think through the definitions. Thinking of an example for each relationship and perspective will allow you to put the definitions to immediate use. You will see guidelines below for inventing or recognizing appropriate examples. Backing up your definitions with examples is the only practical way you can know if your definitions are workable. If you do these two things and receive appropriate feedback, you will make a good start on the process of analyzing values.
1.2. The material below is divided into three sections:
Basic Definitions. This section contains the definitions of "value" and "value-system," and states the basic principles in their analysis.
Relationships. This section introduces the six relationships between values. These relationships give us the possibilities for determining how values are connected to each other in particular situations.
Perspectives. This section introduces the three perspectives that are demonstrated by people's actions in particular situations.
2.1. First Principle. If anything can be a value, the way we discover values is to look at people's actions. Failure to honor this principle leads to the creation of fantasy values -- that is, values that might exist "on paper" but do not exist as actual facts. Only as people act on values do the values become real. Is "honesty," for instance, an example of a value? NO! Not in itself. It only becomes an example when we talk about some set of actions we might characterize as "honest." VALUE IS A CONCEPT THAT REFERS TO ACTIONS MORE THAN ANYTHING ELSE.
2.2. Second Principle. We must distinguish between the ways in which values are shaped and the ways in which they are demonstrated. Values are shaped by people's needs (basic feelings and desires), the situations in which they find themselves, the projects they undertake, and the beliefs they hold. However, people demonstrate their values by what they do more than by what they say or think or believe or observe or hear. Since actions count more than beliefs, people may fool themselves by thinking their values are what they believe rather than what they do. For most people, there is a gap between actions (which demonstrate "real" values) and beliefs (which demonstrate "ideal" values).
2.3. Third Principle. Applying the action principle to artifacts from the past is simple. Anything handed down to us from the past is itself an action. If a person writes something, that writing constitutes an action. We can read it and determine the values that brought it into being. The same thing applies to music and art. They are the products of actions. We loosely call these sorts of things "primary" sources. A primary source is something that comes to us from the past. Often, we can't observe such things in a very primary form. For instance, we might not know the original language of a writing, so we will have to take the word of some translator. As far as we rely on a CD-ROM, we look at images of artistic works rather than the works themselves. In detecting values in primary sources, it's really important to interpret what you observe. For instance, we shouldn't interpret a thirty foot statue unless we are looking at a thirty foot statue. If we are looking at a picture on a computer screen, interpret it. It's real. Everything else is a fake.
2.4. Fourth Principle. The final principle of identifying values -- and this applies as well to giving examples of values -- is that values only occur in particular situations as a person does some action. You can't detect a value when there's nothing to detect. We can call this principle the situation principle. It reminds us never to talk about a value apart from some description of a specific action.
|The four principles for identifying values are: 1) the action principle, 2) the demonstration principle, 3) the source principle, and 4) the situation principle. There's a lot of overlap among these principles. Together, they provide a basis for talking about values.|
3.1. First Characteristic: Consistency of the Values in the System. Values in a system are reasonably, not totally, consistent. Even an individual cannot (usually) live by a totally consistent value-system, and groups identified by value-systems show greater variances than individuals. However, the inconsistency occurs within a fairly narrow range.
Let's say a person values a clean car. One day she might wash and wax and vacuum it. Then she might not do anything at all for six weeks -- demonstrating by her actions that she values a dirty car. But what if she was in the hospital for the six weeks and cleaned her car as soon as she got out? Granted, she did not act with total consistency, but wouldn't you say her overall actions showed that she really did value a clean car? If so, she would perform other actions that go along with her value of a clean car and that show connected values -- like buying car-soap, for instance, or car-wax. Reasonable consistency is the first characteristic of a value-system.
3.2. Second Characteristic: Relative Importance of the Values in the System. Values have relative, not absolute, importance. Their role in the actions generated by the value-system depends on their relationships to each other as acted out in specific situations. Such relationships are easily seen when a person makes an either-or choice. The value chosen becomes more important, and the value rejected becomes less important. But the rejected value may remain as part of the system because it will be chosen in different circumstances.
3.3. Third Characteristic: Hierarchy of the Values in the System. The values in a particular value-system may be diagrammed as a pyramid. We usually think of hierarchy in terms of higher and lower. Frequent actions based on a value put the value higher in the system than values that are rarely acted on. Often, the difference between actions and beliefs plays a crucial role in determining the location of values in a system. For example, many people believe that exercise is good for their health. But the only ones who really place high value on exercise are the ones who exercise a lot. If a person takes the elevator instead of walking the stairs (or vice-versa), it shows something about the person's values. Another issue of relevance to the question of hierarchy is the problem we frequently encounter in determining whether one value is higher than another. Let's say a group of people take a shower every morning and brush their teeth. Is taking a shower or brushing their teeth the higher value? Hard to say, right? Here's where the model of a pyramid comes in handy. When you can't decide on the question of higher-lower, you can just put the values side by side.
|A value-system is a cluster of reasonably consistent values whose position in the system depends on their relationships to each other. People make decisions on the basis of their value-judgments, their evaluations of the relative worth of whatever they encounter in their lives. Value-judgments may be difficult because everyone has many values. And the same values in a value-system may relate to each other in different ways as circumstances require action. The idea of relationship is fundamental in our discussion of values. So let's consider it right now.|
Relationships between Values Only Occur in Particular Situations. Two values in themselves do not have a "relationship" with each other. They only relate when they are connected to each other by the actions of a person or group in some specific circumstance. The only way to explain or give an example of a relationship is to look at some action. Once you observe and describe an action, you can identify the values on which the action is based. Then (and only then) you can determine how they are connected as shown by the action. (Remember that a "primary" source is an action, so you can discuss the relationships of values in, or even between, "primary" sources.)
|A relationship is a connection, or an interaction, between two values. For purposes of analysis, we will identify six relationships. This is the vocabulary you should use when analyzing values. We could add many more relationships, and reducing the number to six makes for a very simplified system. However, experience indicates that if you use the six with imagination, they will cover almost any action you try to analyze.|
4.1.1. We may refer to the higher value that integrates another value with itself as an integrating value. Value-systems always have a value that identifies the system by integrating the other values in the system. There could be a value-system of "order," or a value-system of "loyalty," or a value-system of "bureaucracy." It depends on what value identifies the system as a whole. You will be hearing quite a bit about integrating values in this course, because we identify the cultural matrixes covered in the course by their integrating values. Mapping a value-system is a complex project. With the image of a pyramid, the integrating value stands on the point at the top, and all the other values appear below -- connected by a variety of relationships.
4.1.2. Since the integrating value is always the most important, or highest, value in a value-system, the integrating value will always be acted out when it becomes involved in a situation. The value that functions to integrate other values is the one value in a value-system that will never be sacrificed (or chosen against) for any reason. It is always of absolute worth. It exists for no other reason than itself. Such a value has been described as the ultimate (most important) value in the system, the apex (at the top of the hierarchy) of the system, or the pivot of the system (that around which the rest of the system revolves). In a relationship between two values, the integrating value is always the more important value. It never "assists" the other value.
4.1.3. Since the integrating value holds together (integrates) the other values into a coherent system, it gives consistency to the system, and it establishes the relative importance of the other values in the system. In practical terms, this value may be determined by its inconclusiveness. It is the value in the system that contributes most to the system's coherent functioning and organization of experience. Another way to speak of "inconclusiveness" is to say that the integrating value can only be defined by looking at the overall system. For instance, "order" could mean a lot of things. The only way to limit the definition is to examine the value-system as a whole.
4.2.1. The supporting value may be identified as a utilitarian value. A utilitarian value always supports some other value. Since the two values in the relationship do not have equal importance, we cannot say that they support each other. (This phrase might be used for a different relationship.) The supporting value has less importance, and the value which is supported has greater importance.
4.2.2. If a person edges his lawn to make it look better, the value of edging the lawn has a utilitarian relationship to the value of making the lawn look better. Why? Because the value of edging the lawn supports the value of making the lawn look better. The value of making the lawn look better is the higher value because it is more inclusive. Other utilitarian values could include mowing the lawn, watering the lawn, fertilizing the lawn.
4.6.1. There must be a factor in addition to the two values that prevents a person from acting on the basis of both values. The most common "outside factors" are time, energy, and money. A person may not have enough time, or enough energy, or enough money to act on both values in a particular situation.
4.6.2. The action that results from the person's values-decision reveals "substitution." The two values “overlap” because they are both "good" values. In a given situation, however, an action based on one of the values takes the place of an action based on the other value. That is, one value substitutes for the other. (Think of a basketball player who enters the game as a substitute for another player.)
4.6.3. Consider this situation:
Henry has a problem with his twin daughters, Mae and Anna. They are Seniors at Eastern Hills High School. Mae entered the "Miss Big E" pageant for the third year in a row. She had never made it to the finals, but this year looked different. People even said she had a chance to win! When she gives her performance next Friday night, it will be the most important night of her life. Anna is a talented writer. She entered the "Lesser Metroplex Essay Contest" for the third year in a row. She had never made it to the finals, but this year is different. She is not only in the finals, but people even say she has a great chance to win! When she reads her essay next Friday night, it will be the most important night of her life. Henry has a big problem. Although he loves his daughters equally, he can't go to Mae's performance and to Anna's essay contest.
What are the two values in this example? What is the outside factor that forces Henry to make a tough decision? What will Henry do?!
First, perspectives always involve a response to some "other." That is, perspectives are interactive by their very nature. They provide modes for people to "look out" to the world around them. Encounters with others may lead people to act out their perspectives, or the attitudes that people hold may give rise to actions that display the attitudes. Perspectives are simplest to describe for individuals, but we can also identify perspectives as they apply to cultural matrixes if we provide appropriate qualifications.
Second, perspectives result from people's actions more than their thoughts. People tend to see themselves as holding the same perspective in every situation. This is not so, but it seems to provide a comfortable self-image. A perspective is an "attitude" that people hold toward the "other" as they act out their values in specific situations. Individuals usually have a dominant perspective that guides the way they want to respond to situations. It would be exceedingly rare to find an individual who is totally consistent in acting out his/her ideal perspective. People actually act out all three perspectives, depending on their responses to particular situations. This is normal. And healthy!
The following overview states each perspective, offers a definition and a phrase, and mentions some other terms in the word-field.
5.1.1. Ethnocentrism makes judgments about the "other." The other is wrong if it's different. Ethnocentric judgments make no critical appraisal of evidence or "facts." If "my" way of doing or looking at things is the only logical or correct way, there is no need or desire to examine alternatives. We act in an ethnocentric fashion whenever we try to get others to conform to our standards without seriously considering alternatives, or whenever we close our mind to other points-of-view, or whenever we think ill of (and then act ill toward) others without evidence sufficient to justify the thought or action.
5.1.2. Closing one's mind can be a dangerous practice. In The Nature of Prejudice (1954), Gordon W. Allport examines the strong connection between such closure and the building of prejudice. Allport defines prejudice as: "thinking ill of others without sufficient warrant." /The information cited here comes from pp. 6-15 of the abridged ed. (Doubleday Anchor Books, [no.] A149. Garden City, NY: Doubleday, 1958)./ The problem, as Allport explains it, is that thoughts lead to action. He describes five degrees of negative action from the least energetic to the most:
Antilocution. Most people who have prejudices talk about them. With like-minded friends, occasionally with strangers, they may express their antagonism freely. But many people never go beyond this mild degree of antipathetic action.
Avoidance. If the prejudice is more intense, it leads the individual to avoid members of the disliked group, even perhaps at the cost of considerable inconvenience. In this case, the bearer of prejudice does not directly inflict harm upon the group he dislikes. He takes the burden of accommodation and withdrawal entirely upon himself.
Discrimination. Here the prejudiced person makes detrimental distinctions of an active sort. He undertakes to exclude all members of the group in question from certain types of employment, from residential housing, political rights, educational or recreational opportunities, churches, hospitals, or from some other social privileges. Segregation is an institutionalized form of discrimination, enforced legally or by common custom.
Physical Attack. Under conditions of heightened emotion prejudice may lead to acts of violence or semiviolence. An unwanted family may be forcibly ejected from a neighborhood, or so severely threatened that it leaves in fear. Gravestones in Jewish cemeteries may be desecrated. The Northside's Italian gang may lie in wait for the Southside's Irish gang.
Extermination. Lynchings, pogroms, massacres, and the Hitlerian program of genocide mark the ultimate degree of violent expression of prejudice.
5.1.3. To some extent, or one might say, in certain situations, we are all ethnocentric. We must recognize that whenever we desire others to conform to our standards, and this desire is combined with the refusal to consider alternatives, we are holding an ethnocentric perspective. In many situations, ethnocentrism poses no particular dangers and may even be the "best" perspective to hold. Think about parents raising their young children, for instance. But Allport alerts us to the dangerous side-effects of ethnocentrism. Whenever we find ourselves acting in an ethnocentric fashion, we need to ask if our actions truly represent the best possibility. (Think again about parents raising their young children.) When we decide the answer is "NO," then we will usually turn to a relativistic perspective.
5.2.1. The term most closely associated with relativism is open-mindedness, usually understood in the sense of "non-judgmental." This association is so strong that a refusal to make judgments is often mistaken for freedom from prejudice.
5.2.2. Much is often said in praise of relativism. Three commonly stated reasons for its popularity are:
There is no empirical way to demonstrate that values are not dependent on specific cultural settings (because decisions about "universal" values must take into account every human point-of-view that's ever been expressed, and this cannot be done at the present time).
Relativism involves respect for diversity among human individuals and groups, which provides some antidote to ethnocentrism and prejudice without requiring individuals to reconsider their own values. “Multiculturalism” expresses this benefit.
Relativism provides a "scientific" (impersonal, objective, functional) framework for moral reasoning because it replaces assumptions or generalizations about what is valuable with the affirmation that values simply reflect the interests of particular individuals or societies and therefore are open to straightforward empirical investigation.
5.2.3. These are good reasons, and in many situations relativism is the most practical perspective, either because it helps one avoid ethnocentrism or there is no realistic way (or reason) to try to achieve tolerance, the third perspective. Yet there are three questions we should raise about the value of relativism:
First, we should note that the term most closely associated with relativism is "open-mindedness" and we should ask if open-mindedness is a good thing. When people describe themselves as open-minded, doesn't the concept usually derive from a refusal to make value-judgments about others? "Open-mindedness" becomes a synonym for "non-judgmental." Gordon W. Allport /see pp. 19-22/ doubts that in a literal sense open-mindedness is even possible. He remarks that: "Open-mindedness is considered to be a virtue. But, strictly speaking, it cannot occur. A new experience must be redacted into old categories. We cannot handle each event freshly in its own right. If we did so, of what use would past experience be? Bertrand Russell, the philosopher, has summed up the matter in a phrase, 'a mind perpetually open will be a mind perpetually vacant'." Allport's comments raise a question: are there really infinite limits to what someone can accept? If we can accept alternative lifestyles, or alternative cultures, will we accept child molesters? Let's say the answer is "no." Then someone walks up to us and asks "why not?" What do we reply? Remember our position as relativists: values are simply the product of particular situations.
Following Allport, we have argued that prejudice is dangerous. As human beings, we need to strive for freedom from prejudice. A second question we must raise about relativism is whether a non-judgmental spirit can lead to genuine freedom from prejudice. If relativism is most commonly expressed as a lack of critical judgment, defended on the grounds that there is no support for making such judgments, then the issue is whether the refusal to make judgments is equivalent to a lack of prejudice. In his novel Shibumi, Trevanian notes the confusion: "... [she] expressed her lack of critical judgment as freedom from prejudice ..." /p. 26/. If freedom from prejudice depends on not making critical judgments, and if we all have limits to what we find acceptable, aren't the limits necessarily based on ethnocentrism? If so, then relativism provides weak medicine against prejudice, or maybe no medicine. (Do you find the triumph of relativism ironic in an educational climate in which "critical thinking" is one of the most positive buzzwords? We really should investigate the meaning of "critical thinking" in contemporary college education!)
A third question to raise about relativism is what positive good it does. In many cases, it does negative good, since it does combat ethnocentrism until a limit of acceptability has been reached, and it does propel the twin-engined craft of diversity and multiculturalism, seen in our [United States] society as a ship that will take us to calmer waters than we have sailed in the past. However, if the stance of a relativist is that differences are ok, how does relativism provide a logical incentive for change in a society (except to partially move away from ethnocentrism)? Think of the situation in economic terms. If one person is rich and another poor, the rich person says: "I can accept the difference." This difference has increased at an astonishing rate as the United States has become more of a relativistic society over the last fifteen years.
5.2.4. Consider this situation:
Fred had grown up in a small, midwestern town where the people thought all outsiders were degenerate. Imagine how shocked the townspeople were when Fred decided to go to Texas Wesleyan University! At Wesleyan, Fred moved into Stella Russell Hall. At first Fred found his Buddhist roommate very disgusting. He tried to get a different roommate, but he couldn't. Both boys were very lonely. After a few weeks, they began to talk to each other and became good friends. Fred even attended a few meetings of the Buddhist Student Union with his roommate. At one meeting he said, "I couldn't believe what you all do. But I've learned to respect your beliefs, and I'm glad I met you." When Fred graduated from Wesleyan, he went back to his home town and tried to explain his new feelings to the townspeople. They said, "You've been corrupted. Those Buddhists are wicked. They aren't anything like us." No one would even speak to Fred for the next three months. He moved away from the town and never went back.
What is Fred's perspective in respect to his roommate? What is the perspective of the townspeople? What is Fred's perspective in respect to the townspeople?
5.3.1. Critical judgment plays a crucial role in considerations of tolerance. The starting-point of tolerance is to consider more than one point-of-view. To consider another point of view is to encounter the "other." Like ethnocentrism, tolerance makes judgments about the "other." The difference is that ethnocentrism makes judgments based solely on the point-of-view of the ethnocentric person. Tolerance, on the other hand, bases its judgments on a rational consideration of all available evidence. Tolerance knows that its judgments are provisional, because all the evidence is not in. Ethnocentrism knows that its judgments are right, because they represent the only "logical" way of thinking. Tolerance uses logic to question all judgments, especially its own.
5.3.2. From its starting-point assumption about standards that are unknown, the perspective of tolerance goes on to ask a question: what is the acceptable amount of variation from these standards? This question provides the key to linking the idea of tolerance (as a perspective) to the many definitions of "tolerance" in the dictionary. "Variation from a standard" is an important meaning for the word "tolerance." Now you might be thinking it's really crazy to ask about the amount of legitimate variation from standards that one can't even define. If you do entertain this thought, you have hit on both the appeal and the difficulty of the idea of tolerance.
5.3.2.1. The concept of tolerance comes to us from Europe in the sixteenth through eighteenth centuries. (In English, they called it "toleration" back then.) Those were exciting times for European intellectuals, as discoveries of distant lands and unimagined peoples challenged any easy assumptions about human nature and universal standards for it. Europeans encountered information about peoples whose ways were quite different from their own. Yet they did retain the faith that human beings do have important values in common and that the universal standards of behavior could be known with enough investigation. (Montaigne, that great precursor of the European Enlightenment, wrote of this perspective in his essay "About Cannibals." This essay articulates both the main hope and the main difficulty with the idea of tolerance.) In discussing toleration, Europeans did not assume that their standards were universal. Instead, they assumed that their standards and other peoples' standards should be examined through the use of reason. They viewed toleration as a quest.
5.3.2.2. If the perspective of tolerance takes one on a quest, or a search, for universal standards and the acceptable amount of variation from them, the quest is limited in two ways:
The quest is an ongoing investigation of how things are, not an ongoing assumption about how things ought to be. This is why the ultimate goal of the quest -- to identify the standards of behavior that apply to all human beings, any time, any place -- is unreachable. One must settle for intermediate goals. In order to hold a perspective of tolerance, people have to practice tolerance (that is, act in a tolerant manner by investigating alternative possibilities).
Acting in a tolerant manner means more than evaluating information about the "other." Tolerance attempts to use reason to evaluate the merits of values, actions, and beliefs. It attempts to establish "rules of reason" that may be used to form arguments and make comparisons. Such rules exist in the form of logic -- the discipline that provides guidelines for evaluating arguments. However, evaluation -- even when the evaluation is fair -- is not enough. Tolerance reaches for something deeper, something more human.
5.3.3. Consider this story of two talks. Martin Buber, the Jewish existentialist philosopher, tells it in his book I and Thou. Of the two talks he says:
One apparently came to a conclusion, as only occasionally a talk can come, and yet in reality remained unconcluded; the other talk was apparently broken off and yet found a completion such as rarely falls to the lot of discussions. Both times it was a dispute about God, but each time of a very different nature.
5.3.3.1. In the first talk, Buber engaged in a discussion with a worker who rejected religious belief using the words of the French astronomer Laplace: "I have had the experience that I do not need this hypothesis 'God' in order to be quite at home in the world." Buber used rational argumentation to refute the man's atheism and then notes:
When I was through ... the man ... raised his heavy lids, which had been lowered the whole time, and said slowly and impressively, "You are right." I sat in front of him dismayed. What had I done? I had led the man to the threshold beyond which there sat enthroned the majestic image which the great physicist, the great man of faith, Pascal, called the God of the Philosophers. Had I wished for that? Had I not rather wished to lead him to the other, Him whom Pascal called the God of Abraham, Isaac, and Jacob, Him to whom one can say Thou?
Since Buber used reason to evaluate the merit of a belief, he moved beyond the perspective of relativism -- which might attempt to describe a belief. The descriptive process is crucial and has "academic" integrity. But it isn't tolerance. Buber moves beyond description by arguing against the worker's point of view. He still did not achieve tolerance. Why? Buber showed no interest in or willingness to change. He only wanted to change the other man. In other words, he did not take the other man seriously.
5.3.3.2. Buber then describes a different kind of conversation. In the second talk, Buber and an older man whom he greatly admired argued about religion and never came to verbal agreement with each other. The older man criticized Buber for clinging to the use of the word God: "What you mean by the name of God is something above all human grasp and comprehension, but in speaking about it you have lowered it to human conceptualization. What word of human speech is so misused, so defiled, so desecrated as this! All the innocent blood that has been shed for it has robbed it of its radiance. All the injustice that it has been used to cover has effaced its features. When I hear the highest called God, it sometimes seems almost blasphemous." Buber writes:
"Yes," I said, "it is the most heavy-laden of all human words. None has become so soiled, so mutilated. Just for this reason I may not abandon it. Generations of men have laid the burden of their anxious lives upon this word and weighed it to the ground: it lies in the dust and bears their whole burden. The races of man with their religious factions have torn the word to pieces; they have killed for it and died for it, and it bears their fingermarks and their blood. Where might I find a word like it to describe the highest? If I took the purest, most sparkling concept from the inner treasure-chamber of the philosophers, I could only capture thereby an unbinding product of thought. I could not capture the presence of Him whom the generations of men have honored and degraded with their awesome living and dying. I do indeed mean Him whom the hell-tormented and heaven-storming generations of men mean. Certainly, they draw caricatures and write 'God' underneath; they murder one another and say 'in God's name'. But when all madness and delusion fall to dust, when they stand over against Him in the loneliest darkness and no longer say 'He, He' but rather sigh 'Thou', shout 'Thou', all of them the one word, and when they then add 'God', is it not the real God whom they all implore, the One living God, the God of the children of man? Is it not He who hears them? ... We cannot cleanse the word 'God' and we cannot make it whole; but, defiled and mutilated as it is, we can raise it from the ground and set it over an hour of great care."5.3.4. Take Buber's story for what it is worth, but please notice that it's a story about meaningful communication and the opposite. It's a story that reaches deeply into the idea of practicing tolerance. We can now use Buber's second conversation as a launching-pad for stating four (somewhat overlapping) "rules" for the practice of tolerance:
It had become very light in the room. It was no longer dawning, it was light. The old man stood up, came over to me, laid his hand on my shoulder and spoke: "Let us be friends." The conversation was completed. For where two or three are truly together, they are together in the name of God.
Although tolerance may involve the analysis of other groups by comparing their values with the values of one's own group, practicing tolerance seems to function most fully when it involves meaningful communication between individuals. The failure to achieve meaningful communication generally rests on the inability of the persons involved to respect and trust each other. True dialog and honest encounter mean willingness to respect each other and to share deep and controversial ideas without an attempt to minimize disagreements.
Accepting other beliefs and values as valid for the "other" simply because they belong to the "other" is relativism. Tolerance may accept, reject, or suspend judgment about the "other," but only after a rational analysis of the "other" and by a rational comparison with one's own viewpoint. Tolerance is trying to enlarge one's own view by examining alternative views and making critical judgments. To be tolerant, people have to be willing to change if they encounter data that shows them they need to change.
If two people exchange points-of-view with each "other," expose their differences, and part as friends -- they may be practicing tolerance. It depends. If they both use reasoned arguments, they use the tolerant approach because they are in dialog with each other. (Relativism may make a reasoned analysis of the "other," but it does not enter into a dialog.) Buber's first conversation illustrates this approach. The difference in the second conversation is that -- since neither man showed the other to be wrong through the use of argument -- the particular conversation ended with a movement to a "higher" level of being. Buber expresses this religiously and mystically, invoking the "name of God." This expression reveals a concept that may be understood in human terms: Buber says the two men are "truly together." Seemingly, they only discover this level of being by participating in the conversation -- it is not a quality that either can bring to the conversation. It's a matter of size! Through the second conversation Buber becomes a "bigger" person even though he does not change his mind. Do you see how this is so? Can you formulate the concept in your own words?
We should refuse to make naive claims about "universal" values -- this would be ethnocentrism -- but we can practice tolerance by trying to enlarge our own views through honest, empathetic, and compassionate dialog with different views. This doesn't necessarily have to occur through personal conversations, though personal conversations are one way of achieving such dialog. We could engage in dialog with "primary" sources -- such as art, music, and writings -- from the past. Because the kind of dialog required for tolerance focuses on an exploration of differences, it may be painful, and it is often to be avoided. Both tolerance and relativism imply respect for alternate viewpoints. The difference is that tolerance requires us to ask if the beliefs and values of others might be useful in our own lives.
5.3.5. The phrase that captures the essence of tolerance is: "what is true for everyone must be true for me." It's easy to misunderstand this phrase. It DOES NOT MEAN "going along with the crowd." In the phrase "everyone" means "everyone," not some particular group.
5.3.6. The term most closely associated with tolerance is inclusiveness, but again one must beware of misunderstanding the concept. Tolerance never means accepting other beliefs and values simply because they belong to someone else. And it is not a matter of "political correctness" (as the idea of inclusiveness has often come to imply). Tolerance is trying to enlarge one's own view by examining other beliefs and values, and making critical judgments about them.
5.3.7. We hear the word "tolerance" used all the time. Usually we hear it as a synonym for "relativism." In this we encounter a final, widespread misunderstanding. "I can tolerate that," or "I have a lot of tolerance," or "I am a very tolerant person." In all these phrases the term "tolerance" is used as a quasi-substitute for relativism, that is for a perspective that tries to avoid ethnocentrism by assuming a non-judgmental position. In the common use of the word, we might find considerable irony. Nowadays, many people have a high degree of respect for the word "tolerance" while at the same time rejecting tolerance as a legitimate goal. For many, the rejection stems from thoughtlessness and ignorance. For some, the rejection stems from the fear that all evaluations are ethnocentric. Either way, the ideal of tolerance as described here is no longer favored by many.
5.3.8. For a final example of the term "tolerance" used in the sense described here, we turn to the philosopher Alfred North Whitehead /Adventures of Ideas, pp. 51-52/. He summarizes Plato's perspective by saying that for Plato all (coherent) points-of-view have something to contribute to our understanding of the universe. [Therefore, we as individuals need to consider the personal consequences of other points-of-view.] Plato further argues that all points-of-view involve omissions of evident facts. [Therefore, our own point-of-view can never be comprehensive.] Whitehead then concludes: "The duty of tolerance is our finite homage to the abundance of inexhaustible novelty which is awaiting the future, and to the complexity of accomplished fact which exceeds our stretch of insight."
In population ecology, the population growth rate measures the change in the number of individuals in a population over a specified length of time. Patterns of population growth can be shaped by a variety of factors, and so population biologists have developed different mathematical expressions, or models, to describe population growth rate. One of the most basic expressions of population growth rate is the exponential model. Mathematically, the exponential model is:
dN/dt = rN
In the exponential model, change in population size (dN) over a specified, and usually short, interval of time (dt) is proportional to the product of the per capita growth rate r (where r is the number of individuals added to or subtracted from the population, per individual already present in the population, per unit time) and the population size N (where N is the number of individuals present in the population). A population that conforms to this model of population growth will grow exponentially when r > 0, will decline exponentially when r < 0, and will remain constant in size when r = 0, which will occur only when births and deaths are exactly balanced. Exponential growth is best visualized in a graph that plots how the population size changes over time. This graph is often referred to as the "J" curve.
In populations growing exponentially, (1) the number of individuals in a population increases over time, (2) the population growth rate (dN/dt) increases over time (even when the per capita growth rate r remains constant), and (3) the rate at which the population growth rate increases over time, increases over time. Thus, populations growing exponentially will keep growing in size at a faster and faster rate. For example, some bacteria have generations every 20 minutes under optimal conditions. Thus, if we started with a single bacterium, then after only 36 hours the world would be covered by a layer of bacteria one foot thick! Obviously, exponential growth is not a very realistic model of population growth for most species (but see the pattern of human population growth).
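The closed-form solution of dN/dt = rN is N(t) = N(0)e^(rt), so a population with a fixed doubling time grows by a factor of 2^(t/T). The short Python sketch below checks the order of magnitude of the bacteria example; the per-cell volume (about one cubic micrometre) and Earth's surface area are rough assumed figures, so this is only a back-of-envelope check.

```python
import math

doubling_time_min = 20.0          # one generation every 20 minutes (from the text)
hours = 36.0
generations = hours * 60.0 / doubling_time_min      # 108 doublings
cells = 2.0 ** generations                          # descendants of one bacterium

cell_volume_m3 = 1e-18            # ~1 cubic micrometre per cell (assumed)
total_volume_m3 = cells * cell_volume_m3

earth_surface_m2 = 5.1e14         # Earth's total surface area
layer_thickness_m = total_volume_m3 / earth_surface_m2

print(f"{generations:.0f} generations -> {cells:.2e} cells")
print(f"layer over the whole Earth: ~{layer_thickness_m:.2f} m thick "
      f"(~{layer_thickness_m / 0.3048:.1f} ft)")
```

With these rough inputs the layer comes out on the order of a foot or two deep, which is the point of the example: after 36 hours of unchecked doubling the numbers are absurdly large however the details are rounded.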
Why is the exponential population growth model unrealistic?
In the exponential model, the per capita growth rate is independent of population size (density independent). However, this is unlikely because both per capita birth rate (b) and per capita death rate (d) are expected to change with population size (they are density dependent). For example, competition for resources should be greater as population size increases, leaving fewer resources for each reproducing individual. As a consequence, per capita birth rate should decrease as population size increases (negatively density dependent). Similarly, increased competition for resources should result in higher per capita death rates as population size increases because individuals are more likely to die of starvation. In addition, per capita death rates should increase with population size because (1) diseases are transmitted more easily in dense populations and (2) predators may be attracted to regions of high prey density. Thus, per capita death rates should be positively density dependent. Overall, as populations increase in size, per capita birth rates are expected to decline and per capita death rates are expected to increase.
Because per capita growth rate is a function of the per capita birth and death rates (r = b - d), exponential growth is not a realistic model for populations in which birth and death rates vary as a function of population size, as will generally be the case when resources are not unlimited. Most populations seem to exhibit density-dependent birth and death rates, and thus the exponential model of population growth is not broadly useful in describing population growth. However, the exponential model is useful in certain cases, for example in describing and predicting population growth when a species is introduced to a new environment with abundant resources and without competitors or predators. The exponential model of population growth also serves as a foundation for other, more realistic models such as the logistic growth model.
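To see why the logistic model is the more realistic starting point, the two growth laws can be integrated side by side. The sketch below (Python) uses simple Euler steps; the values of r, K, the starting population and the step size are arbitrary illustrative choices, not estimates for any real population.

```python
r = 0.5          # per capita growth rate (per unit time) -- illustrative
K = 1000.0       # carrying capacity for the logistic model -- illustrative
dt = 0.01
steps = 2000     # simulate 20 time units

n_exp, n_log = 10.0, 10.0
for step in range(1, steps + 1):
    n_exp += r * n_exp * dt                      # dN/dt = rN
    n_log += r * n_log * (1.0 - n_log / K) * dt  # dN/dt = rN(1 - N/K)
    if step % 500 == 0:
        t = step * dt
        print(f"t={t:5.1f}  exponential N={n_exp:12.1f}  logistic N={n_log:8.1f}")
```

The exponential trajectory keeps accelerating without bound, while the logistic trajectory levels off near the carrying capacity K, which is the qualitative behaviour most real populations show once resources become limiting.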
- Campbell, N.A., J.B. Reece, and L.G. Mitchell. 2006. Biology. Addison Wesley Longman, Inc., Menlo Park, CA. ISBN: 080537146X
- Gotelli, N. J. 2001. A primer of ecology. Sinauer Associates, Inc. Sunderland, MA. ISBN: 0878932704
- Raven, P.H., G.B. Johnson, J.B. Losos, K.A. Mason, and S.R. Singer. 2008. Biology, 8th edition. McGraw Hill, New York, NY. ISBN: 0073227390
The basic principle behind GPS is the measurement of distance between satellites and the receiver. The distance to at least 3 satellites must be known in order to find out a position. Satellites and receivers generate duplicate radio signals at exactly the same time. As satellite signals travel at the speed of light (186,000 miles per second), they take a few hundredths of a second to reach the GPS receiver. This time difference and the speed at which the signal travels are used in the equation to find out the distance between the GPS receiver and the satellite.
Speed x Time = Distance
So, if it takes 0.09 of a second for a satellite's signal to reach the GPS receiver, the distance between the two must be 16,740 miles (186,000 miles per second x 0.09 seconds = 16,740 miles). The GPS receiver must be located somewhere on an imaginary sphere that has a radius of 16,740 miles.
If it takes 0.08 seconds for the signal to reach the GPS receiver from a second satellite then the receiver must be located somewhere on an imaginary sphere that has a radius of 14,880 miles, and where the two spheres intersect.
Supposing it takes 0.07 seconds for the receiver to receive a signal from a third satellite, then the GPS receiver must be located somewhere on a sphere that has a radius of 13,020 miles and where the three spheres intersect.
Now there will be two location possibilities, but one of these is located in space and is mathematically discarded by the GPS receiver as impossible.
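The sphere-intersection idea above can be phrased as a small nonlinear least-squares problem: find the position x whose distance to each satellite s_i matches the measured range d_i. The sketch below (Python with NumPy) uses a few Gauss-Newton iterations; the satellite coordinates and the receiver position are made-up illustrative numbers, and the real GPS solution must also estimate the receiver clock error, which is ignored here. Starting the iteration near the Earth's surface naturally selects the ground solution rather than the mirror solution out in space.

```python
import numpy as np

# Made-up satellite positions (miles, Earth-centred frame) and a made-up
# true receiver position; in reality these come from the broadcast ephemeris.
sats = np.array([[15000.0,   7000.0, 12000.0],
                 [-9000.0,  14000.0, 10000.0],
                 [ 4000.0, -12000.0, 15000.0]])
true_pos = np.array([2000.0, 1500.0, 3200.0])
dists = np.linalg.norm(sats - true_pos, axis=1)   # "measured" ranges

x = np.array([0.0, 0.0, 4000.0])   # initial guess near the Earth's surface
for _ in range(10):                # Gauss-Newton iterations
    diff = x - sats                              # vectors satellite -> receiver
    ranges = np.linalg.norm(diff, axis=1)
    residuals = ranges - dists                   # how far off each sphere we are
    J = diff / ranges[:, None]                   # Jacobian of the range equations
    step, *_ = np.linalg.lstsq(J, -residuals, rcond=None)
    x = x + step

print("recovered position:", np.round(x, 1))
print("true position:     ", true_pos)
```

With a fourth satellite the same scheme can also solve for the receiver clock offset, which is why practical receivers need at least four satellites in view.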
Not only do the satellite signals contain data that the GPS receiver uses to calculate distance, but also data that enables the receiver to make adjustments needed to get an accurate position. Atmospheric data is sent in the signal because the receiver has to account for delays in the time it takes for the signal to reach it. These decreases in the speed of the signal are caused by the ionosphere and the troposphere.
This information is usually used in conjunction with software on a laptop or PDA (Personal Digital Assistant) in the form of a map, to show the GPS receiver user their location.
Sometimes astronomy is a bit like fishing: patience is the cardinal virtue. A couple of years ago astronomers trained the Hubble Space Telescope on a fairly empty patch of sky and left it there for ten days, trying to catch whatever photons straggled in. The result was the Hubble Deep Field, a series of images that doubled astronomers' estimates of the number of galaxies in the universe to at least 50 billion.
Now researchers in Hawaii have done something similar. Using a new instrument that can peer through the dust that obscures many galaxies, Amy Barger and her colleagues at the University of Hawaii built up images of small parts of the sky over the course of two weeks. They've uncovered evidence of a population of never-before-seen galaxies--so many, in fact, that taken together they shine as brightly as all the rest of the known galaxies in the universe.
The new instrument on the James Clerk Maxwell Telescope on Mauna Kea allows astronomers to map areas of the sky in wavelengths some 1,000 times longer than visible light. This is useful for finding galaxies because while large clouds of dust will scatter or absorb visible light, they transform light emitted from the galaxies into infrared light. Observing just two small patches of the sky, Barger and another astronomer found three hidden galaxies. If this tally is typical of the rest of the sky--and Barger sees no reason why it shouldn't be--then there are at least 40 million such galaxies in the universe. What's more, Barger says, these galaxies seem to emit 100 times as much energy as the galaxies we know about.
The unusual brightness might be due to bursts of star formation, says Barger. She'd like to point the Hubble at these galaxies to detect visible light coming from them, which would help her determine their distance--and their age. "The problem is that there's very little visible light making it through," Barger says. If the galaxies turn out to be very old, a distinct possibility, it may mean that astronomers will have to revise not only their count of the number of galaxies in the universe but the history of galaxies as well.
In classical mechanics, the Kepler problem is a special case of the two-body problem, in which the two bodies interact by a central force F that varies in strength as the inverse square of the distance r between them. The force may be either attractive or repulsive. The "problem" to be solved is to find the position or speed of the two bodies over time given their masses and initial positions and velocities. Using classical mechanics, the solution can be expressed as a Kepler orbit using six orbital elements.
The Kepler problem is named after Johannes Kepler, who proposed Kepler's laws of planetary motion (which are part of classical mechanics and solve the problem for the orbits of the planets) and investigated the types of forces that would result in orbits obeying those laws (called Kepler's inverse problem).
For a discussion of the Kepler problem specific to radial orbits, see: Radial trajectory. The Kepler problem in general relativity produces more accurate predictions, especially in strong gravitational fields.
The Kepler problem arises in many contexts, some beyond the physics studied by Kepler himself. The Kepler problem is important in celestial mechanics, since Newtonian gravity obeys an inverse square law. Examples include a satellite moving about a planet, a planet about its sun, or two binary stars about each other. The Kepler problem is also important in the motion of two charged particles, since Coulomb’s law of electrostatics also obeys an inverse square law. Examples include the hydrogen atom, positronium and muonium, which have all played important roles as model systems for testing physical theories and measuring constants of nature.
The Kepler problem and the simple harmonic oscillator problem are the two most fundamental problems in classical mechanics. They are the only two problems that have closed orbits for every possible set of initial conditions, i.e., return to their starting point with the same velocity (Bertrand's theorem). The Kepler problem has often been used to develop new methods in classical mechanics, such as Lagrangian mechanics, Hamiltonian mechanics, the Hamilton–Jacobi equation, and action-angle coordinates. The Kepler problem also conserves the Laplace–Runge–Lenz vector, which has since been generalized to include other interactions. The solution of the Kepler problem allowed scientists to show that planetary motion could be explained entirely by classical mechanics and Newton’s law of gravity; the scientific explanation of planetary motion played an important role in ushering in the Enlightenment.
The central force between the two bodies varies as the inverse square of the distance r between them:

F = (k/r^2) r̂

where k is a constant and r̂ represents the unit vector along the line between them. The force may be either attractive (k<0) or repulsive (k>0). The corresponding scalar potential (the potential energy of the non-central body) is:

V(r) = k/r
Solution of the Kepler problem
The equation of motion for the radius r of a particle of mass m moving in the central potential V(r) is given by Lagrange's equations:

m d^2r/dt^2 - m r ω^2 = m d^2r/dt^2 - L^2/(m r^3) = -dV/dr

and the angular momentum L = m r^2 ω is conserved. For illustration, the first term on the left-hand side is zero for circular orbits, and the applied inwards force dV/dr equals the centripetal force requirement m r ω^2, as expected.
If L is not zero, the definition of angular momentum allows a change of independent variable from t to θ:

d/dt = (L/(m r^2)) d/dθ
giving the new equation of motion that is independent of time:

(L/r^2) d/dθ [ (L/(m r^2)) dr/dθ ] - L^2/(m r^3) = -dV/dr
The expansion of the first term is

(L/r^2) d/dθ [ (L/(m r^2)) dr/dθ ] = -(L^2/(m r^2)) d^2(1/r)/dθ^2
This equation becomes quasilinear on making the change of variables u ≡ 1/r and multiplying both sides by m r^2/L^2.
After substitution and rearrangement:

d^2u/dθ^2 + u = -(m/L^2) d/du V(1/u)
For an inverse-square force law such as the Kepler problem, the potential is V(r) = k/r = k u, so the orbit can be derived from the general equation

d^2u/dθ^2 + u = -(m/L^2) d/du V(1/u) = -k m/L^2
whose solution is the constant -k m/L^2 plus a simple sinusoid:

u ≡ 1/r = -(k m/L^2) [1 + e cos(θ - θ0)]
where e (the eccentricity) and θ0 (the phase offset) are constants of integration.
This is the general formula for a conic section that has one focus at the origin; e = 0 corresponds to a circle, 0 < e < 1 corresponds to an ellipse, e = 1 corresponds to a parabola, and e > 1 corresponds to a hyperbola. The eccentricity e is related to the total energy E (cf. the Laplace–Runge–Lenz vector):

e = sqrt(1 + 2 E L^2/(k^2 m))
Comparing these formulae shows that E < 0 corresponds to an ellipse (all solutions which are closed orbits are ellipses), E = 0 corresponds to a parabola, and E > 0 corresponds to a hyperbola. In particular, E = -k^2 m/(2 L^2) for perfectly circular orbits (the central force exactly equals the centripetal force requirement, which determines the required angular velocity for a given circular radius).
For a repulsive force (k > 0) only e > 1 applies.
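As a quick numerical illustration (an added sketch, not part of the original article), the C program below evaluates the conic-section solution above; the constants k, m, L and E are arbitrary example values chosen to give a bound, elliptical orbit.

#include <stdio.h>
#include <math.h>

/* Illustrative sketch only: evaluate u = 1/r = -(k*m/L^2)*(1 + e*cos(theta))
   for an attractive inverse-square force (k < 0), using made-up constants. */
int main(void) {
    const double PI = 3.14159265358979323846;
    double k = -1.0;   /* attractive force constant (k < 0) */
    double m = 1.0;    /* mass */
    double L = 1.2;    /* angular momentum */
    double E = -0.25;  /* total energy; E < 0 gives an ellipse */

    double e = sqrt(1.0 + 2.0 * E * L * L / (k * k * m));
    printf("eccentricity e = %f\n", e);   /* about 0.53, so an ellipse */

    for (double theta = 0.0; theta < 2.0 * PI; theta += PI / 4.0) {
        double u = -(k * m / (L * L)) * (1.0 + e * cos(theta));
        printf("theta = %4.2f  r = %f\n", theta, 1.0 / u);
    }
    return 0;
}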
- Action-angle coordinates
- Bertrand's theorem
- Binet equation
- Hamilton–Jacobi equation
- Laplace–Runge–Lenz vector
- Kepler orbit
- Kepler problem in general relativity
- Kepler's equation
- Kepler's laws of planetary motion
In 2009, NASA launched Kepler to search for planets outside the solar system – called extrasolar planets, or exoplanets – that are Earth-sized and have a chance of harboring life. As of December 2011, the spacecraft has discovered 2,326 exoplanets, over a hundred of which are likely candidates to meet the requirements.
A team of astronomers at NASA decided in early January to give Kepler an additional mission of hunting for extrasolar moons, or exomoons. The team believes exomoons could well exist: natural satellites survive only about half the time when they and their companion planets are still undergoing evolution, but the many moons in our own solar system make their existence plausible.
With this new mission, titled Hunt for Exomoons with Kepler (HEK), Kepler may find life on these moons as well as on exoplanets and help astronomers understand planetary evolution and the formation of natural satellites. Kepler will first look at the exoplanets cataloged thus far to see if any of them have such natural satellites. The exomoons would have to be similar in size to, or larger than, our Moon, because they would be the easiest for the spacecraft to detect.
It is also possible that exomoons are capable of harboring life. In our solar system, Jupiter’s Europa and Saturn’s Enceladus have liquid water beneath their surfaces. It is not known for sure if these two large moons contain life, though the presence of water heightens the probability as well as the probability that exomoons may be habitable.
Kepler will attempt to search for exomoons through two means: dynamical effects and eclipse features. With dynamical effects, the spacecraft would observe and measure the gravitational effect between the exoplanet and the exomoon (i.e. how much they tug on each other).
The amount of gravitational effect on the two bodies would determine whether the system is a planet-moon system or a binary-planet system (it would be easy for the former to be mistaken for the latter). With eclipse features, Kepler would be on the lookout for solar and lunar eclipses involving the exomoon, its companion planet, and their star. Kepler would look for the subtle changes in the star's brightness caused when the exomoon eclipses it, making the brightness drop a little.
Once Kepler finds an exomoon, it would be able to determine its size and mass based on the gravitational effect and eclipse features. Upon discovering the size and mass, it would then calculate the density. Thereafter, the exomoon’s composition can be determined, giving insight into how the exomoon formed and, ultimately, revealing the process of planetary evolution.
“Extrasolar moons represent an outstanding challenge in modern observational astronomy,” writes head author David Kipping in the team’s paper. Kipping, a member of the team at NASA, is an astronomer at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts.
“Their detection and study would yield a revolution in the understanding of planet/moon formation and evolution, but perhaps most provocatively, they could be frequent seats for life in the Galaxy.”
The Science from the Hubble Space Telescope
Hubble is designed to take high-resolution images and accurate spectra by concentrating starlight to form sharper images than is possible from the ground, where the atmospheric 'twinkling' of the stars limits the clarity of the images.
Therefore, despite its relatively modest size of 2.4 metres, Hubble is more than able to compete with ground-based telescopes that have light-collecting areas (mirrors) that are as much as 10 or 20 times larger.
Hubble's second huge advantage is its ability to observe near-infrared and ultraviolet light, which is otherwise filtered away by the atmosphere before it can reach ground-based telescopes.
Hubble has made many contributions to science since its launch in 1990. With 10 000 scientific papers attributed to it, Hubble is by some measures the most productive scientific instrument ever built.
This section outlines the background to some of the key areas of research Hubble has contributed to. It is organised into the following pages:
- The Hubble Deep Fields: How Hubble has observed the furthest away galaxies and the most ancient starlight ever seen by humankind
- Age and size of the Universe: How Hubble has calculated the age of the cosmos and discovered the Universe is expanding at an ever faster rate
- The lives of stars: How Hubble has revolutionised our understanding of the birth and death of stars
- The solar neighbourhood: What Hubble has taught us about planets, asteroids and comets in our own Solar System
- Exoplanets and proto-planetary discs: How Hubble has made the first ever image of an exoplanet in visible light, and spotted planetary systems as they form
- Black Holes, Quasars, and Active Galaxies: How Hubble found black holes at the heart of all large galaxies
- Formation of stars: How Hubble observes stars as they form from huge dust clouds
- Composition of the Universe: How Hubble studied what the Universe is made of, and came to some startling conclusions
- Gravitational lenses: How astronomers use a helping hand from Einstein to increase Hubble’s range
- Europe & Hubble: How the European Space Agency and European astronomers have contributed to this international project
By Dr David Whitehouse
BBC News Online science editor
Astronomers have seen a trail of black holes scattered across space formed by a titanic collision between galaxies.
They were detected in the NGC 4261 elliptical galaxy observed by the orbiting Chandra X-ray telescope.
X-rays reveal black holes scattered around the galaxy
The holes are all that remains of streams of stars thrown out into space after two spiral galaxies crashed into each other a few billion years ago.
The new data support the theory that large, almost featureless, elliptical galaxies are formed in spiral mergers.
NGC 4261 is about 100 million light-years away from our Solar System.
The origin of elliptical galaxies has long been a subject of intense debate among astronomers.
Computer simulations support the idea that they are produced by collisions between spiral galaxies. And optical evidence of streams of stars ripped away by gravity from these impacts has been interpreted as evidence for the theory.
Now, Chandra's X-ray observations, which can only be made above the Earth's atmosphere, provide further proof.
"This discovery shows that X-ray observations may be the best way to identify the ancient remains of mergers between galaxies," says Lars Hernquist, of the Harvard-Smithsonian Center for Astrophysics (CFA) in Cambridge, Massachusetts.
"It could be a significant tool for probing the origin of elliptical galaxies."
Andreas Zezas, also of the CFA, says: "From the optical and radio images, we knew something unusual was going on in the nucleus of this galaxy, but the real surprise turned out to be on the outer edges of the galaxy.
"Dozens of black holes and neutron stars were strung out across space like beads on a necklace."
The spectacular structure is thought to represent the aftermath of the destruction of a smaller galaxy that was pulled apart by gravitational tidal forces as it fell into NGC 4261.
As the doomed galaxy fell into the larger one, streams of gas were pulled out into long tidal tails.
As these tidal trails fell on to the larger galaxy, shock waves triggered the formation of large numbers of massive stars, which over the course of a few million years evolved into neutron stars or black holes.
by: Sara Brandt
Ammonite was once thought to be the petrified remains of snakes! Modern science, however, tells us that these fascinating fossils are actually the remains of an ancient aquatic mollusk. A mollusk is an invertebrate with a soft, unsegmented body. The soft body of an ammonite was protected by a hard outer shell. The shells of ammonites ranged from an inch to nine feet! Each shell is divided into many different chambers. The walls of each chamber are called septa. The septa were penetrated by the ammonite’s siphuncle, a tube-like structure that allowed the ammonite to control the air pressure inside its shell. Ammonites were aquatic creatures, and being able to control the air pressure inside their shells meant being able to control their buoyancy.
What is the Fibonacci sequence? The Fibonacci sequence is a list of numbers where every number is the sum of the previous two. The Fibonacci sequence starts at 1 and grows infinitely:
1, 1, 2, 3, 5, 8, 13, 21, 34, 55 …
To put this sequence into mathematical terms, each term Fn = Fn-1 + Fn-2. The Fibonacci sequence can be illustrated geometrically by drawing boxes. The first box should be 1×1, the second box 1×1, the third 2×2, the fourth 3×3, the fifth 5×5, the sixth 8×8, and so on. Each box should be adjacent to the boxes that come before it, forming a spiral of boxes. Have your students create their own Fibonacci squares – graph paper with small boxes works best.
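A short program can generate the same numbers. The C sketch below (an added illustration, not from the original post) prints the first ten terms, which are the side lengths of the squares described above.

#include <stdio.h>

/* Print the first ten Fibonacci numbers: 1 1 2 3 5 8 13 21 34 55 */
int main(void) {
    long prev = 1, curr = 1;
    printf("%ld %ld", prev, curr);
    for (int i = 3; i <= 10; i++) {
        long next = prev + curr;   /* F(n) = F(n-1) + F(n-2) */
        printf(" %ld", next);
        prev = curr;
        curr = next;
    }
    printf("\n");
    return 0;
}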
What does ammonite have to do with Fibonacci? Ammonite shells are a naturally occurring example of the Fibonacci sequence. If you draw a quarter circle in each Fibonacci square, they connect to form an ever increasing spiral. Try to find the Fibonacci squares in your ammonite fossils – photocopy the fossil, then start at the very center by drawing two small boxes right next to each other. With most fossils, the first boxes are .25 cm by .25 cm. Continue drawing boxes with Fibonacci dimensions. You’ll notice that the spiral of the shell always falls within the Fibonacci squares.
To further examine the concept of the Fibonacci number sequence in nature it is a worthwhile activity to have your students examine plants and flowers. So many of them have leaf structures, petals, and stems that follow the series. These spirals can be seen in everything from sunflowers to pine cones and even pineapples.
If your school doesn’t have access to ammonites, a field trip around the school grounds to identify the Fibonacci sequence in daisies, black-eyed susans, and seed heads would yield many oohs and aahs from your students. The types of explorations are endless as examples of the Fibonacci sequence and the Golden Ratio are, indeed, endless!
Above: In this visualization, Hurricane Fabian runs through a large patch of warm water (orange and red indicate 82°F and warmer) and leaves a blue cold trail behind.
Hurricanes act as heat engines, drawing energy up from warm tropical ocean waters to power the intense winds, powerful thunderstorms, and immense ocean surges. Satellite observations of sea surface temperature and rainfall help weather experts determine if a tropical cyclone is likely to strengthen or weaken and how much rain will fall on land.
Warm Water Fuels Hurricane Isabel
Warm water fuels hurricanes. Hurricanes thrive on sea surface temperatures of 82° F or warmer. NASA satellites can detect sea surface temperatures through clouds and help determine if a tropical cyclone is likely to strengthen or weaken. In this visualization, Hurricane Fabian runs through a large patch of warm water, orange and red indicate 82° F and warmer, and leaves a blue cold trail behind. Cold trails can sometimes weaken tropical storms. However, Hurricane Isabel took a different path, fueling up on warm water next to Fabian's cold trail and leaving another cold trail behind. Aqua satellite's Advanced Microwave Scanning Radiometer (AMSR-E) provided sea surface temperatures for the animation above. Data runs from August 22 to September 15, 2003. AMSR-E was developed by the National Space Development Agency (NASDA) of Japan.
Checking Under Isabel's Hood
Above: Spaceborne rain radar allows scientists to create 3-D views of precipitation, height of the rain column and warmth of the core inside powerful hurricanes. Credit: NASA/NASDA
The eye of a hurricane may be the calm of the storm, but it also houses the engine that drives the storm. NASA and National Space Development Agency (NASDA) of Japan's Tropical Rainfall Measuring Mission (TRMM) satellite looked under Isabel's hood and showed scientists the pistons that power the hurricane, giving them an idea of the intensity and distribution of rainfall.
The world's first and only spaceborne rain radar allows scientists to create 3-D views of precipitation, height of the rain column and warmth of the core inside powerful hurricanes. Red color indicates rain rates in excess of 2 inches per hour. Green represents rain rates in excess of 1.0 inch per hour. Yellow shows excess of .5 inches of rain per hour. TRMM captured this image September 15, 2003.
Eye on Hurricane Isabel
Above: The Moderate Resolution Imaging Spectroradiometer (MODIS) instrument onboard NASA's Terra satellite captured this image September 17, 2003. Credit: NASA
The animated sequence shows eight days of high-resolution images of Hurricane Isabel. These images are so detailed that you can see the wind vortices inside the eye. These images were captured September 8, 10, 11, 12, 14, 15, 16, and 17, 2003. Check the National Hurricane Center for the latest storm forecasts.
For more information and high-resolution images, see NASA Goddard Space Flight Center. To learn more about how hurricanes form, see Recipe for a Hurricane.
Prime numbers and their properties were first studied extensively by the ancient Greek mathematicians.
The mathematicians of Pythagoras's school (500 BC to 300 BC) were interested in numbers for their mystical and numerological properties. They understood the idea of primality and were interested in perfect and amicable numbers.
A perfect number is one whose proper divisors sum to the number itself. e.g. The number 6 has proper divisors 1, 2 and 3 and 1 + 2 + 3 = 6, 28 has divisors 1, 2, 4, 7 and 14 and 1 + 2 + 4 + 7 + 14 = 28.
A pair of amicable numbers is a pair like 220 and 284 such that the proper divisors of one number sum to the other and vice versa.
You can see more about these numbers in the History topics article Perfect numbers.
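As a small illustration (not part of the original article), the C sketch below sums proper divisors by brute force and checks the perfect numbers 6 and 28 and the amicable pair 220 and 284 mentioned above.

#include <stdio.h>

/* Sum of proper divisors, by trial division. */
static int sum_proper_divisors(int n) {
    int sum = 0;
    for (int d = 1; d <= n / 2; d++)
        if (n % d == 0) sum += d;
    return sum;
}

int main(void) {
    printf("s(6) = %d, s(28) = %d\n",
           sum_proper_divisors(6), sum_proper_divisors(28));      /* 6 and 28: perfect */
    printf("s(220) = %d, s(284) = %d\n",
           sum_proper_divisors(220), sum_proper_divisors(284));   /* 284 and 220: amicable */
    return 0;
}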
By the time Euclid's Elements appeared in about 300 BC, several important results about primes had been proved. In Book IX of the Elements, Euclid proves that there are infinitely many prime numbers. This is one of the first proofs known which uses the method of contradiction to establish a result. Euclid also gives a proof of the Fundamental Theorem of Arithmetic: Every integer can be written as a product of primes in an essentially unique way.
Euclid also showed that if the number 2^n - 1 is prime then the number 2^(n-1)(2^n - 1) is a perfect number. The mathematician Euler (much later in 1747) was able to show that all even perfect numbers are of this form. It is not known to this day whether there are any odd perfect numbers.
In about 200 BC the Greek Eratosthenes devised an algorithm for calculating primes called the Sieve of Eratosthenes.
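For the curious, here is a minimal sketch of the sieve in C (an added illustration, not part of the original article): each prime's multiples are crossed out in turn, and whatever remains unmarked is prime.

#include <stdio.h>
#include <string.h>

#define LIMIT 50

int main(void) {
    char composite[LIMIT + 1];
    memset(composite, 0, sizeof composite);
    for (int p = 2; p * p <= LIMIT; p++)
        if (!composite[p])
            for (int q = p * p; q <= LIMIT; q += p)
                composite[q] = 1;        /* cross out multiples of p */
    for (int n = 2; n <= LIMIT; n++)
        if (!composite[n])
            printf("%d ", n);            /* 2 3 5 7 11 ... 47 */
    printf("\n");
    return 0;
}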
There is then a long gap in the history of prime numbers during what is usually called the Dark Ages.
The next important developments were made by Fermat at the beginning of the 17th Century. He proved a speculation of Albert Girard that every prime number of the form 4 n + 1 can be written in a unique way as the sum of two squares and was able to show how any number could be written as a sum of four squares.
He devised a new method of factorising large numbers which he demonstrated by factorising the number 2027651281 = 44021 × 46061.
He proved what has come to be known as Fermat's Little Theorem (to distinguish it from his so-called Last Theorem).
This states that if p is a prime then for any integer a we have a^p = a modulo p.
This proves one half of what has been called the Chinese hypothesis which dates from about 2000 years earlier, that an integer n is prime if and only if the number 2^n - 2 is divisible by n. The other half of this is false, since, for example, 2^341 - 2 is divisible by 341 even though 341 = 31 × 11 is composite. Fermat's Little Theorem is the basis for many other results in Number Theory and is the basis for methods of checking whether numbers are prime which are still in use on today's electronic computers.
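The failure of the converse is easy to check by machine. The C sketch below (an added illustration) computes 2^n mod n by repeated squaring and shows that the composite 341 passes the base-2 test.

#include <stdio.h>

/* Compute (base^exp) mod m by repeated squaring; safe here because m is small. */
static unsigned long powmod(unsigned long base, unsigned long exp, unsigned long m) {
    unsigned long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = result * base % m;
        base = base * base % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    unsigned long n = 341;   /* 341 = 11 * 31 is composite */
    printf("2^%lu mod %lu = %lu\n", n, n, powmod(2, n, n));   /* prints 2 */
    return 0;
}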
Fermat corresponded with other mathematicians of his day and in particular with the monk Marin Mersenne. In one of his letters to Mersenne he conjectured that the numbers 2^n + 1 were always prime if n is a power of 2. He had verified this for n = 1, 2, 4, 8 and 16 and he knew that if n were not a power of 2, the result failed. Numbers of this form are called Fermat numbers and it was not until more than 100 years later that Euler showed that the next case 2^32 + 1 = 4294967297 is divisible by 641 and so is not prime.
Numbers of the form 2^n - 1 also attracted attention because it is easy to show that unless n is prime these numbers must be composite. These are often called Mersenne numbers Mn because Mersenne studied them.
Not all numbers of the form 2^n - 1 with n prime are prime. For example 2^11 - 1 = 2047 = 23 × 89 is composite, though this was first noted as late as 1536.
For many years numbers of this form provided the largest known primes. The number M19 was proved to be prime by Cataldi in 1588 and this was the largest known prime for about 200 years until Euler proved that M31 is prime. This established the record for another century and when Lucas showed that M127 (which is a 39 digit number) is prime that took the record as far as the age of the electronic computer.
In 1952 the Mersenne numbers M521, M607, M1279, M2203 and M2281 were proved to be prime by Robinson using an early computer and the electronic age had begun.
By 2005 a total of 42 Mersenne primes have been found. The largest is M25964951 which has 7816230 decimal digits.
Euler's work had a great impact on number theory in general and on primes in particular.
He extended Fermat's Little Theorem and introduced the Euler φ-function. As mentioned above he factorised the 5th Fermat Number 2^32 + 1, he found 60 pairs of the amicable numbers referred to above, and he stated (but was unable to prove) what became known as the Law of Quadratic Reciprocity.
He was the first to realise that number theory could be studied using the tools of analysis and in so-doing founded the subject of Analytic Number Theory. He was able to show that not only is the so-called Harmonic series ∑ (1/n) divergent, but the series
1/2 + 1/3 + 1/5 + 1/7 + 1/11 + ...
formed by summing the reciprocals of the prime numbers, is also divergent. The sum to n terms of the Harmonic series grows roughly like log(n), while the latter series diverges even more slowly like log[ log(n) ]. This means, for example, that summing the reciprocals of all the primes that have been listed, even by the most powerful computers, only gives a sum of about 4, but the series still diverges to ∞.
At first sight the primes seem to be distributed among the integers in rather a haphazard way. For example in the 100 numbers immediately before 10 000 000 there are 9 primes, while in the 100 numbers after there are only 2 primes. However, on a large scale, the way in which the primes are distributed is very regular. Legendre and Gauss both did extensive calculations of the density of primes. Gauss (who was a prodigious calculator) told a friend that whenever he had a spare 15 minutes he would spend it in counting the primes in a 'chiliad' (a range of 1000 numbers). By the end of his life it is estimated that he had counted all the primes up to about 3 million. Both Legendre and Gauss came to the conclusion that for large n the density of primes near n is about 1/log(n). Legendre gave an estimate for π(n) the number of primes ≤ n of
π(n) = n/(log(n) - 1.08366)
while Gauss's estimate is in terms of the logarithmic integral
π(n) = ∫ 1/log(t) dt, where the range of integration is from 2 to n.
You can see the Legendre estimate and the Gauss estimate and can compare them.
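As a rough numerical check (an added illustration, not from the original page), the C sketch below counts the primes up to 10000 by trial division and compares the count with Legendre's estimate; Gauss's logarithmic integral would need numerical integration and is omitted here.

#include <stdio.h>
#include <math.h>

static int is_prime(int n) {
    if (n < 2) return 0;
    for (int d = 2; d * d <= n; d++)
        if (n % d == 0) return 0;
    return 1;
}

int main(void) {
    int n = 10000, count = 0;
    for (int i = 2; i <= n; i++)
        if (is_prime(i)) count++;
    printf("pi(%d) = %d\n", n, count);                            /* 1229 */
    printf("Legendre estimate = %.1f\n", n / (log(n) - 1.08366)); /* about 1230 */
    return 0;
}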
The statement that the density of primes is 1/log(n) is known as the Prime Number Theorem. Attempts to prove it continued throughout the 19th Century with notable progress being made by Chebyshev and Riemann who was able to relate the problem to something called the Riemann Hypothesis: a still unproved result about the zeros in the Complex plane of something called the Riemann zeta-function. The result was eventually proved (using powerful methods in Complex analysis) by Hadamard and de la Vallée Poussin in 1896.
There are still many open questions (some of them dating back hundreds of years) relating to prime numbers.
Some unsolved problems
- Goldbach's conjecture: every even integer greater than 2 can be written as the sum of two primes.
- The twin prime conjecture: there are infinitely many pairs of primes that differ by 2.
- Are there infinitely many primes of the form n^2 + 1?
Here are the latest prime records that we know.
The largest known prime (found by GIMPS [Great Internet Mersenne Prime Search] in August 2008) was the 45th Mersenne prime: M43112609, which has 12978189 decimal digits. The most recently discovered Mersenne prime (September 2008) is M37156667.
The largest known twin primes are 2003663613 × 2^195000 ± 1. They have 58711 digits and were announced by Vautier, McKibbon and Gribenko in 2007.
The largest known factorial prime (prime of the form n! ± 1) is 34790! - 1. It is a number of 142891 digits and was announced by Marchal, Carmody and Kuosa in 2002.
The largest known primorial prime (prime of the form n# ± 1 where n# is the product of all primes ≤ n) is 392113# + 1. It is a number of 169966 digits and was announced by Heuer in 2001.
Article by: J J O'Connor and E F Robertson
Ivars Peterson's MathTrek
Most people know just one way to multiply two large numbers by hand. Typically, they learned it in elementary school. They're often surprised to find that there are a variety of ways to do multiplications, and each such algorithm has advantages and disadvantages. Moreover, grade-school multiplication can be far from the best method available in certain contexts.
Slight differences in the efficiency of multiplication algorithms can make a huge difference when calculators or computers do the work. Computers worldwide perform enormous numbers of multiplications each day. In most computers, each operation consumes mere microseconds, but multiplied by the number of computations performed, the differences in time taken can be significant. So, the general question of how quickly two n-bit numbers can be multiplied has not only great theoretical importance but also practical relevance.
Indeed, when it comes to multiplying two numbers, the best (or fastest) way to do it is often far from obvious.
One particularly intriguing and efficient multiplication algorithm was developed in the late 1950s by Anatolii Alexeevich Karatsuba, now at the Steklov Institute of Mathematics in Moscow.
Karatsuba's "divide-and-conquer" multiplication algorithm has its roots in a method that Carl Friedrich Gauss (1777-1855) introduced involving the multiplication of complex numbers.
A complex number is an expression of the form a + bi, where a and b are real numbers, and i has the property that i^2 = -1. The real number a is called the real part of the complex number, and the real number b is the imaginary part. When the imaginary part b is 0, the complex number is just the real number a.
Suppose that you want to multiply two complex numbers, a + bi and c + di. To do so, you use the following rule:
(a + bi)(c + di) = [ac - bd] + [ad + bc]i.
For example: (2 + 3i)(1 + 2i) = [2 - 6] + [4 + 3]i = (-4 + 7i).
Expressed in terms of a program, you would input a, b, c, and d and output ac - bd and ad + bc.
Computationally, multiplying two digits is much more costly than is adding two digits. Suppose then that multiplying two real numbers costs $1 and adding them costs a penny. To obtain ac - bd and ad + bc requires four multiplications and two additions, for a total of $4.02.
Is there a cheaper way to obtain the output from the input? The Gauss optimization algorithm offers an alternative approach. Here's how the computation can be done for $3.05, with three multiplications and five additions.
x1 = a + b
x2 = c + d
x3 = x1 × x2 = ac + ad + bc + bd
x4 = ac
x5 = bd
x6 = x4 - x5 = ac - bd
x7 = x3 - x4 - x5 = bc + ad
So, Gauss optimization saves one multiplication out of four.
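The same bookkeeping is easy to express in code. The C sketch below (an added illustration, with a made-up function name) multiplies two complex numbers using the three-multiplication recipe above.

#include <stdio.h>

/* Gauss's trick: (a + bi)(c + di) with three real multiplications. */
static void gauss_multiply(double a, double b, double c, double d,
                           double *re, double *im) {
    double x1 = a + b;
    double x2 = c + d;
    double x3 = x1 * x2;   /* ac + ad + bc + bd */
    double x4 = a * c;
    double x5 = b * d;
    *re = x4 - x5;         /* ac - bd */
    *im = x3 - x4 - x5;    /* ad + bc */
}

int main(void) {
    double re, im;
    gauss_multiply(2, 3, 1, 2, &re, &im);
    printf("(2 + 3i)(1 + 2i) = %g + %gi\n", re, im);   /* -4 + 7i */
    return 0;
}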
Karatsuba's divide-and-conquer multiplication algorithm takes advantage of this saving.
Consider a multiplication algorithm that parallels the way multiplication of complex numbers works. Roughly speaking, the idea is to divide a given multiplication problem into smaller subproblems, solve them recursively, then glue the subproblem answers together to obtain the answer to the larger problem.
Suppose that you want to compute 12345678 * 21394276.
Break each number into two 4-digit parts.
a = 1234, b = 5678, c = 2139, d = 4276
Find the products bd, bc, ad, and ac.
bd = 5678 * 4276
bc = 5678 * 2139
ad = 1234 * 4276
ac = 1234 * 2139
Break each of the resulting numbers into 2-digit parts, Repeat the calculations with these parts. For example, break 1234 * 2139 into 12, 34, 21, and 39. Find the appropriate products.
12 * 21, 12 * 39, 34 * 21, 34 * 39
Repeat the same steps with these 2-digit numbers, breaking each one into 1-digit parts and finding the appropriate products. For example, 12 * 21 gives the multiplications 1 * 2, 1 * 1, 2 * 2, and 2 * 1.
In this divide-and-conquer algorithm, given two decimal numbers x and y, where x = a·10^(n/2) + b and y = c·10^(n/2) + d, you have xy = ac·10^n + (ad + bc)·10^(n/2) + bd.
So, for n = 2, ac = 1 * 2 = 2, ad = 1 * 1 = 1, bc = 2 * 2 = 4, bd = 2 * 1 = 2, you obtain:
12 * 21 = 2 * 10^2 + (1 + 4) * 10^1 + 2 = 252
Similarly, for 12 * 39, you get:
1 * 3 = 3, 1 * 9 = 9, 2 * 3 = 6, 2 * 9 = 18
3 * 10^2 + 9 * 10^1 + 6 * 10^1 + 18 * 1 = 468
You get 12 * 21 = 252, 12 * 39 = 468, 34 * 21 = 714, 34 * 39 = 1326.
This allows you to compute 1234 * 2139.
252 * 10^4 + 468 * 10^2 + 714 * 10^2 + 1326 * 1 = 2639526.
Similarly, you obtain:
1234 * 4276 = 5276584
5678 * 2139 = 12145242
5678 * 4276 = 24279128
Hence, 12345678 * 21394276
= 2639526 * 10^8 + 5276584 * 10^4 + 12145242 * 10^4 + 24279128 * 1 = 264126842539128
Karatsuba's insight was to apply Gauss optimization to this divide-conquer-and-glue approach, replacing some multiplications with extra additions. For large numbers, decimal or binary, Karatsuba's algorithm is remarkably efficient. It's the sort of thing that your computer might be doing behind the scenes to get an answer to you a split second faster.
It's not the fastest known multiplication algorithm, but it's a significant improvement on grade-school multiplication. Indeed, properly implemented, Karatsuba multiplication beats grade-school multiplication even for 16-digit numbers and is way better for 32-digit numbers.
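To make the recursion concrete, here is a toy C implementation (an added sketch, not from the column). It splits by decimal digits exactly as in the worked example; a real implementation would operate on arrays of digits or machine words rather than 64-bit integers.

#include <stdio.h>

static unsigned long long pow10u(int n) {
    unsigned long long p = 1;
    while (n-- > 0) p *= 10;
    return p;
}

static int digit_count(unsigned long long x) {
    int d = 1;
    while (x >= 10) { x /= 10; d++; }
    return d;
}

/* Karatsuba multiplication: three recursive products instead of four. */
static unsigned long long karatsuba(unsigned long long x, unsigned long long y) {
    if (x < 10 || y < 10)
        return x * y;                                  /* one-digit base case */
    int n = digit_count(x) > digit_count(y) ? digit_count(x) : digit_count(y);
    unsigned long long p = pow10u(n / 2);
    unsigned long long a = x / p, b = x % p;           /* x = a*10^(n/2) + b */
    unsigned long long c = y / p, d = y % p;           /* y = c*10^(n/2) + d */
    unsigned long long ac = karatsuba(a, c);
    unsigned long long bd = karatsuba(b, d);
    unsigned long long ad_plus_bc = karatsuba(a + b, c + d) - ac - bd;  /* Gauss trick */
    return ac * p * p + ad_plus_bc * p + bd;
}

int main(void) {
    printf("%llu\n", karatsuba(12345678ULL, 21394276ULL));   /* 264126842539128 */
    return 0;
}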
And we probably haven't heard that last word on multiplication. More innovations in computer arithmetic may yet lie ahead.
What is Problem Solving?
On this page we discuss "What is Problem Solving?" under the three headings:
Naturally enough, Problem Solving is about solving problems. And we’ll restrict ourselves to thinking about mathematical problems here even though Problem Solving in school has a wider goal. When you think about it, the whole aim of education is to equip children to solve problems. In the Mathematics Curriculum therefore, Problem Solving contributes to the generic skill of problem solving in the New Zealand Curriculum Framework.
But Problem Solving also contributes to mathematics itself. It is part of one whole area of the subject that, until fairly recently, has largely passed unnoticed in schools around the world. Mathematics consists of skills and processes. The skills are things that we are all familiar with. These include the basic arithmetical processes and the algorithms that go with them. They include algebra in all its levels as well as sophisticated areas such as the calculus. This is the side of the subject that is largely represented in the Strands of Number, Algebra, Statistics, Geometry and Measurement.
On the other hand, the processes of mathematics are the ways of using the skills creatively in new situations. Problem Solving is a mathematical process. As such it is to be found in the Strand of Mathematical Processes along with Logic and Reasoning, and Communication. This is the side of mathematics that enables us to use the skills in a wide variety of situations.
Before we get too far into the discussion of Problem Solving, it is worth pointing out that we find it useful to distinguish between the three words "method", "answer" and "solution". By "method" we mean the means used to get an answer. This will generally involve one or more Problem Solving Strategies. On the other hand, we use "answer" to mean a number, quantity or some other entity that the problem is asking for. Finally, a "solution" is the whole process of solving a problem, including the method of obtaining an answer and the answer itself.
method + answer = solution
But how do we do Problem Solving? There appear to be four basic steps. Pólya enunciated these in 1945 but all of them were known and used well before then. And we mean well before then. The Ancient Greek mathematicians like Euclid and Pythagoras certainly knew how it was done.
Pólya’s four stages of problem solving are listed below.
1. Understand and explore the problem;
2. Find a strategy;
3. Use the strategy to solve the problem;
4. Look back and reflect on the solution.
Although we have listed the Four Stages of Problem Solving in order, for difficult problems it may not be possible to simply move through them consecutively to produce an answer. It is frequently the case that children move backwards and forwards between and across the steps. In fact the diagram below is much more like what happens in practice
There is no chance of being able to solve a problem unless you can first understand it. This process requires not only knowing what you have to find but also the key pieces of information that somehow need to be put together to obtain the answer.
Children (and adults too for that matter) will often not be able to absorb all the important information of a problem in one go. It will almost always be necessary to read a problem several times, both at the start and during working on it. During the solution process, children may find that they have to look back at the original question from time to time to make sure that they are on the right track. With younger children it is worth repeating the problem and then asking them to put the question in their own words. Older children might use a highlighter pen to mark and emphasise the most useful parts of the problem.
Pólya’s second stage of finding a strategy tends to suggest that it is a fairly simple matter to think of an appropriate strategy. However, there are certainly problems where children may find it necessary to play around with the information before they are able to think of a strategy that might produce a solution. This exploratory phase will also help them to understand the problem better and may make them aware of some piece of information that they had neglected after the first reading.
Having explored the problem and decided on a plan of attack, the third problem-solving step, solve the problem, can be taken. Hopefully now the problem will be solved and an answer obtained. During this phase it is important for the children to keep a track of what they are doing. This is useful to show others what they have done and it is also helpful in finding errors should the right answer not be found.
At this point many children, especially mathematically able ones, will stop. But it is worth getting them into the habit of looking back over what they have done. There are several good reasons for this. First of all it is good practice for them to check their working and make sure that they have not made any errors. Second, it is vital to make sure that the answer they obtained is in fact the answer to the problem and not to the problem that they thought was being asked. Third, in looking back and thinking a little more about the problem, children are often able to see another way of solving the problem. This new solution may be a nicer solution than the original and may give more insight into what is really going on. Finally, the better students especially, may be able to generalise or extend the problem.
Generalising a problem means creating a problem that has the original problem as a special case. So a problem about three pigs may be changed into one which has any number of pigs.
In Problem 4 of What is a Problem?, there is a problem on towers. The last part of that problem asks how many towers can be built for any particular height. The answer to this problem will contain the answer to the previous three questions. There we were asked for the number of towers of height one, two and three. If we have some sort of formula, or expression, for any height, then we can substitute into that formula to get the answer for height three, for instance. So the "any" height formula is a generalisation of the height three case. It contains the height three case as a special example.
Extending a problem is a related idea. Here though, we are looking at a new problem that is somehow related to the first one. For instance, a problem that involves addition might be looked at to see if it makes any sense with multiplication. A rather nice problem is to take any whole number and divide it by two if it’s even and multiply it by three and add one if it’s odd. Keep repeating this manipulation. Is the answer you get eventually 1? We’ll do an example. Let’s start with 34. Then we get
34 → 17 → 52 → 26 → 13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1
We certainly got to 1 then. Now it turns out that no one in the world knows if you will always get to 1 this way, no matter where you start. That’s something for you to worry about. But where does the extension come in? Well we can extend this problem, make another problem that’s a bit like it, by just changing the 3 to 5. So this time instead of dividing by 2 if the number is even and multiplying it by three and adding one if it’s odd, try dividing by 2 if the number is even and multiplying it by 5 and adding one if it’s odd. This new problem doesn’t contain the first one as a special case, so it’s not a generalisation. It is an extension though – it’s a problem that is closely related to the original. You might like to see if this new problem always ends up at 1. Or is that easy?
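If your students want to experiment with larger starting numbers, a few lines of code do the job. The C sketch below (an added illustration, not from the original page) applies the "halve it, or multiply by three and add one" rule starting from 34 and stops when it reaches 1.

#include <stdio.h>

int main(void) {
    long n = 34;
    printf("%ld", n);
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;   /* the rule described above */
        printf(" -> %ld", n);
    }
    printf("\n");   /* 34 -> 17 -> 52 -> ... -> 2 -> 1 */
    return 0;
}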
It is by this method of generalisation and extension that mathematics makes great strides forward. Up until Pythagoras’ time, many right-angled triangles were known. For instance, it was known that a triangle with sides 3, 4 and 5 was a right-angled triangle. Similarly people knew that triangles with sides 5, 12 and 13, and 7, 24 and 25 were right angled. Pythagoras’ generalisation was to show that EVERY triangle with sides a, b, c was a right-angled triangle if and only if a^2 + b^2 = c^2.
This brings us to an aspect of problem solving that we haven’t mentioned so far. That is justification (or proof). Your students may often be able to guess what the answer to a problem is but their solution is not complete until they can justify their answer.
Now in some problems it is hard to find a justification. Indeed you may believe that it is not something that any of the class can do. So you may be happy that the children can guess the answer. However, bear in mind that this justification is what sets mathematics apart from every other discipline. Consequently the justification step is an important one that shouldn’t be missed too often.
Another way of looking at the Problem Solving process is what might be called the scientific approach. We show this in the diagram below.
Here the problem is given and initially the idea is to experiment with it or explore it in order to get some feeling as to how to proceed. After a while it is hoped that the solver is able to make a conjecture or guess what the answer might be. If the conjecture is true it might be possible to prove or justify it. In that case the looking back process sets in and an effort is made to generalise or extend the problem. In this case you have essentially chosen a new problem and so the whole process starts over again.
Sometimes, however, the conjecture is wrong and so a counter-example is found. This is an example that contradicts the conjecture. In that case another conjecture is sought and you have to look for a proof or another counterexample.
Some problems are too hard so it is necessary to give up. Now you may give up so that you can take a rest, in which case it is a ‘for now’ giving up. Actually this is a good problem solving strategy. Often when you give up for a while your subconscious takes over and comes up with a good idea that you can follow. On the other hand, some problems are so hard that you eventually have to give up ‘for ever’. There have been many difficult problems throughout history that mathematicians have had to give up on.
That then is a rough overview of what Problem Solving is all about. For simple problems the four stage Pólya method and the scientific method can be followed through without any difficulty. But when the problem is hard it often takes a lot of to-ing and fro-ing before the problem is finally solved – if it ever is!
Space 'Rosetta Stone' Unlike Anything Seen Before
Meteorite fragments of the first asteroid ever spotted in space before it slammed into Earth's atmosphere last year were recovered by scientists from the deserts of Sudan.
These precious pieces of space rock, described in a study detailed in the March 26 issue of the journal Nature, could be an important key to classifying meteorites and asteroids and determining exactly how they formed.
The asteroid was detected by the automated Catalina Sky Survey telescope at Mount Lemmon , Ariz., on Oct. 6, 2008. Just 19 hours after it was spotted, it collided with Earth's atmosphere and exploded 23 miles (37 kilometers) above the Nubian Desert of northern Sudan.
Because it exploded so high over Earth's surface, no chunks of it were expected to have made it to the ground. Witnesses in Sudan described seeing a fireball, which ended abruptly.
But Peter Jenniskens, a meteor astronomer with the SETI Institute's Carl Sagan Center, thought it would be possible to find some fragments of the bolide. Along with Muawia Shaddad of the University of Khartoum and students and staff, Jenniskens followed the asteroid's approach trajectory and found 47 meteorites strewn across an 18-mile (29-km) stretch of the Nubian Desert.
"This was an extraordinary opportunity, for the first time, to bring into the lab actual pieces of an asteroid we had seen in space," Jenniskens said.
Astronomers were able to detect the sunlight reflected off the car-sized asteroid (much smaller than the one thought to have wiped out the dinosaurs) while it was still hurtling through space. Looking at the signature of light, or spectra of space rocks is the only way scientists have had of dividing asteroids into broad categories based on the limited information the technique gives on composition.
However, layers of dust stuck to the surfaces of the asteroids can scatter light in unpredictable ways and may not show what type of rock lies underneath. This can also make it difficult to match up asteroids with meteorites found on Earth ? that's why this new discovery comes in so handy.
Both the asteroid, dubbed 2008 TC3, and its meteoric fragments indicate that it could belong to the so-called F-class asteroids.
"F-class asteroids were long a mystery," said SETI planetary spectroscopist Janice Bishop. "Astronomers have measured their unique spectral properties with telescopes, but prior to 2008 TC3 there was no corresponding meteorite class, no rocks we could look at in the lab."
The chemical makeup of the meteorite fragments, collectively known as "Almahata Sitta," shows that they belong to a rare class of meteorites called ureilites, which may all have come from the same original parent body. Though what that parent body was, scientists do not know.
"The recovered meteorites were unlike anything in our meteorite collections up to that point," Jenniskens said.
The meteorites are made of very dark, porous material that is highly fragile (which explains why the bolide exploded so high up in the atmosphere).
The carbon content of the meteorites shows that at some point in the past, they were subjected to very high temperatures.
"Without a doubt, of all the meteorites that we've ever studied, the carbon in this one has been cooked to the greatest extent," said study team member Andrew Steele of the Carnegie Institution in Washington, D.C. "Very cooked, graphite-like carbon is the main constituent of the carbon in this meteorite."
Steele also found nanodiamonds in the meteorite, which could provide clues as to whether heating was caused by impacts to the parent asteroid or by some other process.
Having spectral and laboratory information on the meteorites and their parent asteroid will help scientists better identify ureilite asteroids still circling in space.
"2008 TC3 could serve as a Rosetta Stone, providing us with essential clues to the processes that built Earth and its planetary siblings," said study team member Rocco Mancinelli, also of SETI.
One known asteroid with a similar spectrum, the 2.6-km wide 1998 KU2, has already been identified as a possible source for the smaller asteroid 2008 TC3 that impacted Earth.
With efforts such as the Pan-STARRS project sweeping the skies in search of other near-Earth asteroids, Jenniskens expects that more events like 2008 TC3 will happen.
"I look forward to getting the next call from the next person to spot one of these," he said. "I would love to travel to the impact area in time to see the fireball in the sky, study its breakup and recover the pieces. If it's big enough, we may well find other fragile materials not yet in our meteorite collections."
Solving Equations Involving Decimal Division Video Tutorial
Solving Equations Involving Decimal Division
This math video tutorial gives a step-by-step explanation of a math problem on "Solving Equations Involving Decimal Division".
The tutorial draws on arithmetic operations with decimals, dividing decimals, number sense, and solving equations, and is suitable for students from the elementary grades through high school.
Division is essentially the opposite of multiplication. Division finds the quotient of two numbers, the dividend divided by the divisor. Any dividend divided by zero is undefined. For positive numbers, if the dividend is larger than the divisor, the quotient will be greater than one, otherwise it will be less than one (a similar rule applies for negative numbers). The quotient multiplied by the divisor always yields the dividend.
Division is neither commutative nor associative. As it is helpful to look at subtraction as addition, it is helpful to look at division as multiplication of the dividend times the reciprocal of the divisor, that is a ÷ b = a × 1/b. When written as a product, it will obey all the properties of multiplication.
An equation is a mathematical statement, in symbols, that two things are the same (or equivalent). Equations are written with an equal sign. Equations are often used to state the equality of two expressions containing one or more variables.
Solving an equation means finding a value for the variable that makes the equation true.
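A small worked example (added here for illustration, not from the original page): to solve x ÷ 0.25 = 3.6, multiply both sides of the equation by 0.25, giving x = 3.6 × 0.25 = 0.9. Checking, 0.9 ÷ 0.25 = 3.6, so the value x = 0.9 makes the equation true.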
expansion, in physics, increase in volume resulting from an increase in temperature. Contraction is the reverse process. When heat is applied to a body, the rate of vibration and the distances between the molecules composing it are increased and, hence, the space occupied by the body, i.e., its volume, increases. This increase in volume is not constant for all substances for any given rise in temperature, but is a specific property of each kind of matter. For example, zinc and lead undergo greater expansion in a one-degree rise in temperature than do silver or brass. Since solids have a definite shape, each linear dimension of the solid increases by a proportional amount for a given temperature increase. The amount that a unit length along any direction of a substance increases for a temperature increase of one degree is called the coefficient of linear expansion of the substance. Most liquids also expand when heated. However, since liquids do not have a definite shape, it is the expansion of their volume as a whole that is relevant rather than the increase in a linear dimension. The amount of expansion that a unit volume (e.g., a cubic centimeter or a cubic foot) of any substance undergoes per one-degree rise in temperature is called its volume coefficient or coefficient of cubical expansion and is listed as a property of that substance. The coefficient of linear expansion can be calculated by dividing the coefficient of cubical expansion of the substance by three. When the amount of expansion of a given length of a substance has been determined experimentally, the linear coefficient is calculated by dividing the total amount of expansion by the product of the original number of length units and the number of degrees of rise in temperature. Gases also exhibit thermal expansion. The coefficient of expansion is about the same for all the common gases at ordinary temperatures; it is 1/273 of the volume at 0°C per degree rise in temperature. The Kelvin, or absolute, scale is based upon this behavior (see Kelvin temperature scale). Charles's law concerning the expansion of gases states that the volume of a gas is directly proportional to its absolute temperature (see gas laws). Liquids differ from each other as do solids in their expansion coefficients. Water, unlike most substances, contracts rather than expands as its temperature is increased from 0°C to 4°C; above 4°C it exhibits normal behavior, expanding as the temperature increases.
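A worked example (added here for illustration, with made-up measurements) shows how that linear coefficient is obtained: if a 10 m rod lengthens by 0.006 m when its temperature rises by 50 degrees, the coefficient of linear expansion is 0.006 / (10 × 50) = 1.2 × 10^-5 per degree.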
Common Lisp/Basic topics/Functions
A function is a concept that is encountered in almost every programming language, but in Lisp functions are especially important. Historically, Lisp was inspired by lambda calculus, where every object is a function. On the other end of the spectrum there are many programming languages where functions are hardly objects at all. This is not the case with Lisp: functions here have the same privileges as the other objects, and we will discuss it in this chapter.
Functions are most often created using the defun (DEfine FUNction) macro. This macro takes a function name, a list of arguments, and a sequence of Lisp forms, called the body of the function. A typical use of defun is like this:
(defun print-arguments-and-return-sum (x1 x2) (print x1) (print x2) (+ x1 x2))
Here, the list of arguments is (x1 x2) and the body is (print x1) (print x2) (+ x1 x2). When the function is called, each form in the body is evaluated sequentially (from first to last). What if something happened and we want to return from the function without reaching the last form? The return-from macro allows us to do that. For example:
(defun print-arguments-and-return-sum (x1 x2) (print x1) (print x2) (unless (and (numberp x1) (numberp x2)) (return-from print-arguments-and-return-sum "Error!")) (+ x1 x2))
The second argument to return-from is optional, which means that it can be called with only one argument - in this case the function would return nil. But wait: how did they do this? Our function accepts exactly two arguments: nothing more, nothing less. The answer is that what we called "argument list" is not as simple as it seems. It is in fact called lambda list and it is not the only place where we will encounter it. Later, I will explain how to allow optional and keyword arguments in functions.
Functions as data
As was mentioned in the beginning of this chapter, Lisp functions can be used like any other object: they can be stored in variables, passed as parameters to other functions, and returned as values from functions. In the previous section we defined a function. The function is now stored in the function cell of the symbol print-arguments-and-return-sum - defun put it there. However, it is not bound to this location forever - we can extract it and put it in some other symbol, for example. To access the function cell of a symbol we can use the accessor symbol-function. Let's store our function in some other symbol:
(setf paars (symbol-function 'print-arguments-and-return-sum))
The first reaction would be to do something like that:
>(paars 1 1) EVAL: undefined function PAARS [Condition of type SYSTEM::SIMPLE-UNDEFINED-FUNCTION]
This is because we put the function into the value cell of the symbol instead of its function cell. It's easy to fix:
(setf (symbol-function 'paars) (symbol-function 'print-arguments-and-return-sum))
Since symbol-function is an accessor we can use setf with it. Now
(paars 1 1) produces what it should.
While symbol-function is there for a reason, it is almost never used in real code. This is because it is superseded by several other Lisp features. One of them is the function special operator. It works like symbol-function, except that it doesn't evaluate its argument, and it returns the function that is currently bound to the name, which may not actually be the one in its function cell.
(function foo) may also be abbreviated as #'foo, which tremendously improves its usefulness. On the flip side, it's impossible to write
(setf (function paars) (function print-arguments-and-return-sum))
Fortunately it's possible to call functions from other places than function cell of a symbol. A function designator is either a symbol (in this case its function cell is used) or the function itself. funcall and apply are used to call functions by their function designators. Remember that symbol paars now contains the same function in its function cell and its value cell. Let's change its value cell so that the difference is apparent:
(setf paars #'+)
Now let's funcall it in different ways:
(funcall paars 1 2) ;equivalent to (+ 1 2)
(funcall 'paars 1 2) ;equivalent to (funcall (symbol-function 'paars) 1 2)
(funcall #'paars 1 2) ;equivalent to (paars 1 2)
The difference between the second and third example is that if paars was temporarily bound (with flet or labels) to some other function, the third funcall would use this temporary function, while the second funcall would still use its function cell.
5 Matching Questions
- The Midpoint Theorem
- The Ruler Postulate
- The Angle Addition Postulate
- If a=b+c and c>0...
- a The points of a line can be placed in correspondence with the real numbers in such a way that (1) to every point of the line there corresponds exactly one real number; (2) to every real number there corresponds exactly one point of the line; and (3) the distance between any two points is the absolute value of the difference of the corresponding numbers
- b then a>b
- c If D is in the interior of angle BAC, then measurement of angle BAC = measurement of angle BAD + measurement of angle DAC
- d Every segment has exactly one midpoint
- e two angles with the same measure
5 Multiple Choice Questions
- Complements of congruent angles are congruent.
- their intersection contains only one point
- unproved statements
- all elements that belong to one or both sets (of two sets)
- Let AB be a ray on the edge of the half-plane H. For every number r between 0 and 180 there is exactly one ray AP, with P in H, such that the measure of angle PAB=r
5 True/False Questions
If two angles are complementary... → then both are acute.
Congruence between angles is... → congruent
The Line Postulate → For every two points there is exactly one line that contains both points
The Flat Plane Postulate → If two points of a line lie in a plane, then the line lies in the same plane.
The Line Postulate → Any three points lie in at least one plane, and any three noncollinear points lie in exactly one plane.
Operators, Expressions and Escape Sequences in C - Chapter 3
We are into the third chapter of C programming, in which we will discuss operators, operands, expressions and escape sequences. I assume that you have already gone through Chapter 1 – Data Types in C and Chapter 2 – Variables and Keywords in C.
Operator – An operator is usually a symbolic representation of a particular operation. Ex:- + means addition, * means multiplication, etc.
Operand – An operand is usually the data upon which the operator performs an action. Ex:- c=a+b; Here c, a, b are operands, whereas + and = are 2 different operators. Operands can be constants or variables. Ex:- c=a+10;
Expression – An expression is a complete instruction which may be a combination of operator(s) and operand(s). Ex:- c=a+b; is a complete expression. Other examples of complete expressions are a++; p=p+3; ++a;
There is nothing much to tell in detail about operands and expression. Lets dive more deep into the world of operators in C.
There are basically six types of operators in C covered in this chapter, namely:-
- Arithmetic operators
- Unary operators
- Assignment operators
- Relational and equality operators
- Logical operators
- Bitwise operators
Note:- If an operator requires 2 operands to act upon, it is called a binary operator. If an operator requires only one operand to act upon, it is called a unary operator.
There are 5 arithmetic operators in C as shown in the table below. Here are some points to note while you deal with arithmetic operators.
- Both operands should be numeric values (integer quantities, floating point or character quantities (ASCII values)).
- There is no operator for calculating exponential values (the pow() function declared in <math.h> is used instead).
- In the case of the division operator – if both operands are of integer type, then the result will be an integer. Any fractional part that may occur in the case of an integer division will be truncated towards zero (Ex:- 10/3 yields the answer 3, not 3.33). If any one of the operands is a floating point type, then the result of the division will be floating point.
|+||Addition||Performs addition on 2 operands.|
|-||Subtraction||Performs subtraction on 2 operands. Ex:- c=a-b; here b is subtracted from a and the result is stored in c|
|*||Multiplication||Performs multiplication on 2 operands.|
|/||Division||Performs division. If both operands are integers, the fractional part of the result is truncated. The denominator must be non-zero.|
|%||Remainder (mod operator)||Gives the remainder of an integer division. Both operands must be integers; applying % to floating-point values is a compile-time error (use fmod() from <math.h>, or cast the values to int first: 10.14 and 3.75 become 10 and 3, and 10 % 3 gives 1).|
For example, with the int operands x = 20 and y = 7, and the corresponding float operands xp = 20.75 and yp = 7.25 (using statements such as a = x+y; mp = xp*yp; dp = xp/yp; and so on), the results are:
a = 27, s = 13, m = 140, d = 2, r = 6
ap = 28, sp = 13.5, mp = 150.43, dp = 2.862, rp = 6
Note:- The remainder is the same in both cases (r = rp = 6) because % works only on integers; to obtain rp the float values must first be cast to int, and 20 % 7 is 6.
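To make this concrete, here is a small runnable sketch (added for illustration; the variable names and values simply mirror the figures quoted above):

#include <stdio.h>

int main(void)
{
    int x = 20, y = 7;
    float xp = 20.75f, yp = 7.25f;

    /* integer arithmetic: / truncates, % gives the remainder */
    printf("a=%d s=%d m=%d d=%d r=%d\n",
           x + y, x - y, x * y, x / y, x % y);

    /* float arithmetic: / keeps the fractional part; % is not allowed
       on floats, so the operands are cast to int for the remainder */
    printf("ap=%.2f sp=%.2f mp=%.4f dp=%.3f rp=%d\n",
           xp + yp, xp - yp, xp * yp, xp / yp, (int)xp % (int)yp);

    return 0;
}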
Note:- In c=a+b; the expression on the right side of '=' is evaluated first and the result is then assigned to the variable on the left side.
Unary operators require only a single operand to act upon. They are:-
- Minus operator
- Increment and Decrement operator
- Sizeof operator
The minus operator is also known as the negation operator. This operator always precedes a variable, a constant or an expression and negates its value. Ex:- int i = -8; now variable i holds the negative value -8. Another Ex:- int i = -10, j; j = -i; now the value in j is -(-10), which is equal to +10.
Increment and Decrement Operator
From the name itself you can guess the use of these 2 operators – they are used for incrementing and decrementing variables. The increment operator is ++ which adds 1 to its operand, and the decrement operator is -- which subtracts 1 from its operand. Both operators can be either post-fixed or pre-fixed. There are differences in the working of postfixed and prefixed operators.
Example:- a++; b--; ++b; --a; etc.
Difference between postfixing and prefixing
int a=5, x,y;
x=a++; /* post fixing */
y=++a; /* pre fixing */
x = 5, and a is then incremented by 1 (a = 6). With post-fixing, a is assigned to x first and only then incremented.
y = 7 and a = 7. With pre-fixing, a (which is now 6) is incremented first, and the new value is then assigned to y.
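A complete, runnable version of this snippet (added here as a sketch; only the printf line is new) makes the difference easy to verify:

#include <stdio.h>

int main(void)
{
    int a = 5, x, y;

    x = a++;   /* post-fixing: x gets 5, then a becomes 6 */
    y = ++a;   /* pre-fixing: a becomes 7 first, then y gets 7 */

    printf("x=%d y=%d a=%d\n", x, y, a);   /* prints x=5 y=7 a=7 */
    return 0;
}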
This operator returns the size of an operand in bytes.
For example, if a is an int and we write b = sizeof(a); then b = 2 on the 16-bit compilers assumed in this tutorial (an int occupies 2 bytes there); on most modern compilers an int occupies 4 bytes.
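As a small illustrative sketch (the exact sizes printed depend on your compiler), sizeof can be applied to variables and to type names alike:

#include <stdio.h>

int main(void)
{
    int a;
    /* sizeof yields the size in bytes; the casts keep printf portable on older compilers */
    printf("int: %lu bytes\n",    (unsigned long) sizeof(a));
    printf("char: %lu bytes\n",   (unsigned long) sizeof(char));
    printf("float: %lu bytes\n",  (unsigned long) sizeof(float));
    printf("double: %lu bytes\n", (unsigned long) sizeof(double));
    return 0;
}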
There are 2 types of assignment operators. The one that we use normally and often is the = operator.
Ex:- c=a+b; Here expression towards the right side of = operator is evaluated first. The result of expression is then assigned to the variable on left side. Here = is called the assignment operator.
Another type of assignment operator is +=, a shorthand used when the variable on the left side also appears as the first operand on the right side.
Example:- a = a+2; it can also be expressed as a+=2;
If a was 5 to begin with, a becomes 7; the value inside a is added to 2 and the new result is assigned back to a itself.
Note: You can use a variable instead of a constant. Ex: a=a+b; can be written as a+=b;
This kind of assignment operation is valid for the other arithmetic operators -, * and / (and for % as well).
Examples:- a/=5 (a=a/5); a -=5 (a=a-5); a*=5 (a=a*5);
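The shorthand forms behave exactly like their expanded equivalents, as this small added example (values chosen arbitrarily) shows:

#include <stdio.h>

int main(void)
{
    int a = 20;

    a += 2;   /* same as a = a + 2  -> 22 */
    a -= 5;   /* same as a = a - 5  -> 17 */
    a *= 3;   /* same as a = a * 3  -> 51 */
    a /= 4;   /* same as a = a / 4  -> 12 (integer division) */
    a %= 5;   /* same as a = a % 5  -> 2  */

    printf("a = %d\n", a);   /* prints a = 2 */
    return 0;
}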
There are 4 relational operators as shown below.
|>||Greater than||Compares two values. Ex: a>b – value inside a is compared with value inside b and if a is greater condition is true otherwise condition is false. If condition is true then the expression returns 1 else it returns 0.|
|>=||Greater than or equal to||Ex:- a>=b|
|<||Less than||Ex:- a<b|
|<=||Less than or equal to||Ex:- a<=b|
|==||Equal to||Ex:- a==b – checks if a is equal to b. If they are equal, then expression is considered TRUE, else FALSE.|
|!=||Not equal to||Ex:- a!=b – checks if a and b are unequal. If they are unequal, then expression is considered TRUE, else FALSE.|
|&&||Logical AND||Ex:- (a>9)&&(b<5)|
|||||Logical OR||Ex:- (a>9)||(b<5)|
Logical AND (&&) – This expression is evaluated from left to right and is considered TRUE only if both conditions (on left and right ) are TRUE. If either of the two condition is a FALSE, then the whole expression will be false.
Logical OR (||) – This expression is evaluated from left to right and is considered TRUE –if any one of the two conditions (on left and right) is TRUE. If both of the two conditions are FALSE, then only the whole expression will turn FALSE.
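Because relational and logical expressions simply evaluate to 1 or 0, they can be printed directly. Here is a short illustrative sketch (values picked only for demonstration):

#include <stdio.h>

int main(void)
{
    int a = 10, b = 3;

    /* relational/equality expressions evaluate to 1 (true) or 0 (false) */
    printf("%d %d %d\n", a > b, a == b, a != b);   /* 1 0 1 */

    /* logical operators combine such conditions */
    printf("%d\n", (a > 9) && (b < 5));            /* 1: both conditions true */
    printf("%d\n", (a > 90) || (b < 5));           /* 1: one condition true   */
    return 0;
}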
Bitwise operators
|Negation||Complements the operand. 1's are converted to 0's and 0's are converted to 1's.|
|AND||Output is exactly according to the truth table for the AND operation, applied bit by bit.|
|OR||Output is exactly according to the truth table for the OR operation, applied bit by bit.|
|X-OR (Exclusive OR)||Output is exactly according to the truth table for the X-OR operation, applied bit by bit.|
|Shift right||Shifts the bits towards the right (a fixed number of places, as given in the expression).|
|Shift left||Shifts the bits towards the left (a fixed number of places, as given in the expression).|
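The bitwise operators use the symbols ~ (negation), & (AND), | (OR), ^ (X-OR), >> (shift right) and << (shift left). A small added example (operand values chosen for easy binary patterns):

#include <stdio.h>

int main(void)
{
    unsigned int a = 12;   /* binary 1100 */
    unsigned int b = 10;   /* binary 1010 */

    printf("a & b  = %u\n", a & b);    /* 1000 ->  8 */
    printf("a | b  = %u\n", a | b);    /* 1110 -> 14 */
    printf("a ^ b  = %u\n", a ^ b);    /* 0110 ->  6 */
    printf("~a     = %u\n", ~a);       /* every bit of a flipped */
    printf("a << 2 = %u\n", a << 2);   /* 110000 -> 48 */
    printf("a >> 2 = %u\n", a >> 2);   /* 11     ->  3 */
    return 0;
}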
Escape sequences
|\n||New line||At the output display, this sequence takes the cursor to the next line.|
|\b||Backspace||Moves cursor from its current to position to left side (one position)|
|\t||Tab space||Moves the cursor from current position to one tab space towards right.|
|\a||Alert/Alarm||Produces a sound in the speaker connected to computer.|
|\”||Double Quotes||Prints a double quote on the output display.|
|\?||Question mark||Prints a question mark on the display.|
|\’||Single quote||Prints a single quote|
|\\||Backslash||Prints a back slash|
The backward slash '\' is called the escape character. The character that follows the backslash gives meaning/purpose to an escape sequence. An escape sequence is actually considered a single character (though it may look like a combination of two characters).
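A short printf-based sketch (added for illustration; the strings are made up) shows several escape sequences at work:

#include <stdio.h>

int main(void)
{
    printf("Name\tMarks\n");          /* \t = tab space, \n = new line   */
    printf("Asha\t91\n");
    printf("She said \"hello\"\n");   /* \" prints a double quote        */
    printf("Path: C:\\temp\n");       /* \\ prints a single backslash    */
    printf("\a");                     /* \a sounds the alert/bell        */
    return 0;
}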
We have covered basics about operators in C now. More specific examples and explanations will be given in coming chapters.
In the image given below, almost all operators in C are listed with their order and priority. I will explain the order and priority of operators in another article.
| http://www.circuitstoday.com/operators-expressions-and-escape-sequences-in-c-chapter-3 | 13 |
12 | Knowing the names of the sides of a right triangle is essential. Besides being a part of the vocabulary of math, these names are used to communicate which side of a triangle you are referring to without using a diagram. Also, if you misname the sides, your calculations of ratios (which you are about to study) will be off.
The hypotenuse is the side directly across from the 90° angle and is always the longest side of a right triangle.
The other two sides are named according to their position to the reference angle, which is the angle you are referring to or seeking. The reference angle is never the 90° angle.
The adjacent side is the side next to the reference angle.
The opposite side is the side directly across from the reference angle.
The gray shading indicates the reference angle.
It is important to understand that the names of the opposite side and adjacent sides change when you move from one reference angle to the other. | http://mathforum.org/sarah/hamilton/ham.namesides.html | 13 |
12 | What's Under There?
In this activity students determine ways to make "observations" about
unknowns, such as the land beneath an ice sheet or the interior of the
Earth, using tools other than sight. In groups, the students build "mystery boxes"
and exchange them with other groups. Each group is permitted a limited
number of probes (cores) into the box; they must define a sampling strategy to
maximize the amount of information acquired. These conditions simulate restricted resources
available to research teams tackling similar challenges. The students
"map" their results and describe the interior of the box based on their
measurements. This activity leads into a subsequent activity that examines
remote sensing of the base of ice sheets.
What is under the ice sheet? What is under the ocean? What is inside the
Earth? How do we know? We have been exploring our Earth for thousands of
years and we have many tools with which we can explore. Believe it or not,
we still do not know what every square centimeter looks like! Satellites
have helped us get a view of the surface, but what about the ocean floor and the land under the huge ice sheets of Greenland and
Antarctica? Most of this area remains largely unknown! Scientists are finding out more and
more, however, as technology develops. They "image" these hard-to-get-to
locations remotely. That is, they use instruments that allow them to get a
picture or a sample without the scientist having to go under the ice or
into the deep ocean trenches.
Middle School, Earth Science, Environmental Science
The student will become acquainted with the concept that much of our
knowledge about the Earth is not derived from first-hand observations.
Much of our knowledge is extrapolated from isolated samples and sparse data.
The students will:
plan a sampling strategy to maximize information acquisition under sampling restrictions
organize the sample data
map the sample data
develop hypotheses about the remotely sampled region
Teacher Preparation
Place the objects in the aquarium or deep pan. Cover with water. Add a few
drops of India Ink to the water such that the students cannot see the objects in the container.
For the class: aquarium or other deep container
solid objects (rocks, nuts and bolts, clay pots, water balloons, and
other non-floating objects)
For each group of 4 to 5 students:
a shoe box with lid
straws or probes
graph paper that will fit over the box top
Two class periods
Engagement and Exploration (Student Inquiry Activity)
Take the students to the prepared aquarium. Tell them that you have
submerged several objects under the water. Ask
the students to describe the objects without getting their hands wet or
removing any of the water. What can the students do to
figure out what is in the aquarium? What tools would they like to use?
As a class, examine the large map of Antarctica:
1. What's on the top of the continent?
2. What is under the ice?
3. What is under the water?
4. How do you know?
5. How might we find out?
Researchers have been trying to figure out what is under the ice sheets of
Greenland and Antarctica for many decades. How thick is the ice? Are
there valuable minerals under the ice? Is the ice floating? Is the ice
getting thicker or thinner? Just like the students cannot put their hands
in the aquarium or drain the water off, the polar researchers cannot melt the ice to find out!
What do the students think the researchers do to figure out what is under
the ice? The students may suggest that the researchers dig holes to see
what is under the ice. This is one way! The research teams drill through
the ice to acquire ice cores (for other purposes); they also get other
valuable information such as the thickness of the ice and a sample of what
is under the ice sheet. However, the ice cores are expensive. Scientists
have other tools that they use. These will be explored in another activity.
Elaboration (Polar Applications)
Provide each group of students with a shoe box, top, and clay. Tell
each group that they will use their clay to construct a landscape in
their box. The landscape can look however they wish - it can have oceans
and mountains and arches.
After the student groups have prepared their boxes, have them put the top
on the box and secure it with tape. Ask the student groups to exchange boxes. It might be a good idea to
exchange with groups that worked far apart, so that they will not have seen the landscape being built.
Tell the groups that they will now describe the SURFACE of the landscape without peeking!
How might they go about this? The students may suggest that they stick
probes into the box to measure what is there. This is one way -- coring --
scientists find out what lies beneath the ice sheets. Coring or drilling
through the Earth or an ice sheet or to the ocean bottom is expensive,
however. This means that scientists cannot make as many holes as they
would like. They must carefully plan a sampling strategy. The students
will have to plan a sampling strategy, too. They will have 15 straws to
start with to sample the box.
Provide each group with 15 straws or probes, a ruler, graph paper, and colored
pencils. Have the groups draw a rectangle on the paper the same size as
the shoe box. Ask the students to put a small dot in the locations they
would like a sample. They will need to discuss the sampling strategy in
the group. What strategy will provide good coverage? What is the smallest
and largest feature they may sample with a parcticular strategy? Once a
sample plan is determined, the teacher can then poke holes in the box top at
that location, using a sharp skewer or a pair of scissors. Make sure the hole is
large enough to allow the straw probe fit through, but not large enough
for the students to see what is inside.
Ask the students to measure the height of the box in centimeters. Next, have the students mark the straws in centimeter increments from the bottom of the straw to the same height as the box height. This way, when the students measure the "topography" in the box, they need only read the number of centimeters remaining above the box top.
Push a straw into the box top. Mark the position on the corresponding point
on the paper. Continue to push the straw in, gently, until the straw just
touches the "land" in the box. Read the height of the straw above the box top, The students may have to estimate to the nearest centimeter. Measure the rest of the holes. If the students wish, they can leave the straws in the box - they will be able to "see" the topography reflected in the straw height.
When the holes have all been measured, ask the student groups to examine
their data. What trends do they see? Are there particular areas that would
benefit from additional coring? Are there areas that have large changes
but few cores? Are there "holes" in the dataset? Have each group present their findings to
the teacher and indicate where else they would like to core and why.
At this point, the teacher can decide to "fund" additional sampling. Each
group can be granted up to 5 more straws to augment their sampled data.
After the additional cores have been collected and the positions and depths
recorded on the grid, the students are ready to make a contour map. Have the student groups examine their data. Are there areas of high numbers? Low numbers? What do these areas mean about the "land" inside the box? Where is it high? Low?
As the facilitator, work with the student groups to contour the data. Group the points on the paper that have the same or very close measurements; for instance, include all measurements from 0 to 0.5 centimeters, then let the next contour include all measurements from 0.5 to 1.0 centimeters, etc. Help the students determine which numbers to "lump." Use
different colors to group different levels of measurements. Do not cross over
any of the lines in any color.
Have the students finish their maps. What else should they include on their maps? What commonly occurs on maps? The students may want to indicate scale, contour interval, etc.
Exchange (Students Draw Conclusions)
When the groups have finished contouring their data, ask them to examine what they see. Have each group present their findings to the class. After each group has described the "land," permit the students to open the box to compare their results. Remind the students that scientists do not have the option to "open the box" or melt the ice sheet, or drain the ocean - enjoy it while they can!
Do the results of the contour map agree with the original? Why or why not? Where were more cores needed? Where might a few cores have sufficed? Where might cores not have provided information about depth and configuration of the topography (arches)? If the students could do it over, with their knowledge of the land, how might they have placed the cores?
Where might scientists use similar methods for exploration? The students may name coring through ice sheets, finding depth in the ocean, and other exploration. What are the drawbacks to coring? Coring, or probing, can be expensive and the results are only for a single point. These results are extrapolated to other areas. Explorers can miss information.
In the next activity, students will explore other methods used by scientists to explore places they cannot visit in person.
Evaluation (Assessing Student Performance)
Sandra Shutey, Butte High School, Butte, Montana, with ideas from the
GLACIER curriculum design team.
Satellite Image Map of Antarctica:
United States Geological Survey Information Services
Denver, CO 80225
Arctic Perspectives - Arctic Maps
Post Office Box 75503
St. Paul, Minnesota 55175
Student Reproducible Masters
We look forward to hearing from you! Please review this activity.
| http://tea.armadaproject.org/activity/tea_activity_shutey_intro.html | 13 |
15 | Remotely-sensed imagery from aircraft and satellites represent one of the fastest-growing sources for raster GIS data. While remote-sensing technology has been around for decades, recent technological advances and legislative changes have led to an dramatic increase in the types of imagery available. In the U.S., due to recent legislative changes and the repeal of the Landsat Commercialization Act, these data can now be obtained at costs well below previous levels. Also, satellite imaging has now been around long enough to allow study of temporal changes on the land surface. For example, early Landsat images beginning in 1972 can now be compared with recent observations, providing a 25+ year record of land-use, vegetation, and urban change.
Remote-sensing technologies come in two flavors: Passive remote sensing relies on naturally reflected or emitted energy of the imaged surface (think of taking a photograph with a camera under sunlit conditions). Most remote sensing instruments fall into this category, obtaining pictures of visible, near-infrared and thermal infrared energy. Active remote sensing means that the sensor provides its own illumination and measures what comes back (think of a camera with a flash). Remote sensing technologies that use active remote sensing include lidar (laser) and radar.
Imaging systems differ significantly from camera photography in two important ways. First, they are not restricted only to “visible” part of the electromagnetic spectrum (so named because it is the range over which the human eye can see, from about 0.4 to 0.7 micrometers in wavelength). It also can measure energy at wavelengths invisible to the eye, such as near-infrared, thermal infrared and radio wavelengths. Second, most remote sensing instruments record these different wavelengths at the same time, yielding not one but numerous images of the same location on the ground, each corresponding to a different range of wavelengths (called a “band”). For example, the Enhanced Thematic Mapper instrument on the Landsat-7 satellite (launched by NASA in 1999) has seven bands in the visible, near-infrared, mid-infrared and thermal-infrared wavelengths, as well as a fine-resolution “panchromatic” band that collects over all wavelengths. Therefore, a single Landsat-7 “image” is in fact comprised of eight separate images or bands, each corresponding to a different part of the electromagnetic spectrum. During image analysis of these data, each band is treated as a layer in a raster GIS.
Passive visible and near-infrared data are used in a variety of GIS applications. Classification of vegetation and land-use is particularly common, and may be performed at a variety of temporal and spatial scales. Most earth imaging satellites are polar-orbiting, meaning that they circle the planet in a roughly north-south ellipse while the earth rotates beneath them. Therefore, unless the satellite has some sort of "pointing" capability, there are only certain times when a particular place on the ground will be imaged. The length of time between imaging can be short (~daily) or long (~once per month or even longer), depending on the satellite's design. In order to have frequent temporal coverage, the sensor must image a wide swath of ground beneath the satellite. Unfortunately, this also means that spatial resolution (i.e. the size of the smallest imaged element on the ground) must be coarse in order to image such a large area at once. Therefore, most passive remote sensing data involve a trade-off: frequent, global coverage with coarse spatial resolution, or infrequent coverage with high spatial resolution. Because applications vary in their spatial and temporal resolution requirements, a variety of sensors exist to meet these needs. For example, the Advanced Very High Resolution Radiometer (AVHRR) has 1.1 km pixels, but its images are 2400 km wide and collected every 12 hours. Landsat-7 provides high spatial resolution (15-30 m) but obtains an image less than 200 km wide only once per month. The new IKONOS satellite (launched by Space Imaging in 1999) has an even higher spatial resolution (~4 m). However, the resulting images are only 11X11 km in size and are obtained infrequently or by special request.
Since the launch of the ERS-1 Synthetic Aperture Radar (SAR) satellite in 1991, active remote sensing (radar and lidar) systems are rapidly increasing in availability. Radars are sensitive to very different surface properties than visible/near-infrared imagery. For example, rather than vegetation “color,” radar images are sensitive to the moisture content in leaves and their shape, orientation and size. In the last five years, airborne lidars have seen increasing use for mapping surface topography in three dimensions. Existing and planned radar and lidar altimeters will monitor closely the elevation of the worlds ice caps and sea level with centimeter precision.
Image processing software designed for analysis of remotely sensed data is really a specialized form of raster GIS. While it is possible to manipulate these images in mainstream raster GIS software such as ArcInfo GRID, IDRISI and GRASS, most technicians use software specifically designed to work with data formats for satellite and aircraft imagery, such as PCI, ENVI, ERDAS IMAGINE, and ERMapper. These packages are tailored to remote sensing applications and provide a wide array of tools for image filtering, classification, annotation and texture analysis.
Dr. Larry Smith is an assistant professor with the Geography Department at the University of California, Los Angeles. He also holds a joint appointment with the Department of Earth & Space Sciences. His research interests of hydrology and remote sensing have lead him to studies in Iceland and Russia. | http://www.gislounge.com/remote-sensing-technologies/ | 13 |
11 | Credit: European Southern Observatory
Two teams of researchers are now competing to develop a device that could profoundly change our understanding of the universe…but you’d be forgiven if you mistook it for a vaguely menacing hair-restoration product. Called a “laser frequency comb,” these are special laser systems that rapidly emit pulses of light across a wide range of frequencies or colors. In a plot of the emitted light, each distinct frequency appears as a peak; collectively, all the frequencies resemble a fine-toothed comb. And by examining starlight through the teeth of a laser comb, astronomers could begin finding Earth-like extrasolar planets on the cheap using ground-based observatories rather than expensive space telescopes.
A star’s spectrum, its component colors of light, can reveal whether or not it has planets circling it. Every planet gently tugs on its star as it orbits, pulling the star toward and away from us in a regular pattern that we can detect via subtle periodic changes in the star’s color. Similar to how a train whistle rises in pitch as it approaches or falls in pitch as it departs, a star’s light becomes bluer as it draws near us, and redder as it recedes. This is called a “Doppler shift.”
Doppler shifts are how the majority of planets known beyond our solar system have been discovered, but these worlds resemble Jupiter or Neptune—giant planets whose large gravitational tugs are correspondingly easier to see. Planets like Earth are far more difficult to detect via Doppler shift. Due to this difficulty, experts have believed for years that the only sure way to find extrasolar “Earths” in the near future would be via expensive space-based telescopes that would detect the dips in starlight caused by planets transiting the faces of their suns. This is how the recently launched Kepler satellite looks for planets.
This method only reveals the discovered planets’ diameters and orbits—to find other vital information about them, like their mass, astronomers must rely on Doppler shifts, which rely in turn on precisely calibrated light-measuring devices called spectrographs. But even the best current calibration methods unavoidably introduce inconsistencies into spectrographic measurements. Further, each spectrograph is unique and custom-built, causing problems for sharing and comparing data between telescopes.
Laser combs could change all that by providing a standardized—and exceedingly precise—way of measuring Doppler shifts from stars’ spectra. Not only would this make it possible to study planets Kepler finds in more detail, it could potentially allow searches for Earth-like worlds to take place from the ground, at a much lower cost.
“We’re actually starved of spectrographs right now,” says Dimitar Sasselov, an astronomer leading the laser comb team based at the Harvard-Smithsonian Center for Astrophysics. “With Kepler, we’ll have more planet candidates than we can actually confirm. The more spectrographs equipped with a laser comb, the better, because then you can easily combine and compare observations from different telescopes.”
Much work remains to be done before laser combs are ready for prime-time astronomy, though. To ensure it properly calibrates a spectrograph, a laser comb must itself be calibrated. This means syncing its ultra-fast laser bursts with the time signature from an atomic clock, a task made less daunting thanks to the US Air Force’s public network of GPS satellites. More problematically, the typical output of a laser comb doesn’t cover enough of the spectrum to be very useful to astronomers, and also has frequency “teeth” so close together they’re incompatible with existing spectrographs. Both the Harvard-Smithsonian team and their German competitors at the Max Planck Institute for Quantum Optics have painstakingly overcome these obstacles in the lab, but the resulting devices are fragile.
“These aren’t yet ‘turn-key’ products—they’re the subjects of laboratory development,” says Bill Cochran, a veteran planet-hunter at the University of Texas, Austin. “They require a small army of technicians to make them work, but I need something where I can just flip a switch, and it must work consistently and reliably for ten years!”
Consequently, both teams have begun testing their laser combs on actual observational equipment—the Harvard-Smithsonian team is using a telescope on Mount Hopkins in Arizona, while the Max Planck researchers are using a European telescope in La Silla, Chile. Next summer, just in time for Kepler’s first batch of Earth-like planet candidates, the Harvard-Smithsonian team plans to link their laser comb to a state-of-the-art spectrograph at the giant William Herschel Telescope in the Canary Islands. The Max Planck team also plans to bring its laser comb technology to the next generation of very large telescopes.
If either team succeeds in making laser combs practical for astronomy, the increased precision they would bring to observations will be nothing short of revolutionary. In addition to enabling the ability to detect Earth-like planets from the ground, laser combs would also allow much more in-depth studies of stellar activity and a better knowledge of our galaxy’s structure. They could even unlock the secrets of dark matter and dark energy, the elusive components of our universe that represent one of the greatest unsolved mysteries of modern science.
“It will affect not only our understanding of extrasolar planets and stars and galactic dynamics, but also cosmology.” Sasselov says. “Imagine being immersed in an ocean not of water but of light, surrounded by light waves that are all traveling in different directions and have different wavelengths, and trying to measure them all. Astronomers do this all the time—the waves of light that come from objects are all we really have to study them. And suddenly, you can see clearly through it, as opposed to having a blurry picture. This is what being able to calibrate using laser combs means.”
Originally published May 26, 2009 | http://seedmagazine.com/content/article/planet_hunting_down_to_earth/ | 13 |
15 | In the infrared, astronomers can gather information about the universe as it was a very long time ago and study the early evolution of galaxies. Although light travels extremely fast (186,000 miles per second) the universe is so incredibly vast that it can take up to billions of years for light to reach us. The farther away an object is, the farther in the past we see it. For example, it takes light about 8 minutes to reach us from our Sun, so solar astronomers see the Sun as it was 8 minutes ago. If a large flare started this second, they would not see it for another 8 minutes. Light from the nearest star takes about 4.3 years to reach us, and light from the center of our own galaxy takes about 25,000 years to reach us. The billions of galaxies outside our own galaxy range in distance from hundreds of thousands to billions of light years away. For the most distant galaxies, we see them as they were billions of years ago.
As a result of the Big Bang (the tremendous explosion which marked the beginning of our Universe), the universe is expanding and most of the galaxies within it are moving away from each other. Astronomers have discovered that all distant galaxies are moving away from us and that the farther away they are, the faster they are moving. This recession of galaxies away from us has an interesting effect on the light emitted from these galaxies. When an object is moving away from us, the light that it emits is "redshifted". This means that the wavelengths of light get longer and are shifted towards the red part of the spectrum. This effect, called the Doppler effect, is similar to what happens to sound waves emitted from a moving object. For example, if you are standing next to a railroad track and a train passes you while blowing its horn, you will hear the sound change from a higher to a lower frequency as the train passes you by. As a result of this Doppler effect, at large redshifts, visible light from distant sources is shifted into the infrared part of the spectrum. This means that infrared studies can give us much information about the visible spectra of very young, distant galaxies. The image on the left is an infrared view of some of the farthest galaxies ever seen. It was taken by the Hubble Space Telescope's NICMOS camera. Some of the galaxies shown here were previously unknown. (Image credit: R.I. Thompson (U. Arizona), NICMOS, HST, NASA)
In 1965, the radiation left over from the Big Bang was discovered by radio astronomers Arno Penzias and Robert Wilson. This radiation, which peaks at 3 degrees Kelvin (-454 degrees Fahrenheit) can be found in all directions in space. Astronomers believe that this radiation was much hotter in the past and that it should behave like a "blackbody" (an object that is perfectly black because it absorbs all of the electromagnetic radiation that reaches it). To prove this, additional data were needed. In 1975, infrared observations made from a balloon flight proved that the Cosmic Background Radiation follows a blackbody curve. Additional studies of the Cosmic Background Radiation were done using the COBE satellite which was launched in 1989. COBE discovered that the background radiation is not entirely smooth and shows extremely small variations in temperature. These small temperature differences may be due to variations in the density of the early universe which may have led to the formation of galaxies.
Infrared studies have also found a potential protogalaxy (a galaxy in the process of formation) more than 15 billion light years from Earth. This object, named IRAS 10214+4724, may be a huge, contracting hydrogen cloud just beginning to shine with newborn stars. This is close to the edge of the observable universe, and its light has been traveling toward us since nearly the beginning of the universe. Protogalaxies provide us with a look at the era when galaxies were first coming to life. | http://www.ipac.caltech.edu/outreach/Edu/early.html | 13 |
13 | Just as we can add and subtract constants from both sides of an equation, we can also add and subtract copies of the variable from both sides of the equation. Therefore, if the same variable appears on both sides of the equation, we can reduce them as much as possible in order to get one variable on one side of the equation. It's always nice to have just a single "x" (especially when following a treasure map, you know, as you do). We need to add or subtract the same number of copies of the variable from each side.
Remember that our mission, if we choose to accept it, is to get the variable on one side of the = sign and a number on the other side.
Solve the equation 4x = 5x + 1. Check your answer.
We'd like to have all the x's by themselves on one side of the equation, so we subtract 4 copies of x from each side to find that
0 = x + 1
Yay—so few copies! This will shave a bundle off our Kinko's bill.
We know what to do from here: subtract 1 from each side of the equation, and write -1 = x.
To check our answer, we evaluate the left side of the original equation and the right side of the original equation individually for x = -1. The left side of the equation, evaluated at x = -1, is
4(-1) = -4
The right side of the equation evaluated at x = -1 is
5(-1) + 1 = -5 + 1 = -4
Because the two sides of the equation agree when evaluated at x = -1, the solution to the equation is indeed x = -1. There's one of those negative solutions again. Sorry, Diophantus. | http://www.shmoop.com/equations-inequalities/adding-subtracting-variables.html | 13 |
13 | The Pythagorean theorem can be modeled both algebraically and geometrically. Using both models at the middle school level can help students see that algebra and geometry are related, and that they connect with other content areas in the curriculum. The NCTM Standards' electronic example, Understanding the Pythagorean Relationship Using Interactive Figures, provides a visual perspective.
The Pythagorean theorem can be used to solve the problems listed below. Some, such as The Mad Hatter's Gone! and Canadian Soccer, use 3,4,5 triangles that simplify calculations. Others, like A Stellar Garden and The Spiral on the Can, will lead to approximations when using a calculator, or to solutions containing irrational numbers.
Background information on The Pythagorean Theorem can be found in the Dr. Math FAQ, or by searching the Math Forum's Internet Mathematics Library for pythagor or Pythagorean theorem (that exact phrase).
Access to these problems requires a Membership.
| http://mathforum.org/library/problems/sets/middle_pythagorean.html | 13 |
22 | Einstein's General Relativity Theory: Gravity as Geometry
General relativity was Einstein’s theory of gravity, published in 1915, which extended special relativity to take into account non-inertial frames of reference — areas that are accelerating with respect to each other. General relativity takes the form of field equations, describing the curvature of space-time and the distribution of matter throughout space-time. The effects of matter and space-time on each other are what we perceive as gravity.
The theory of the space-time continuum already existed, but under general relativity Einstein was able to describe gravity as the bending of space-time geometry. Einstein defined a set of field equations, which represented the way that gravity behaved in response to matter in space-time. These field equations could be used to represent the geometry of space-time that was at the heart of the theory of general relativity.
As Einstein developed his general theory of relativity, he had to refine the accepted notion of the space-time continuum into a more precise mathematical framework. He also introduced another principle, the principle of covariance. This principle states that the laws of physics must take the same form in all coordinate systems.
In other words, all space-time coordinates are treated the same by the laws of physics — in the form of Einstein’s field equations. This is similar to the relativity principle, which states that the laws of physics are the same for all observers moving at constant speeds. In fact, after general relativity was developed, it was clear that the principles of special relativity were a special case.
Einstein’s basic principle was that no matter where you are — Toledo, Mount Everest, Jupiter, or the Andromeda galaxy — the same laws apply. This time, though, the laws were the field equations, and your motion could very definitely impact what solutions came out of the field equations.
Applying the principle of covariance meant that the space-time coordinates in a gravitational field had to work exactly the same way as the space-time coordinates on a spaceship that was accelerating. If you’re accelerating through empty space (where the space-time field is flat, as in the left picture of this figure), the geometry of space-time would appear to curve. This meant that if there’s an object with mass generating a gravitational field, it had to curve the space-time field as well (as shown in the right picture of the figure).
In other words, Einstein had succeeded in explaining the Newtonian mystery of where gravity came from! Gravity resulted from massive objects bending space-time geometry itself.
Because space-time curved, the objects moving through space would follow the straightest path along the curve, which explains the motion of the planets. They follow a curved path around the sun because the sun bends space-time around it.
Again, you can think of this by analogy. If you’re flying by plane on Earth, you follow a path that curves around the Earth. In fact, if you take a flat map and draw a straight line between the start and end points of a trip, that would not be the shortest path to follow. The shortest path is actually the one formed by a great circle that you’d get if you cut the Earth directly in half, with both points along the outside of the cut. Traveling from New York City to northern Australia involves flying up along southern Canada and Alaska — nowhere close to a straight line on the flat maps we’re used to.
Similarly, the planets in the solar system follow the shortest paths — those that require the least amount of energy — and that results in the motion we observe.
In 1911, Einstein had done enough work on general relativity to predict how much light passing close to the sun should be deflected, an effect that should be visible to astronomers during an eclipse.
By the time he published his complete theory of general relativity in 1915, Einstein had corrected a couple of errors, and in 1919 an expedition set out to observe the deflection of light by the sun during an eclipse, traveling to the west African island of Principe. The expedition leader was British astronomer Arthur Eddington, a strong supporter of Einstein.
Eddington returned to England with the pictures he needed, and his calculations showed that the deflection of light precisely matched Einstein’s predictions. General relativity had made a prediction that matched observation.
Albert Einstein had successfully created a theory that explained the gravitational forces of the universe and had done so by applying a handful of basic principles. To the degree possible, the work had been confirmed, and most of the physics world agreed with it. Almost overnight, Einstein’s name became world famous. In 1921, Einstein traveled through the United States to a media circus that probably wasn’t matched until the Beatlemania of the 1960s. | http://www.dummies.com/how-to/content/einsteins-general-relativity-theory-gravity-as-geo.navId-404494.html | 13 |
16 | A black hole is a massive object whose gravitational field is so intense that it prevents any form of matter or radiation from escaping. The term derives from the fact that its absorption of visible light renders the hole invisible and indistinguishable from the black space around it.
In this artist’s rendition, the yellow region at the center represents a supermassive black hole. Around it are dust grains mixed with heated, outflowing gas. But just how big is “supermassive?” Find out in the next photo!
This diagram shows different sizes of black holes as compared to our sun. As you can see, a black hole could swallow millions of stars. And in fact, they often do — see how black holes stifle stars next.
Supermassive black holes in some giant galaxies create such a hostile environment, they shut down the formation of new stars. Strangely, black holes lie at the center of most galaxies, as shown in the next image.
Like most galaxies, NGC 1097, a barred spiral galaxy, has a supermassive black hole at its center. The next image is a new Hubble photo showing another black hole at the center of a galaxy.
This is a new composite image of a galaxy cluster located about 2.6 billion light-years away. The three views of the region were taken with NASA’s Hubble Space Telescope in February 2006.
This is a depiction of a wormhole, or an Einstein-Rosen bridge, bursting open in the vacuum of space. Many believe these curves in spacetime could enable time travel. Find out about another mystery of space in the next photo: the elusive dark matter.
Dark matter composition is up for debate, with subatomic particles and black holes considered as candidates.
Black holes are predicted to exist through solutions of Einstein’s field equations of general relativity. They are not directly observable, but several indirect observation techniques in different wavelengths have been developed and used to study the phenomena they induce in their environment. In particular, gases caught by the gravitational field of a black hole are heated to considerably high temperatures before being swallowed, and thereby emit a significant amount of X-rays. Therefore, even if a black hole does not itself give off any radiation, it may nevertheless be detectable by its effect on its surrounding environment. Such observations have resulted in the general scientific consensus that, barring a breakdown in our understanding of nature, black holes do exist in our universe. | http://www.supiri.com/space/about-black-holes/ | 13 |
15 | Forests are defined by the FAO Forestry Department as `all vegetation formations with a minimum of 10 percent crown cover of trees and/or bamboo with a minimum height of 5 m and generally associated with wild flora, fauna and natural soil conditions'. In many countries, coastal areas such as beaches, dunes, swamps and wildlands - even when they are not covered with trees - are officially designated as `forested' lands and thus fall under the management responsibility of the Forestry Department or similar agency.
Forest resources (including wildlife) of coastal areas are frequently so different from their inland counterparts as to require different and special forms of management and conservation approaches. Mangroves and tidal forests for example have no parallels in terrestrial uplands. As a result, the information, policy and management requirements concerning integrated coastal area management (ICAM) for forestry are also different.
In each of the climatic regions of the world, inland forests and woodlands may extend to the sea and thus form part of the coastal area. In addition to such formations, controlled by climatic factors, special forest communities, primarily controlled by edaphic factors and an extreme water regime, are found in coastal areas and along inland rivers. Such forest communities include: mangroves, beach forests, peat swamps, periodic swamps (tidal and flood plain forests), permanent freshwater swamps and riparian forests. Of these, the first three types are confined to the coastal area, whereas the remaining types can also be found further inland.
Mangroves are the most typical forest formations of sheltered coastlines in the tropics and subtropics. They consist of trees and bushes growing below the high water level of spring tides. Their root systems are regularly inundated with saline water, although it may be diluted by freshwater surface runoff. The term `mangrove' is applied to both the ecosystem as such and to individual trees and shrubs.
Precise data on global mangrove resources are scarce. Estimates are that there are some 16 million ha of mangrove forests worldwide (FAO, 1994a). The general distribution of mangroves corresponds to that of tropical forests, but extends further north and south of the equator, sometimes beyond the tropics, although in a reduced form, for instance in warm temperate climates in South Africa and New Zealand to the south and in Japan to the north.
Mangrove forests are characterized by a very low floristic diversity compared with most inland forests in the tropics. This is because few plants can tolerate and flourish in saline mud and withstand frequent inundation by sea water.
There are two distinct biogeographic zones of mangroves in the world: those of West Africa, the Caribbean and America; and those on the east coast of Africa, Madagascar and the Indo-Pacific region. While the first contain only ten tree species, mangroves of the Indo-Pacific are richer, containing some 40 tree species (excluding palms).
Most of the animal species found in mangroves also occur in other environments, such as beaches, rivers, freshwater swamps or in other forest formations near water. On the whole, animal species strictly confined to mangroves are very few (crabs have a maximum number of species in mangroves). In many countries, however, the mangroves represent the last refuge for a number of rare and endangered animals such as the proboscis monkey (Nasalis larvatus) in Borneo, the royal Bengal tiger (Panthera tigris) and the spotted deer (Axis axis) in the Sundarbans mangroves in the Bay of Bengal, manatees (Trichechus spp.) and dugongs (Dugong dugon). Mangroves are also an ideal sanctuary for birds, some of which are migratory. According to Saenger et al. (1983), the total list of mangrove bird species in each of the main biogeographical regions includes from 150 to 250 species. Worldwide, 65 of these are listed as endangered or vulnerable, including for instance the milky stork (Mycteria cinerea), which lives in the rivers of mangroves.
This type of forest is in general found above the high-tide mark on sandy soil and may merge into agricultural land or upland forest.
Sand dune and beach vegetations are mostly scrub-like with a high presence of stunted tree growths. These coastal forest ecosystems are adapted to growing conditions that are often difficult as a result of edaphic1 or climatic extremes (strong winds, salinity, lack or excess of humidity). They are very sensitive to modifications of the ecosystem. A slight change in the groundwater level for example might eliminate the existing scrub vegetation. Sand dune and beach vegetations have an important role in land stabilization and thus prevent the silting up of coastal lagoons and rivers, as well as protecting human settlements further inland from moving sand dunes.
The dominant animal species on the adjacent beaches are crabs and molluscs. The beaches are also very important as breeding sites for sea turtles and, therefore, attract predators of turtles' eggs, such as monitor lizards (Varanus sp.).
This is a forest formation defined more on its special habitat than on structure and physiognomy. Peat swamp forests are particularly extensive in parts of Sumatra, Malaysia, Borneo and New Guinea, where they were formed as the sea level rose at the end of the last glacial period about 18 000 years ago. Domed peat swamps can be up to 20 km long and the peat may reach 13 m in thickness in the most developed domes. Animals found in peat swamps include leaf-eating monkeys such as the proboscis monkey and the langurs found in Borneo.
As with peat swamp forests, these are defined mainly by habitat and contain a diverse assemblage of forest types periodically flooded by river water (daily, monthly or seasonally). Periodic swamps can be further subdivided into tidal and flood plain forests.
Tidal forests are found on somewhat higher elevations than mangroves (although the term is sometimes used to describe mangroves as well). Such forests are influenced by the tidal movements and may be flooded by fresh or slightly brackish water twice a day. Tidal amplitude varies from place to place. Where the amplitude is high, the area subject to periodic tidal flushing is large and usually gives rise to a wide range of ecological sites. The natural vegetation in tidal forests is more diverse than that of mangroves, although still not as diverse as that of dense inland forests.
Flood plains are areas seasonally flooded by fresh water, as a result of rainwater rather than tidal movements. Forests are the natural vegetation cover of riverine flood plains, except where a permanent high water-table prevents tree growth.
The Amazon, which has annual floods but which is also influenced by tides to some 600 km inland, has very extensive permanent and periodic swamp forests. The alluvial plains of Asia once carried extensive periodic swamp forests, but few now remain as these have mostly been cleared for wetland rice cultivation. The Zaire basin is about one-third occupied by periodic swamp forests, many disturbed by human interventions, and little-studied (Whitmore, 1990).
Throughout the world, flood plains are recognized as being among the most productive ecosystems with abundant and species-rich wildlife.
The term is here used for permanent freshwater swamp forests. As opposed to periodic swamps, the forest floor of these is constantly wet and, in contrast to peat swamps, this forest type is characterized by eutrophic (organomineral) soils, a richer plant species composition and a fairly high pH (6.0 or more) (Whitmore, 1990).
Also called riverine or gallery forests. These are found adjacent to or near rivers. In the tropics, riparian forests are characterized as being extremely dense and productive, and have large numbers of climbing plants.
In addition to their aesthetic and recreational values, riparian forests are important in preserving water quality and controlling erosion and as wildlife refuges especially for amphibians and reptiles, beavers, otters and hippopotamus. Monkeys and other tree-dwelling mammals and birds are often abundant in riparian forests.
Other coastal forest ecosystems include: savannah woodlands, dry forests, lowland rain forests, temperate and boreal forests and forest plantations. Many of the natural coastal forests are under severe threat. Most of the lowland rain forests have vanished as a result of the ease with which commercial trees, standing on slopes facing the sea or other accessible coastal waters, could be harvested merely by cutting them down and letting them fall into the nearby water. Similarly, most coastal dry forests and savannah woodlands have been seriously degraded by overexploitation for fuelwood and construction poles, and conversion to agriculture or to grazing lands through the practice of repeated burning.
Coastal plantations have often been established for both production and protection purposes. As an example of the latter, coastal plantations were established in Denmark as far back as the 1830s to stabilize sand dunes which were moving inland and which had already covered several villages.
The total economic value of coastal forests stems from use values (direct uses, indirect uses and option values) and non-use values (existence and bequest values).2 Table C.1 gives examples of the different values as related to coastal forests. Table C.4 gives examples of valuation approaches applicable to the various types of forest products or services.
|Use values||Use values||Use values||Non-use values|
|Direct uses||Indirect uses||Option values||Existence and bequest values|
|Timber||Nutrient cycling (including detritus for aquatic food web)||Premium to preserve future direct and indirect uses (e.g. future drugs, genes for plant breeding, new technology complement)||Forests as objects of intrinsic value, or as a responsibility (stewardship)|
|Non-timber forest products (including fish and shellfish)||Watershed protection|| ||Endangered species|
|Recreation||Coastal protection|| ||Charismatic species|
|Nature tourism||Air and water pollution reduction|| ||Threatened or rare habitats/ecosystems|
|Genetic resources||Microclimate function|| ||Cherished landscapes|
|Education and research||Carbon store|| ||Cultural heritage|
|Human habitat||Wildlife habitat (including birds and aquatic species)|| || |
Source: adapted from Pearce, 1991.
Direct use values, in particular the commercial value of timber and other forest products, often dominate land-use decisions. The wider social and environmental values are often neglected, partly as a result of the difficulty in obtaining an objective estimate of these, even though in many cases these values exceed the value of traded and untraded forest products.
Indirect use values correspond to `ecological functions' and are at times referred to as environmental services. Some of these occur off-site, i.e. they are economic externalities and are therefore likely to be ignored when forest management decisions are made.
The option existence and bequest values are typically high for coastal forests - especially for tropical rain forests or forests containing endangered or charismatic animal species.
In addition to the activities carried out within the coastal forests (see below), small- and large-scale forest industries are also often found in coastal areas, taking advantage of the supply of raw materials and the ease of transport by waterways and roads, the existence of ports for export, etc. In addition to sawmills and pulp and paper mills, these forest industries may include veneer and particle board factories, charcoal kilns (particularly near mangrove areas), furniture makers and commercial handicraft producers.
There is little information available on the value of marketed goods from coastal forests. In general, their contribution to national gross domestic product (GDP) is small and this fact may lead to their being neglected. Commercial wood production from coastal forests ranges from timber, poles and posts to fuelwood, charcoal and tannin. Non-wood products include thatch, fruits, nuts, honey, wildlife, fish, fodder and medicinal plants. A list of forest-based products obtainable from mangroves is shown in Box C.1.
Products obtainable from mangroves
A. Mangrove forest products
Food, drugs and beverages
Tannins - used to preserve leather and tobacco
Paper - various
B. Other natural products
Source: adapted from FAO, 1984a.
Accounts of government forest revenues are often a poor indication of the value of the forest products. As an example, in 1982/83, in the Sundarbans mangroves of Bangladesh, some of the royalties collected by the forestry department were exceedingly low: for sundri (Heritiera fomes) fuelwood for instance, the market rate was nearly 40 times the royalty rate; and for shrimps the minimum market rate to royalty rate ratio at the time was 136:1 (FAO, 1994a).
Frequently, the value of untraded production (e.g. traditional fishing, hunting and gathering) in mangrove forest areas is substantial, the value often exceeding that from cultivated crops and from formal-sector wage income (Ruitenbeek, 1992).
Other direct use values of the coastal forests include their social functions. Coastal forests provide habitat, subsistence and livelihood, to forest dwellers, thereby supplying the means to hold these communities together, as well as opportunities for education, scientific research, recreation and tourism. Worldwide, the lives of millions of people are closely tied to productive flood plains, the associated periodic river floods and subsequent recessions. The socio-economic importance of these areas is especially evident in the more arid regions of the developing world. The seasonal ebb and flood of river waters determines the lifestyles and agricultural practices of the rural communities depending on these ecosystems.
Examples of the educational value of coastal forests are found in peninsular Malaysia, where more than 7 000 schoolchildren annually visit the Kuala Selangor Nature Park, a mangrove area with boardwalk, education centre, etc. (MNS, 1991). In nearby Kuantan, along the Selangor river, a main tourist attraction are evening cruises on the river to watch the display of fireflies and, along the Kinabatangan river in Sabah, cruises are undertaken to watch the proboscis monkeys as they settle in for the night in the riparian forest.
In terms of employment opportunities in coastal forests, ESCAP (1987) estimated the probable direct employment offered by the Sundarbans mangrove forest in Bangladesh to be in the range of 500 000 to 600 000 people for at least half of the year, added to which the direct industrial employment generated through the exploitation of the forest resources alone equalled around 10 000 jobs.
A prominent environmental role of mangroves, tidal, flood plain and riparian forests is the production of leaf litter and detrital matter which is exported to lagoons and the near-shore coastal environment, where it enters the marine food web. Mangroves and flood plains in particular are highly productive ecosystems and the importance of mangrove areas as feeding, breeding and nursery grounds for numerous commercial fish and shellfish (including most commercial tropical shrimps) is well established (Heald and Odum, 1970; MacNae, 1974; Martosubroto and Naamin, 1977). Since many of these fish and shellfish are caught offshore, the value is not normally attributed to mangroves. However, over 30 percent of the fisheries of peninsular Malaysia (about 200 000 tonnes) are reported to have some association with the mangrove ecosystem. Coastal forests also provide a valuable physical habitat for a variety of wildlife species, many of them endangered.3
Shoreline forests are recognized as a buffer against the actions of wind, waves and water currents. In Viet Nam, mangroves are planted in front of dykes situated along rivers, estuaries and lagoons under tidal influence, as a protection measure (Løyche, 1991). Where mangroves have been removed, expensive coastal defences may be needed to protect the agricultural resource base. In arid zones, sand dune fixation is an important function of coastal forests, benefiting agricultural and residential hinterland.
In addition, mangrove forests act as a sediment trap for upland runoff sediments, thus protecting sea grass beds, near-shore reefs and shipping lanes from siltation, and reducing water turbidity. They also function as nutrient sinks and filter some types of pollutants.
The option value of coastal forests - the premium people would be prepared to pay to preserve an area for future use by themselves and/or by others, including future generations - may be expected to be positive in the case of most forests and other natural ecosystems where the future demand is certain and the supply, in many cases, is not.
An example of how mangrove values are estimated is given in Box C.2.
Net present value of mangrove forestry and fisheries in Fiji
Using data on the amounts of wood and fish actually obtained from mangrove areas and their market value and harvesting costs, the net present value (NPV) of forestry and fisheries were estimated for three mangrove areas in Fiji, using the incomes or productivity approach with a 5 percent social discount rate and a 50-year planning horizon.
Forestry net benefits
Commercial net benefits were calculated as wood harvested multiplied by market value, minus harvesting costs.
Subsistence net benefits were calculated using the actual amount of wood harvested multiplied by the shadow value in the form of the price for inland or mangrove fuelwood sold by licensed wood concessionaires.
Taking the species composition of the mangrove area into account, the weighted average NPV was estimated for each of the three main mangrove areas yielding the following:
NPV: US$164 to $217 per hectare.
Fisheries net benefits
In only one of the three areas was the fisheries potential judged to be fully utilized and the data are based on this area.
Annual catch (commercial and subsistence): 3 026 tonnes. Area of mangroves: 9 136 ha, thus averaging 331 kg per hectare, equalling $864 per hectare in market value annually.
By taking harvesting costs into account, the following result was obtained:
NPV: $5 468 per hectare, or approx. $300 per hectare per year.
This is assuming a proportionate decline in the fisheries. With only a 50 percent decline (as some of the fish are not entirely dependent on the mangroves) the figure for the NPV is $2 734 per hectare.
The value of mangroves for nutrient filtering has been estimated, using the alternative cost or shadow project method, by Green (1983), who compared the costs of a conventional waste water treatment plant with the use of oxidation ponds covering 32 ha of mangroves. An average annual benefit of $5 820 per hectare was obtained. This figure is, however, only valid for small areas of mangroves and, as it represents the average, not the marginal value, it should be treated with caution.
The option value and the existence value of mangroves are not captured using the above incomes approach and an attempt to include these values was made by using the compensation approach, as the loss of fishing rights in Fiji caused by the reclamation of mangroves has been compensated by the developers. The recompense sum is determined by an independent arbitrator within a non-market institution. Large variations in recompense sums were however recorded ($49 to $4 458 per hectare) according to the end use and the bargaining power of the owner of the fishing rights. Using 1986 prices the following results were obtained:
Average: $30 per hectare for non-industrial use and $60 per hectare for industrial use.
Maximum: $3 211 per hectare.
By adding the benefits foregone in forestry and fisheries, it can be concluded that the minimum NPV of the mangroves of Fiji is $3 000 per hectare under present supply and demand and existing market and institutional organizations.
Source: Lal, 1990.
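To make the arithmetic behind Box C.2 easier to follow, here is a minimal sketch of an incomes-approach NPV calculation. It assumes a constant annual net benefit of roughly $300 per hectare (the fisheries figure quoted above), together with the 5 percent discount rate and 50-year horizon used in the study; it is an illustration only, not a recalculation of Lal's data.

' Minimal sketch of a net present value (NPV) calculation for a constant
' annual net benefit, using the 5 percent rate and 50-year horizon cited above.
Module NpvSketch
    Function Npv(ByVal annualNetBenefit As Double, ByVal rate As Double, ByVal years As Integer) As Double
        Dim total As Double = 0
        For t As Integer = 1 To years
            total += annualNetBenefit / Math.Pow(1 + rate, t)
        Next
        Return total
    End Function

    Sub Main()
        ' Assumed annual net benefit of roughly $300 per hectare (after harvesting costs).
        Dim result As Double = Npv(300, 0.05, 50)
        Console.WriteLine("Approximate NPV per hectare: $" & Math.Round(result))
    End Sub
End Module

Summing the discounted benefits in this way gives roughly $5 480 per hectare, which is why the box can describe the $5 468 NPV as approximately $300 per hectare per year.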
The term coastal forests covers a wide range of different ecosystems many of which can still be classified as natural ecosystems, although - particularly in the temperate region - they may have been modified through human interventions over the years. However, they still generally contain a greater biological diversity (at genetic, species and/or ecosystem levels) than most agricultural land.
The most important characteristics of coastal forests are probably their very strong links and interdependence with other terrestrial and marine ecosystems.
Mangroves exemplify such links, existing at the interface of sea and land, and relying, as do tidal and flood plain forests, on fresh water and nutrients supplied by upland rivers to a much larger extent than more commonly found inland forest types. Figure C.1 illustrates the mangrove-marine food web.
Source: CV-CIRRD, 1993.
In the arid tropics, there may be no permanent flow of fresh water to the sea, and the leaf litter and detritus brought to the marine ecosystem by tidal flushing of coastal mangrove areas, where these exist, is the only source of nutrients from the terrestrial zone during the dry season. This further magnifies the role of mangroves in the marine food web. In the Sudan, for example, such a role is considered to be a crucial function of the narrow mangrove fringe found along parts of the Red Sea coast (Løyche-Wilkie, 1995).
As for the wildlife species found in coastal forests, most are dependent on other ecosystems as well. Mammals may move between different ecosystems on a daily or seasonal basis, water birds are often migratory, and many commercial shrimps and fish use the mangroves as spawning ground and nursery sites but move offshore in later stages of their life cycle. Anadromous species, such as salmon, spawn in freshwater rivers, but spend most of their life cycle in marine waters; catadromous species on the other hand, spawn at sea, but spend most of their life in freshwater rivers. These species probably thus pass through coastal forests at some point in their life.
A variety of natural or human-incurred risks and uncertainties affect the sustainable management of coastal forest resources. Some natural risks may be exacerbated by human activities. Uncertainty arises from: the natural variability inherent in coastal forest ecosystems; the incomplete knowledge of the functioning of complex natural ecosystems; the long time-frame needed in forest management; and the inability to predict accurately the future demands for goods and services provided by natural and cultivated forests.
Natural risks. These include strong winds, hurricanes and typhoons, floods (including tidal waves) and droughts, which can all cause considerable damage to coastal forests.
Global climate change caused by human actions may, through a rise in temperatures, result in `natural' risks such as a rise in sea level, changes in ocean currents, river runoff and sediment loads, and increases in the frequency and severity of floods, drought, storms and hurricanes/typhoons.
Human-incurred threats. Human-incurred threats to coastal forests stem mainly from the competition for land, water and forest resources. These include conversion of coastal forest to other uses, building of dams and flood control measures, unsustainable use of forest resources both within the coastal area and further upland, and pollution of air and water.
In many developing countries, deforestation continues to be significant; the annual loss of natural forests resulting from human pressures amounted to an estimated 13.7 million ha in the 1990 to 1995 period (FAO, 1997d). Human-incurred threats to forests are often more pronounced in coastal areas as a result of the relatively high population density of such areas caused by the availability of fertile soils, fishery resources and convenient trade links with other domestic and foreign markets.
Natural variability. One particular uncertainty faced by forest managers relates to the natural variability exhibited by the coastal forest and wildlife resources. Such natural variability can be found at two levels:
The above risks and uncertainty caused by incomplete knowledge are compounded by the long time-frame needed in forest and wildlife management. Trees, and some animals, need a long time to mature: 30 years for mangrove forests used for poles and charcoal; and 150 years for oak (Quercus) grown for timber in temperate forests. This long period between regeneration and harvesting makes the selection of management objectives more difficult because of further uncertainty regarding future market preferences for specific forest and wildlife products or services, future market prices, labour costs, etc.
An important characteristic of natural ecosystems (including natural coastal forests) is that once a natural ecosystem has been significantly altered, through unsustainable levels or inappropriate methods of use, it may be impossible to restore it to its original state. Conversion of natural coastal forests to other uses is an extreme example.
It may be possible to replant mangrove trees in degraded areas or in abandoned shrimp ponds, but the resulting plantation will have far fewer plant and animal species than the original natural mangrove ecosystem.
Acid sulphate soils. A particular cause of concern with regard to irreversibility is the high pyrite (FeS2) content in many mangrove and tidal forest soils, which renders them particularly susceptible to soil acidification when subject to oxidation. This is probably the most acute problem faced by farmers and aquaculture pond operators when converting such forests and other wetlands to rice cultivation or aquaculture ponds, and it makes restoration of degraded areas almost impossible.
Reclamation of acid sulphate soils requires special procedures such as saltwater leaching alternating with drying out, or the establishment and maintenance of a perennially high, virtually constant groundwater-table, through a shallow, intensive drainage system. These may be technically difficult or economically unfeasible.
Coastal forests tend to be owned by the state. The inability of many state agencies in the tropics to enforce property rights, however, often means that a de facto open access regime exists, which frequently results in overexploitation of forest resources.4 This problem is only partly overcome by awarding concessions and usufructuary rights as these are often short-term in nature and not transferable and, therefore, fail to provide incentives for investments and prudent use of the resources.
Where the state agency has the ability to enforce laws and regulations and the government has a policy of promoting multipurpose management of state-owned forests, sustainable forest management can be achieved (Box C.3).
Mangrove stewardship agreement in the Philippines
One example of successful multipurpose management of a state-owned coastal forest using a participatory approach and aiming to restore the more traditional communal ownership of forests, is the issuing of `Mangrove Stewardship Agreements' in the Philippines. Local communities (or private individuals) obtain a 25-year usufruct lease over a given mangrove area with the right to cut trees selectively, establish new mangrove plantations and collect the fish and shellfish of the area based on a mutually agreed mangrove forest management plan. The Department of Environment and Natural Resources (DENR), which implements this scheme, will assist the local communities and individuals in preparing this management plan if needed. Local NGOs are also contracted by DENR to assist in the initial `Community Organizing' activities, which include an awareness campaign of the benefits obtainable from mangrove areas and an explanation of the steps involved in obtaining a Stewardship Agreement.
As a result of the variety of goods and services provided by coastal forests and their links with other ecosystems, a large number of institutions often have an interest in, and sometimes jurisdiction over parts of, the coastal forest ecosystems. This raises the risk of conflict between institutions, even within a single ministry.
The forestry department or its equivalent generally has jurisdiction over the coastal forest resources. However, the parks and wildlife department, where it exists, may have jurisdiction over the forest wildlife, and the fisheries department almost certainly has jurisdiction over the fisheries resources found in the rivers within coastal forests, and may regulate the use of mangrove areas for cage and pond culture. Other institutions with an interest in coastal forests include those related to tourism, land-use planning, mining, housing, ports and other infrastructure.
In many countries, there is often little public awareness of the variety of benefits provided by coastal forests, and campaigns should be conducted to overcome this. Mangroves and other swamp forests in particular have often been regarded as wastelands with little use except for conversion purposes. As a result of the low commercial value of wood products compared with the potential value of agriculture or shrimp production, conversion has often been justified, in the past, on the basis of a financial analysis of only the direct costs and benefits. Such analyses, however, do not take into account the value of the large number of unpriced environmental and social services provided by coastal forests, which in many cases far outweigh the value of any conversion scheme.5
The ecological links between coastal forests and other terrestrial and marine ecosystems and the institutional links between the forestry sector and other sectors, must be addressed through an area-based strategy that takes a holistic approach to sustainable development. An ICAM strategy provides the appropriate framework for such an approach.
The nature of coastal forests as described above calls for a precautionary approach6 to the management of their resources and the adoption of flexible strategies and management plans drawing on the knowledge of the local communities.
The precautionary principle can be incorporated into coastal forest management by imposing sustainability constraints on the utilization of coastal forest ecosystems. Other measures include environmental impact assessments, risk assessments, pilot projects and regular monitoring and evaluation of the effects of management. Research, in particular on the interdependence of coastal forests and other ecosystems and on the quantification and mitigation of negative impacts between sectors, is also needed.
Environmental impact assessments7 should be undertaken prior to conversions or other activities that may have a significant negative impact on coastal forest ecosystems. Such activities may arise within the forest (e.g. major tourism development) or in other sectors outside the forest (e.g. flood control measures). Where there is insufficient information on the impact of proposed management actions, applied research and/or pilot projects should be initiated.
Public participation in the management of coastal forest resources will increase the likelihood of success of any management plan and should be accompanied by long-term and secure tenure/usufruct.
1 See Glossary.
2 For a description of these concepts, see Part A, Box A.24.
3 See Section 1.1.
4 See Part A, Section 1.6.1 and Box A.2. Also Part E, Box E.7.
5 See Part A, Section 1.6.1 and Boxes A.22 and A.24.
6 See Part A, Section 1.6.3 and Boxes A.3 and A.5.
7 See Part A, Box A.6. | http://www.fao.org/docrep/W8440e/W8440e11.htm | 13 |
13 | The Physics Behind The Rocket
Rockets fly. We all know that. But how do they fly? And more importantly, how can we calculate how they will fly given certain parameters? The simple answer is that rockets fly by using Newton's third law: for every action there is an equal and opposite reaction. But the calculations that govern rocket motion are much more complicated than this. I will go over the derivation of these calculations here.
To figure out how a rocket will accelerate, we rely heavily on conservation of momentum. That is, for a given system, in our case the rocket-gas system, the TOTAL momentum will remain constant even if individual components move around. So if some component of the system (the gas) moves off in one direction, carrying some momentum with it, the other component (the rocket) must change its motion so that the two changes in momentum exactly cancel each other out.
If a rocket is moving through space with a given velocity, V, it is quite easy to figure out its momentum, Pi, since it is simply equal to the mass of the rocket times its velocity: Pi = MV.
Once you apply thrust, however, the situation becomes more complicated. You now have two masses, and two momentums, to keep track of. However, keep in mind that the total momentum of the system remains unchanged. So if we call the change in the velocity of the rocket dV, the mass of the gas emitted dM, and the velocity of the gas emitted relative to a stationary observer U, then the situation becomes Pf = U*dM + (M - dM)(V + dV), since the momentum of the system is equal to the momentum of the gas plus the new momentum of the rocket.
We now have all the elements we need to relate the velocity of the emitted gas to the velocity of the rocket, and to find the final momentum of the system. To do this, we will need to solve the above equation for the velocity of the gas. Our first step is to write U in terms of the velocity of the gas relative to the rocket and the velocity of the rocket. This relationship follows from simple relative-velocity addition, written out below.
Writing this in equation form, we get U = V + dV - Vgas (from here on we write Vg for Vgas, the speed of the exhaust relative to the rocket). Putting this into our original equation, we get
Pf = (V + dV - Vg)dM + (M - dM)(V + dV)
Since Pi = Pf, we can further rewrite the equation as
MV = (V + dV - Vg)dM + (M - dM)(V + dV)
Multiplying this equation out gives us
MV = VdM + dVdM - VgdM + MV + MdV - VdM - dMdV
Canceling positives with negatives and canceling MV across the equal sign allows us to rewrite the equation as
0 = -VgdM + MdV, or VgdM = MdV
This equation is good for modeling the motion at a specific instant in time, but what if we were more interested in the change over a period of time, i.e. how much the rocket would accelerate if we applied thrust in the form of emitting gas at a certain velocity over a period of time? To modify the equation to model the situation over a period of time, we simply need to divide both sides by the change in time, or dt, which gives us the equation
Vg(dM/dt) = M(dV/dt)
Since dV/dt is acceleration, and mass times acceleration is force, we can further rewrite this equation as
Fnt = Vg(dM/dt)
where Fnt is the thrust provided by the gas. If we let R = dM/dt we can further simplify this equation to
Fnt = R*Vg
We can now use this equation to find the force produced by the rocket when given the mass of fuel burned, the original mass of the rocket, the distance covered during the burn, and the duration of the burn. Once we have that value, we can easily rearrange the equation to find other pieces of information, such as how long we would need to burn to change the speed of the rocket, dv, by a certain amount. See a sample calculation | http://ffden-2.phys.uaf.edu/211.fall2000.web.projects/I.%20Brewster/physics.html | 13 |
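As a rough numerical illustration of how the thrust relation might be used (this is a sketch with made-up numbers, not the sample calculation linked above), the following Visual Basic snippet computes the thrust for an assumed burn rate and exhaust speed, and estimates the burn time needed for a small change in the rocket's speed:

' A minimal numerical sketch of Fnt = R * Vg.  The burn rate, exhaust speed and
' rocket mass below are illustration values, not data from the text.
Module RocketThrustSketch
    Sub Main()
        Dim burnRate As Double = 2.0        ' R = dM/dt in kg/s (assumed)
        Dim exhaustSpeed As Double = 1500.0 ' Vg in m/s, relative to the rocket (assumed)
        Dim rocketMass As Double = 400.0    ' M in kg (assumed)

        ' Thrust from Fnt = R * Vg
        Dim thrust As Double = burnRate * exhaustSpeed
        Console.WriteLine("Thrust Fnt = " & thrust & " N")

        ' For a small burn, M dV = Vg dM, so the burn time needed for a
        ' desired small speed change dV is t = M dV / (Vg R).
        Dim desiredDV As Double = 30.0      ' m/s (assumed)
        Dim burnTime As Double = rocketMass * desiredDV / (exhaustSpeed * burnRate)
        Console.WriteLine("Burn time for dV = " & desiredDV & " m/s: " & burnTime & " s")
    End Sub
End Module

Because the expelled mass in this example is small compared with the rocket's mass, the simple relation M dV = Vg dM is a reasonable approximation; for large burns the changing rocket mass would have to be taken into account.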
17 | Continuity and Limits
In this text we'll just introduce a few simple techniques for evaluating limits and show you some examples. The more formal ways of finding limits will be left for calculus.
A limit of a function at a certain x-value does not depend on the value of the function for that x. So one technique for evaluating a limit is evaluating a function for many x-values very close to the desired x. For example, let f(x) = 3x. What is the limit of f(x) as x approaches 4? Let's find the values of f at some x-values near 4: f(3.99) = 11.97, f(3.9999) = 11.9997, f(4.01) = 12.03, and f(4.0001) = 12.0003. From this, it is safe to say that as x approaches 4, f(x) approaches 12. That is to say, the limit of f(x) as x approaches 4 is 12.
The technique of evaluating a function for many values of x near the desired value is rather tedious. For certain functions, a much easier technique works: direct substitution. In the problem above, we could have simply evaluated f(4) = 12, and had our limit with one calculation. Because a limit at a given value of x does not depend on the value of the function at that x-value, direct substitution is a shortcut that does not always work. Often a function is undefined at the desired x-value, and in some functions the value of f(a) is not equal to the limit of f(x) as x approaches a. So direct substitution is a technique that should be tried with most functions (because it is so quick and easy to do) but always double-checked. It tends to work for the limits of polynomials and trigonometric functions, but is less reliable for functions which are undefined at certain values of x.
The other simple technique for finding a limit involves direct substitution, but requires more creativity. If direct substitution is attempted, but the function is undefined for the given value of x, algebraic techniques for simplifying a function may be used to find an expression of the function for which the value of the function at the desired x is defined. Then direct substitution can be used to find the limit. Such algebraic techniques include factoring and rationalizing the denominator, to name a few. However a function is manipulated so that direct substitution may work, the answer still should be checked by either looking at the graph of the function or evaluating the function for x-values near the desired value. Now we'll look at a few examples of limits.
Consider the function f(x) = x for x < 0, and f(x) = x + 1 for x ≥ 0. What is the limit of f(x) as x approaches 0 from the left, what is the limit of f(x) as x approaches 0 from the right, and what is the limit of f(x) as x approaches 0?
Consider the function f(x) = x for all x ≠ 3, and f(x) = 2 for x = 3. What is the limit of f(x) as x approaches 3? | http://www.sparknotes.com/math/precalc/continuityandlimits/section2.rhtml | 13
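The "evaluate near the point" technique from the start of this section can be used to check that last example numerically. The sketch below is ours (the function name and sample x-values are chosen for illustration): it evaluates f near x = 3, and the outputs cluster around 3, suggesting that the limit is 3 even though f(3) = 2.

' A rough numerical check of the last example: f(x) = x for x <> 3, f(3) = 2.
Module LimitSketch
    Function F(ByVal x As Double) As Double
        If x = 3 Then
            Return 2
        Else
            Return x
        End If
    End Function

    Sub Main()
        ' Evaluate f at points closing in on 3 from both sides.
        Dim samples() As Double = New Double() {2.9, 2.999, 2.99999, 3.00001, 3.001, 3.1}
        For Each x As Double In samples
            Console.WriteLine("f(" & x & ") = " & F(x))
        Next
        ' The values approach 3, so the limit as x approaches 3 is 3,
        ' even though f(3) itself equals 2.
    End Sub
End Module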
14 | A science fair question is the very beginning of a science fair experiment. A science fair project question might start with something like: "Why does.." or "How will..." or "Where does..." and so forth.
In our tomato plant example our question might be: "Is tomato plant growth speed affected by the structure of its growth medium?" This question is asking whether tomato plants will grow better or worse in soil or water if everything else is equal.
An experiment is just a trial where you are trying to figure out whether something is true or not. So, ultimately, you are going to try and answer your science fair question by doing an experiment.
You have to be careful with your science fair question – if what your question is asking is not measurable then it will not make a good science fair experiment.
Avoid questionnaire type questions that require people to give opinions, their impressions, or their memories – this is not science because it is not objectively measurable.
Avoid dangerous science fair experiments! There is no need – there are plenty of cool science fair experiments out there that do not require caustic chemicals or other dangerous materials.
Avoid immeasurable science fair experiments – science relies on measurements, if you can't measure what you're doing then you can’t do science.
And lastly – be creative! When you look at what people do at science fairs, you will see a lot of the same kind of simple (and boring) science experiments year after year. Spend a little time now and come up with a science fair experiment that you can be proud of – it will pay off in a big way.
So what do you want to come up with when forming your science fair question? Look at the research you did on your topic of interest and your list of ideas you came up with during your brainstorming session. Think about a specific question you have about your topic.
What if your topic was plant growth in (nutrient filled) water instead of soil (hydroponics)? An example of a science fair question for you might be: “What is the growth speed difference between tomato plants grown hydroponically versus those grown in potting soil?”
If you look at the hydroponically grown tomato example above, you’ll see that we were very specific in what we were asking: “What is the growth speed difference between tomato plants grown hydroponically and those grown in soil?”
When you have a question in mind, you will want to take note of three things:
An Independent Variable is what you (the scientist) are changing or enacting in order to do your science fair experiment – there is only one Independent Variable in any valid experiment and in our tomato experiment it would be the difference in growing mediums for our tomato plants (one is a nutrient solution and the other is soil.)
The Dependent Variable is what changes as a result of the Independent Variable – so in our example it would the growth difference in the tomato plants.
There can be more than one Dependent Variable. In our example the growth difference in the tomato plants might be measured in the height of the plant and in the size of the tomatoes. Each of these would be a separate Dependent Variable.
Lastly Controlled Variables are the things that you the scientist would want to keep constant (unchanging) throughout your science fair experiment.
In our science fair experiment an example of a Controlled Variable would be the amount of light each plant (the ones in nutrient solution and the ones in soil) would receive – we want that amount to be the same, otherwise it could affect the growth of the plants differently and therefore cause the experiment to fail because we would not know if difference in plant growth were due to the different growth mediums or different exposures to light.
When doing your science fair experiment it will be very important for you to identify any Controlled Variables that might affect the outcome of your experiment.
Got your science fair question? Great, let’s go on. | http://www.cool-science-projects.com/science-fair-question.html | 13 |
26 | To actually use a Boolean variable, you can assign a value to it. By default, if you declare a Boolean variable but do not initialize it, it receives a value of False:
Public Module Exercise
    Public Function Main() As Integer
        Dim EmployeeIsMarried As Boolean

        MsgBox("Employee Is Married? " & EmployeeIsMarried)
        Return 0
    End Function
End Module
This would produce a message box that displays: Employee Is Married? False
To initialize a Boolean variable, assign it a True or a False value. In the Visual Basic language, a Boolean variable can also deal with numeric values. The False value is equivalent to 0. For example, instead of False, you can initialize a Boolean variable with 0. Any other numeric value, whether positive or negative, corresponds to True:
Public Module Exercise
    Public Function Main() As Integer
        Dim EmployeeIsMarried As Boolean

        EmployeeIsMarried = -792730
        MsgBox("Employee Is Married? " & EmployeeIsMarried)
        Return 0
    End Function
End Module
The number can be decimal or hexadecimal:
Public Module Exercise
    Public Function Main() As Integer
        Dim EmployeeIsMarried As Boolean

        EmployeeIsMarried = &HFA26B5
        MsgBox("Employee Is Married? " & EmployeeIsMarried)
        Return 0
    End Function
End Module
As done with the other data types we have used so far, a Boolean values can be involved with a procedure. This means that a Boolean variable can be passed to a procedure and/or a function can be made to return a Boolean value.
To pass an argument as a Boolean value, in the parentheses of the procedure, type the name of the argument followed by the As Boolean expression. Here is an example:
Private Sub CheckingEmployee(ByVal IsFullTime As Boolean)

End Sub
In the same way, you can pass as many Boolean arguments as you need, and you can combine Boolean and non-Boolean arguments as you judge necessary. Then, in the body of the procedure, use (or don't use) the Boolean argument.
Just as done for the other data types, you can create a function that returns a Boolean value. When declaring the function, specify its name and the As Boolean expression on the right side of the closing parenthesis. Here is an example:
Public Function IsDifferent() As Boolean

End Function
Of course, the function can take arguments of any kind you judge necessary:
Public Function IsDifferent(ByVal Value1 As Integer, ByVal Value2 As Integer) As Boolean

End Function
In the body of the function, do whatever you judge necessary. Before exiting the function, you must return a value that evaluates to True or False. | http://www.functionx.com/visualbasic/conditions/BooleanValues.htm | 13 |
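As a minimal illustration of these last points, here is one possible complete version of such a function together with a call that stores and displays its Boolean result. The comparison rule (returning True when the two values are not equal) is invented for the example:

Public Module Exercise
    ' Returns True when the two values are not equal, False otherwise.
    Public Function IsDifferent(ByVal Value1 As Integer, ByVal Value2 As Integer) As Boolean
        Return Value1 <> Value2
    End Function

    Public Function Main() As Integer
        Dim Changed As Boolean

        ' The Boolean result of the function can be stored in a Boolean variable...
        Changed = IsDifferent(24, 58)

        ' ...and then displayed or passed to another procedure.
        MsgBox("The two values are different: " & Changed)
        Return 0
    End Function
End Module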
10 | The following blog post is part of a blog series called "Comments on the Common Core," written by Eye On Education's Senior Editor, Lauren Davis.
Persuasive writing has been very popular in ELA classrooms in recent years. During a persuasive writing unit, students are typically asked to write a letter or essay convincing someone to do or believe something. Students are taught persuasive techniques such as bandwagon, glittering generalities, and snob appeal.
Now the Common Core is moving teachers from persuasion to argument. This shift might be confusing to people who see “persuasion” and “argument” as the same thing. However, the Common Core’s authors draw a distinction between the terms.
When writing to persuade, … [one] common strategy is an appeal to the credibility, character, or authority of the writer … Another is an appeal to the audience’s self-interest, sense of identity, or emotions …. A logical argument, on the other hand, convinces the audience because of the perceived merit and reasonableness of the claims and proofs offered rather than either the emotions the writing evokes in the audience or the character or credentials of the writer. (The Common Core State Standards, p. 24)
In other words, argument is about logic, not emotion. The Common Core authors also say that argument has a “special place” in the standards, since it is a crucial genre for college and careers.
So how do teachers adjust their lessons to fit the new requirements? Here are some strategies for teaching argument.
- Teach concession-refutation. Students should be aware of and address the other side of an issue, not just their own side. You may need to give students sentence frames. You can find some on the website of Eye On Education’s author Amy Benjamin. Go to http://www.amybenjamin.com and click on “Writing” under Common Core—the sentence frames are part of that PowerPoint.
- Show students how to avoid common logical errors. A list of common logical fallacies (with examples) can be found here. The site www.fallacyfiles.org contains additional examples of logical fallacies in the world. Students can even contribute to the site.
- Analyze mentor texts with students. For example, students can look for examples of concession-refutation in newspaper articles, and they can see how an author supports his/her claims with logical and clear evidence.
- Teach students how to marshal facts. Effective argumentation requires strong evidence. Students need to learn how to gather that evidence and how to incorporate it into their writing. Don’t just let students go to Google and pick the first thing they see. Teach students how to create focused search terms; how to narrow their search results; how to evaluate a website for reliability, accuracy, currency, and bias; how to incorporate information into their essays (when to quote and when to paraphrase, and what constitutes a real paraphrase vs. plagiarism); and how to cite sources.
- Teach students some academic vocabulary involving argument writing—claim, evidence, marshal, concession, refutation, etc.
And even though argumentation should be your main focus based on the Common Core’s requirements, I’m all for throwing in some persuasive lessons if time permits. Teaching students to understand emotional appeals will help them become more media literate and savvy about propaganda in the world around them. I’d hate to see that left out completely. You could also do argument writing that includes elements of persuasion, since a lot of authors combine facts and emotion.
How are you teaching argument? Leave a comment!
Check back on September 12 for How to Design Text-Based Questions (and Teach Students to Answer Them!)
Previous Post: Why Computer-Based Scoring for the Common Core Makes Me Uneasy | http://www.eyeoneducation.com/Blog/articleType/ArticleView/articleId/1939/How-to-Shift-from-Teaching-Persuasion-to-Teaching-Argument | 13 |
14 | Conservation of Momentum
In applications of momentum, the most important law to know is the Conservation of Momentum. Momentum is said to be conserved when there is zero net external force acting on the system.
To explain this mathematically:
Conservation of Momentum is mathematically expressed as:
- Definition of momentum: p = mv
- Newton's 2nd law: F = ma = m dv/dt = dp/dt
- Net external force is zero: F = 0 => dp/dt = 0
- dp/dt = 0 => Δp = 0
mvi = mvf
When extended to an n-particle system, the momenta of the individual particles add:
m1v1i + m2v2i + ... + mnvni = m1v1f + m2v2f + ... + mnvnf
Now let's take a look at some things to verify what we covered in previous sections.
In Aiming, a statement was made that the angle formed when the cue ball collides with an object ball initially at rest is always 90 degrees. This holds if we assume there is no friction and no spin on the balls, so that no kinetic energy is dissipated. This is why in real life (with friction and spin) the angle after the collision is never exactly 90 degrees, only close to it. To verify why it is theoretically 90 degrees, we must first consider the conservation of momentum and the conservation of kinetic energy:
All balls are equal in mass and the object ball is initially at rest, v2i = 0
Conservation of momentum gives us:
mv1i = mv1f + mv2f
v1i = v1f + v2f
This is a vector equation and geometrically means that three vectors form a triangle.
Conservation of kinetic energy gives us:
(1/2)mv1i^2 = (1/2)mv1f^2 + (1/2)mv2f^2
v1i^2 = v1f^2 + v2f^2
As you can see, the speeds satisfy a Pythagorean relation: the vector velocity v1i is the hypotenuse and the two final velocities, v1f and v2f, are the legs. Hence there is a right angle between the two final velocity vectors.
In Elasticity, we learned that the coefficient of restitution is e = -(v1f - v2f)/(v1i - v2i), the ratio of the relative speed of separation to the relative speed of approach.
To verify this, we begin again by writing the conservation of kinetic energy:
(1/2)mv1i^2 + (1/2)Mv2i^2 = (1/2)mv1f^2 + (1/2)Mv2f^2
After rearrangement of terms:
mv1i^2 - mv1f^2 = Mv2f^2 - Mv2i^2
Eq1: m(v1i - v1f)(v1i + v1f) = M(v2f - v2i)(v2f + v2i)
To simplify the equation any further, we must look at two-object conservation of momentum:
mv1i + Mv2i = mv1f + Mv2f
Eq2: m(v1i - v1f) = M(v2f - v2i)
By inspection, we notice that part of Eq1 may be simplified using Eq2 (dividing Eq1 by Eq2 side by side), and we obtain:
v1i + v1f = v2i + v2f OR
v1i - v2i = -(v1f - v2f)
Eq3: e = -(v1f - v2f)/(v1i - v2i) = 1
We notice from Eq3 that the coefficient of restitution for a perfectly elastic collision is e = 1. This is true because in elastic collisions both kinetic energy and linear momentum are conserved.
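The same conclusion can be checked numerically for unequal masses. In the sketch below the masses and initial speeds are arbitrary test values; the final velocities come from the standard one-dimensional elastic-collision formulas, and the computed coefficient of restitution comes out as 1.

' Check that a one-dimensional elastic collision gives e = 1 even for
' unequal masses.  The masses and initial speeds are arbitrary test values.
Module RestitutionCheck
    Sub Main()
        Dim m As Double = 0.17    ' kg (assumed)
        Dim Mbig As Double = 0.31 ' kg (assumed)
        Dim v1i As Double = 3.0   ' m/s (assumed)
        Dim v2i As Double = -1.0  ' m/s (assumed)

        ' Standard 1-D elastic-collision results (from momentum + kinetic energy):
        Dim v1f As Double = ((m - Mbig) * v1i + 2 * Mbig * v2i) / (m + Mbig)
        Dim v2f As Double = ((Mbig - m) * v2i + 2 * m * v1i) / (m + Mbig)

        Dim e As Double = -(v1f - v2f) / (v1i - v2i)
        Console.WriteLine("Coefficient of restitution e = " & e) ' prints 1
    End Sub
End Module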
A cue ball moving at 10 m/s collides elastically with a red ball and a blue ball which are both initially at rest. If the cue ball comes to rest and the blue ball begins to move at 2 m/s after the collision, what is the speed of the red ball? | http://library.thinkquest.org/C006300/noflash/data/six4.htm | 13
20 | The Wide-field Infrared Survey Explorer (WISE) is a NASA-funded Explorer mission that will provide a vast storehouse of knowledge about the solar system, the Milky Way, and the Universe. Among the objects WISE will study are asteroids, the coolest and dimmest stars, and the most luminous galaxies.
WISE is an unmanned satellite carrying an infrared-sensitive telescope that will image the entire sky. Since objects around room temperature emit infrared radiation, the WISE telescope and detectors are kept very cold (below -430° F /15 Kelvins, which is only 15° Centigrade above absolute zero) by a cryostat -- like an ice chest but filled with solid hydrogen instead of ice.
Solar panels provide WISE with the electricity it needs to operate, and always point toward the Sun. Orbiting several hundred miles above the dividing line between night and day on Earth, the telescope looks out at right angles to the Sun and always points away from Earth. As WISE orbits from the North Pole to the equator to the South Pole and then back up to the North Pole, the telescope sweeps out a circle in the sky. As the Earth moves around the Sun, this circle will move around the sky, and after six months WISE will have observed the whole sky.
As WISE sweeps along the circle a small mirror scans in the opposite direction, capturing an image of the sky onto an infrared sensitive digital camera which will take a picture every 11 seconds. Each picture covers an area of the sky 3 times larger than the full moon. After 6 months WISE will have taken nearly 1,500,000 pictures covering the entire sky. Each picture has one megapixel at each of four different wavelengths that range from 5 to 35 times longer than the longest waves the human eye can see. Data taken by WISE is downloaded by radio transmission 4 times per day to computers on the ground which combine the many images taken by WISE into an atlas covering the entire celestial sphere and a list of all the detected objects.
Total Ozone Mapping Spectrometer (TOMS)
Launched in 1996, the Total Ozone Mapping Spectrometer (TOMS) satellite was expected to map and understand the magnitude of polar ozone depletion for two years. More than ten years later, it was still in orbit and providing valuable scientific data. Its life was extended thanks to the collaboration with SOI, whose students became lead controllers of the NASA spacecraft in 2002, first from Goddard Space Flight Center and then from a control center on campus.
Using students for TOMS mission support reduced NASA’s operational cost from millions of dollars a year to a few hundred thousand dollars, making the extended mission operations possible. The partnership also gave the students an important hands-on learning experience. The Capitol College team demonstrated that a small contingent of engineering students could perform a number of complex technical tasks well with limited subject-matter expert supervision.
In December 2006, TOMS had a catastrophic failure of the transmitter in its second transponder, resulting in the total loss of all data downlink capability and the termination of the mission. TOMS delivered some of the most critical and influential environmental data ever recorded, documenting the long-term decline of global atmospheric ozone and the emergence and development of the Antarctic ozone hole. It allowed the world to view and understand the ozone in a new way, helping to shape international environmental perspectives and policy.
Today, the work done by the TOMS program has been taken over by the Ozone Monitoring Instrument (OMI) aboard the Aura satellite. The Space Operations Institute TOMS Operations Team was recognized as a recipient of the prestigious William T. Pecora Award for developing innovative techniques for providing mission support and science data capture.
ERBS and UARS
The SOI took over operations for the Earth Radiation Budget Satellite (ERBS) and the Upper Atmosphere Research Satellite (UARS) upon receiving the Basic Grant from NASA in 2002. Both satellite operations were decommissioned in 2006, the process for which was accomplished without incident.
ERBS was part of the NASA's three-satellite Earth Radiation Budget Experiment (ERBE), designed to investigate how energy from the sun is absorbed and re-emitted by the Earth. This process of absorption and re-radiation was one of the principal drivers of the Earth's weather patterns. Observations from ERBS were also used to determine the effects of human activities (such as burning fossil fuels) and natural occurrences (such as volcanic eruptions) on the Earth's radiation balance. The other instruments of the ERBE were flown on NOAA 9 and 10.
The UARS satellite was launched in 1991 by the Space Shuttle Discovery to measure ozone and chemical compounds found in the ozone layer which affect ozone chemistry and processes. UARS also measures winds and temperatures in the stratosphere as well as the energy input from the sun. Together, these help define the role of the upper atmosphere in climate and climate variability. The satellite is 35 feet long, 15 feet in diameter, weighs 13,000 pounds, and carries 10 instruments. UARS orbits at an altitude of 375 miles with an orbital inclination of 57 degrees. Designed to operate for three years, six of its ten instruments are still functioning. | http://www.capitol-college.edu/prospective-students/undergraduate/space-operations-institute/missions/completed-missions | 13 |
21 | Geometry's origins go back to approximately 3,000 BC in ancient Egypt. Ancient Egyptians used an early stage of geometry in several ways, including the surveying of land, construction of pyramids, and astronomy. Around 2,900 BC, ancient Egyptians began using their knowledge to construct pyramids with four triangular faces and a square base.
The next great advancement in geometry came from Euclid in 300 BC when he wrote a text titled 'Elements.' In this text, Euclid presented an ideal axiomatic form (now known as Euclidean geometry) in which propositions could be proven through a small set of statements that are accepted as true. In fact, Euclid was able to derive a great portion of planar geometry from just the first five postulates in 'Elements.' These postulates are listed below:
(1) A straight line segment can be drawn joining any two points.
(2) Any straight line segment can be extended indefinitely in a straight line.
(3) Given any straight line segment, a circle can be drawn having the segment as radius and one endpoint as center.
(4) All right angles are congruent.
(5) If two lines are drawn which intersect a third line in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines inevitably must intersect each other on that side if extended infinitely.
Euclid's fifth postulate is also known as the parallel postulate.
The next tremendous advancement in the field of geometry occurred in the 17th century when René Descartes discovered coordinate geometry. Coordinates and equations could be used in this type of geometry in order to illustrate proofs. The creation of coordinate geometry opened the doors to the development of calculus and physics.
In the 19th century, Carl Friedrich Gauss, Nikolai Lobachevsky, and János Bolyai formally discovered non-Euclidean geometry. In this kind of geometry, four of Euclid's first five postulates remained consistent, but the idea that parallel lines do not meet did not stay true. This idea is a driving force behind elliptical geometry and hyperbolic geometry. | http://www.wyzant.com/help/math/geometry/introduction/history_of_geometry | 13 |
34 | Experiment of the Month
James Stoltzfus needed to make up a laboratory that he had missed. The experiment was to determine if energy and momentum were conserved in a collision between two identical steel balls. Rather than repeat the experiment that had already been done, he did an experiment which extended the previous results.
The balls were the size of small marbles. One balanced on a tee, while the other rolled down a ramp, flew off the end, and collided with the teed ball. The picture shows the "shooter" ball about to leave the ramp and strike the "target" ball. After the collision, the balls fly away from the collision point and fall to the floor. In the experiment they landed in the shaded area of the picture.
The collision point is represented by a dot on the floor directly under the tee. This dot is marked by placing carbon paper, facing down, on top of a large piece of newsprint, at the points where the two balls are expected to land.
The balls always take the same amount of time to reach the floor, so the displacement of the landing point from the collision point is a measure of the horizontal component of velocity of the ball. We neglect wind resistance, and the velocities during the collision are all horizontal. As a result, the displacement (collision point to landing point) is proportional to the velocity (and momentum) just after the collision.
For reference, the shooter ball is also rolled with no target ball present. Its landing point is a measure of the momentum of the system just before the collision. A ball dropped directly from the tee can mark the point below the tee.
Mr. Stoltzfus varied the angle of collision by moving the tee sideways so that the collision varied from nearly head-on to just "kissing." With a protractor he measured and recorded the angle made by the line from the tee to its pivot just below the shooter ball in the photo above. Each pair of ball dots was identified by writing the angle that caused them on the newsprint. Mr. Stoltzfus did two collisions for each angle. His results are shown below.
In the photo on the left above, one of the data points (for tee angle of 258 degrees) has been copied, magnified, and inserted next to the actual data points. The numbers refer to the angle that Mr. Stoltzfus read on the protractor. The photo on the right above has had shaded circles added to all the landing points, to clearly identify them. The non-colliding dots are the farthest right in the photos, and the collision point is the furthest left.
The intriguing circular pattern has another feature: Each collision produces a pair of dots; one from each ball. If each pair is joined to its mate by a line, a pattern results which is shown at the right.
The lines form diameters of the circle, crossing at its center. It turns out that such a pattern is predicted for the special case in which momentum and energy are both conserved in the collision. The analysis was sketched by Dr. Nolan, who is teaching the Physics 231 course this year, and who assigned this laboratory.
To understand the analysis, we consider the vector displacements of a pair of landing points, from the collision point. We will represent these displacements by the letter "p" to emphasize that they serve as a measure of the momentums after collision of the two balls.
Similarly, the displacement from the collision point to the landing point of the un-collided "shooter" ball is represented by "p0" because it serves as a measure of the momentum of the system before the collision. An example from the collision labelled 258 is shown at the right.
The vector analysis of the problem indicates two salient points. First, the momentums of the two balls are perpendicular to each other. In the notation of the picture, p1 is perpendicular to p2. Second, the circle has a radius equal to half of p0.
To begin the analysis, we consider two vectors, p1 and half of p0, as shown in the figure at the right. Each point appears to fall on a circle of radius R, and R is evidently also of the same magnitude as half of p0. There is also a vector R, from the center of the circle to a particular landing point, p1. The vector relation shown is written algebraically as
p1 = (1/2)p0 + R
provided that the points really do fall on a circle.
Our algebraic aim is to show that the magnitude of R is 1/2 p0 for every landing point in the data. It turns out that this will be true if both energy and momentum are conserved during the collision.
Conservation of momentum tells us that
p0 = p1 + p2
If we neglect any vertical motion during the collision, then energy conservation (the kinetic energy of each equal-mass ball being p^2/2m) tells us that
p0^2 = p1^2 + p2^2
We combine these two by replacing p0 with the vector sum of p1 and p2, and calculating the square of that new vector:
p0^2 = (p1 + p2)·(p1 + p2) = p1^2 + p2^2 + 2 p1·p2
This can be true only if
p1·p2 = 0
that is, if the two scattered momentums are perpendicular to each other. Returning to our calculation for the magnitude of the radius, we have
R^2 = (p1 - (1/2)p0)·(p1 - (1/2)p0) = p1^2 - p1·p0 + (1/4)p0^2 = p1^2 - p1·(p1 + p2) + (1/4)p0^2 = (1/4)p0^2, so that R = (1/2)p0.
That is, for any landing point p1, the distance to the point 1/2 p0 is always the same. The pattern is a circle provided that energy and momentum are conserved. | http://www.millersville.edu/physics/experiments/078/index.php | 13 |
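The circular pattern is easy to reproduce in a short simulation. The sketch below is illustrative only (the collision angles and the value used for p0 are not Mr. Stoltzfus's data): for each angle of the line of centres it computes the pair of landing displacements for an ideal equal-mass elastic collision and checks that every point lies a distance p0/2 from the point 1/2 p0.

' Simulated landing displacements for an ideal equal-mass elastic collision,
' checking that every point lies on a circle of radius p0/2 centred at p0/2.
Module CircleCheck
    Sub Main()
        Dim p0 As Double = 1.0    ' displacement of the un-collided shooter (arbitrary units)
        For angleDeg As Integer = 10 To 80 Step 10
            Dim phi As Double = angleDeg * Math.PI / 180.0

            ' Target ball: along the line of centres, magnitude p0*Cos(phi).
            Dim p2x As Double = p0 * Math.Cos(phi) * Math.Cos(phi)
            Dim p2y As Double = p0 * Math.Cos(phi) * Math.Sin(phi)
            ' Shooter ball: whatever is left of the original momentum.
            Dim p1x As Double = p0 - p2x
            Dim p1y As Double = -p2y

            ' Distance of each landing point from the centre (p0/2, 0).
            Dim r1 As Double = Math.Sqrt((p1x - p0 / 2) ^ 2 + p1y ^ 2)
            Dim r2 As Double = Math.Sqrt((p2x - p0 / 2) ^ 2 + p2y ^ 2)
            Console.WriteLine(angleDeg & " deg:  r1 = " & r1 & "   r2 = " & r2) ' both 0.5
        Next
    End Sub
End Module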
19 | Secant right-hand side
In geometry, the relative position of two lines, or of a line and a curve, can be described by the adjective secant. The word comes from the Latin secare, which means "to cut". In mathematical terms, a line is secant to another line, or more generally to a curve, when it has a non-empty intersection with it.
To study a curve in the neighbourhood of one of its points P, it is useful to consider the secants through P, i.e. the lines passing through P and another point Q of the curve. The tangent to the curve at the point P is defined from these secants: it is the limiting position, when it exists, of the secant through P as the second point Q approaches P along the curve.
So when Q is sufficiently close to P, the secant can be regarded as an approximation of the tangent.
In the particular case of the curve representing a numerical function y = f(x), the slope of the tangent is the limit of the slopes of the secants, which gives a geometric interpretation of the differentiability of a function.
Link between the secant function and the secant line
Consider a real number θ. Draw the secant line to the unit circle (centred at the origin) that passes through the origin and the point (cos θ, sin θ), the point of the circle whose position vector forms an angle θ with the direction vector of the x-axis. The absolute value of the trigonometric secant of θ is equal to the length of the segment of this secant line running from the origin to the line of equation x = 1. If that segment passes through the point (cos θ, sin θ), the trigonometric secant of θ is positive; if it passes through the antipodal point, the secant of θ is negative.
Approximation by a secant
Consider the curve of equation y = f(x) in a Cartesian coordinate system, a point P with coordinates (c, f(c)), and another point Q with coordinates (c + Δx, f(c + Δx)). Then the slope m of the secant line passing through P and Q is given by:
m = (f(c + Δx) - f(c)) / Δx
The right-hand side of the preceding equation is the Newton difference quotient at c (also called the rate of change). When Δx approaches zero, this quotient approaches the derivative f'(c), assuming that the derivative exists.
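As a numerical illustration of this limiting process (the function f(x) = x^2 and the point c = 2 are chosen arbitrarily here), the slopes of secants through P can be computed for smaller and smaller Δx and compared with the derivative f'(2) = 4:

' Secant slopes (Newton difference quotients) for f(x) = x^2 at c = 2,
' approaching the derivative f'(2) = 4 as dx shrinks.
Module SecantSlopes
    Function F(ByVal x As Double) As Double
        Return x * x
    End Function

    Sub Main()
        Dim c As Double = 2.0
        Dim dx As Double = 1.0
        For i As Integer = 1 To 6
            Dim slope As Double = (F(c + dx) - F(c)) / dx
            Console.WriteLine("dx = " & dx & "   secant slope = " & slope)
            dx = dx / 10
        Next
        ' Slopes: 5, 4.1, 4.01, 4.001, ... -> 4
    End Sub
End Module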
- Infinitesimal calculus
| http://www.speedylook.com/Secant_right-hand_side.html | 13
15 | We know that there are at least 75 planets outside our own solar system, orbiting their distant stars. The rate of planet discovery has sped up recently, and many more planets will likely be discovered in the weeks and years to come.
And yet, we have never seen any of these planets with our own eyes. Planets do not glow like a star - they only reflect light. That makes them a lot harder to see from far away. Any light reflecting off a planet also tends to be overwhelmed by the brightness of the host star.
So how do we know the planets are really there if we can’t see them? Several different techniques have been developed, and they all rely on one thing – how planets affect the stars they orbit.
The radial velocity technique has been the most successful detection method so far. This technique looks at how stars are affected by the gravity of an orbiting planet. Over the course of an orbit, the planet will pull at the star from different sides. Scientists measure the Doppler shift of the starlight to tell when the star is moving slightly away from us or toward us.
"As a star moves away from us, the starlight is Doppler-stretched to longer wavelengths, shifting the starlight toward the red end of the spectrum," explains Paul Butler, an astronomer with the Carnegie Institution of Washington and NAI member. "When the star moves toward us, the starlight is scrunched toward shorter wavelengths, shifting the starlight toward the blue. The Doppler shift that a planet imposes on a star is tiny. The ‘color’ change is imperceptible to the human eye."
Butler and his team have found many planets using the radial velocity, or "precision Doppler," technique. The planets detected by this technique have all been massive - the largest about 15 times more massive than Jupiter, the smallest about the same mass as Saturn. Although the planet’s mass affects the amount of tugging on a star, the radial velocity technique only indicates the minimum mass of the orbiting planet. For a more precise determination of a planet’s mass, Butler combines radial velocity observations with readings from another technique called transit photometry.
Transit photometry measures the apparent change in a star's brightness when a planet passes in front of it. The planet blocks some of the starlight reaching us, making the star seem slightly dimmer. This loss of light magnitude depends on the size of the planet.
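As a rough sketch of that size dependence (my own illustration; the radii are standard reference values, and a uniformly bright star is assumed), the fractional dip in brightness is approximately the ratio of the planet's disk area to the star's disk area:

R_SUN_KM = 696_000.0       # solar radius
R_JUPITER_KM = 71_492.0    # Jupiter radius
R_EARTH_KM = 6_371.0       # Earth radius

def transit_depth(r_planet_km, r_star_km=R_SUN_KM):
    # Fraction of starlight blocked while the planet crosses the stellar disk
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter-size planet: {transit_depth(R_JUPITER_KM):.4%}")   # roughly 1% dip
print(f"Earth-size planet:   {transit_depth(R_EARTH_KM):.4%}")     # roughly 0.008% dip

The hundred-fold difference between the two dips is one reason Earth-sized planets are so much harder to detect this way.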
In order for transit photometry to work, we must view the planetary system right at the orbital plane. If we’re watching from either too far above or too far below the planet’s orbit, the planet won’t pass in front of our view and we won’t witness any apparent dimming of the star.
Butler says transit photometry has provided his team with independent confirmation for one planet, HD 209458, and has allowed for the direct determination of the physical size and mass of the planet.
"We continue to work on finding transit planet candidates from our Doppler velocity measurements, primarily planets that orbit within 0.2 AU – 20 percent of the Earth to Sun distance -- of the host star," says Butler. "Transit measurements combined with Doppler velocity measurements also yield the orbital inclination and the true mass of the planet, as well as the physical size and bulk density of the planet."
Another limitation of the radial velocity technique is that the Doppler shift can’t be accurately measured for all stars, because many of them aren’t moving directly toward or away from us. Still, Butler says that this "orbital inclination" limitation doesn’t prevent the radial velocity method from detecting the movement of most stars.
"While the Doppler technique becomes less sensitive to planetary systems as the orbital plane becomes ‘face on’ relative to our line of sight, this is a minor effect," says Butler. "Very, very few planetary systems will be so ‘face on’ as to render them undetectable."
To overcome this limitation, Butler hopes to eventually combine his Doppler velocity measurements with observations from astrometry. Butler says that astrometric instruments are not currently capable of detecting extrasolar planets, but they should achieve sufficient sensitivity to detect planets within the next few years.
Like radial velocity, astrometry looks at how a planet’s gravity tugs on its star. But instead of measuring the Doppler shift of the starlight, astrometry measures the star’s position relative to distant background stars. As a planet completes an orbit around a star, the star appears to move back and forth in the sky. An astrometric instrument (such as an interferometer) can measure this change of position, which can then tell us something about the planet’s mass and orbital distance.
"Over the next 10 years, all we are going to ask of the inferometric astrometry technique is to solve for the orbital inclination of known planetary systems," says Butler.
Another type of extrasolar planet detection is gravitational microlensing. This technique uses foreground stars as a sort of magnifying glass to help detect distant stars and their planets. When a star that is closer to us passes in front of a more distant star, its gravity bends and amplifies the light from the distant star. This results in an apparent increase of light from the distant star.
Any planets that orbit the more distant star will perturb the gravitational lens, creating a brief variation in the amplified starlight. The duration of this change depends on the mass of the planet and the distance between the planet and its star, as well as the star velocity perpendicular to our line of sight.
However, Butler doesn’t think the gravitational microlensing technique is a very practical means for locating extrasolar planets.
"Gravitational microlensing is frankly not of much value," says Butler. "Microlensing events are notoriously complicated, involving complex theoretical calculations and interpretations of the data. The data can seldom be cleanly interpreted as the signature of a planet. Typically the host star can not even be seen in a microlensing effect, so we don't even know anything about the star, let alone the orbiting planet."
Gravitational microlensing events occur relatively quickly and do not reoccur, so it is almost impossible to confirm the data. In addition, says Butler, the other planetary detection techniques cannot confirm any of the planets discovered by gravitational microlensing.
"Such detections can not be followed up by any technique that we can imagine over the next hundred years," says Butler. "This is because microlensing can only detect planets that orbit stars many thousands of light-years away, while astrometry and direct imaging can only work on the nearest stars out to about 100 light years."
Still, according to William Borucki, a research scientist in the Planetary Studies Branch of NASA Ames and NAI member, having so many different methods of extrasolar planet detection is a good thing. Where one method has a drawback, another method can provide information to fill in the gap.
"You wouldn’t want to have just one way of doing things," says Borucki. "For instance, you wouldn’t want to draw with just chalk. If someone told you that you couldn’t have a pencil or pen or typewriter or anything except chalk, you wouldn’t be very happy. Each detection method has its own strengths and weaknesses. They compliment each other very nicely."
Borucki is working on developing Kepler, a space telescope dedicated to transit photometry. While Kepler has not yet been approved by NASA, the telescope is designed to search for Earth-like extrasolar planets in the habitable zone of their stars. Some data suggest that such terrestrial planets may be out there, but currently these worlds are just below the limits of our detection.
"Kepler’s goal is to find out if planets with Earth-sized masses are rare or common," says Borucki. "There could be oodles of life out there, it could be that most stars have such planets. But then we have to ask ourselves, why haven’t they come to talk to us?"
"Personally, I think it is probable that there are a lot of Earths out there," Borucki continues. "But just imagine what a tremendous discovery it would be to find that there are no other Earth-like planets out there – no other planets capable of sustaining life. It would change the way we thought of ourselves, and the Universe, forever."
Butler says he is working to answer two questions: first, what fraction of nearby stars have planets? Also, what fraction of planetary systems are similar to the Solar System?
"The discovery of Solar System-like planets such as Jupiter and Saturn will require 10 to 30 years, roughly the orbital periods of Jupiter and Saturn," says Butler. "We thus expect to have the first statistically relevant answers to these questions around the end of this decade."
Detecting extrasolar planets takes time. In order to prove it is a planet that is affecting the star, information must be recorded for at least one full planetary orbit. Orbital times vary – the Earth takes one year to complete an orbit, while Jupiter takes about 12 years - so it can take many years to collect the necessary data.
Butler says that in the short term, the most important thing we can do is improve the precision of our Doppler technique to find smaller and more distant planets.
The next major breakthrough, says Butler, will be interferometric astrometry detections, which he says will begin in earnest in about 10 to 15 years. In addition, space based photometric transit telescopes like Kepler may also find Earth-like planets by the end of this decade.
Furthest away in the future are projects where we will actually get to see the planets themselves, says Butler.
"Direct imaging techniques like the Terrestrial Planet Finder are probably about 20 years or more away," he states. "Such techniques ultimately offer the opportunity to take direct spectra of extrasolar planets and thus directly search for signs of life."
History of planet detection
Explanation of Doppler technique with animations
Space Interferometry Mission (SIM) | http://nai.arc.nasa.gov/news_stories/news_print.cfm?ID=107 | 13
24 | - "Radioactive" and "Radioactivity" redirect here.
Radioactive decay is the process by which an excited, unstable atomic nucleus loses energy by emitting radiation in the form of particles or electromagnetic waves, thereby transitioning toward a more stable state.
The atomic nucleus comprises certain combinations of protons and neutrons held in a stable configuration through a precise balance of powerful forces: The strong force holding the protons and neutrons together is powerful but very short range; the electrostatic repulsion of the positively charged protons is less powerful but long range; the weak force makes the neutron inherently unstable and will turn it into a proton if given the chance. This balance is very delicate: a uranium-238 nucleus has a half-life of 4.5 billion years while uranium-237 with just one less neutron has a half-life of 1.3 minutes.
If there is an imbalance in these forces, the system will eventually shed the excess by ejecting radiation in some combination of particles and wave energy. The most common radioactive decays occur in response to one of three possible types of imbalance. If the nucleus has too many neutrons, one of its neutrons decays (through beta decay) into a proton plus two fragments ejected from the nucleus, an antineutrino and an electron (called a beta particle). If the nucleus has too many protons, it undergoes alpha decay by ejecting two protons and two neutrons as an alpha particle. If the nucleus is excited (has too much energy), it ejects a gamma ray.
Materials exhibiting radioactive decay have yielded widespread application to enhance human welfare. The various applications take advantage of the different decay properties, different decay products, and different chemical properties of the many elements having some isotopes that are radioactive. Major types of applications use the radiation either for diagnosing a problem or for treating a problem by killing specific harmful cells. Areas of application include human and veterinary medicine, nutrition research, basic research in genetics and metabolism, household smoke detectors, industrial and mining inspection of welds, security inspection of cargo, tracing and analyzing pollutants in studies of runoff, and dating materials in geology, paleontology, and archaeology.
Radioactive decay results in an atom of one type, called the parent nuclide, being transformed to an atom of a different type, called the daughter nuclide. For example, a carbon-14 atom (the "parent") emits radiation and transforms to a nitrogen-14 atom (the "daughter"). This transformation involves quantum probability, so it is impossible to predict when a particular atom will decay. Given a large number of atoms, however, the decay rate is predictable and measured by the "half-life"—the time it takes for 50 percent of the atoms to undergo the change. The half-life of radioactive atoms varies enormously; from fractions of a millisecond to billions of years.
The SI unit of radioactive decay (the phenomenon of natural and artificial radioactivity) is the becquerel (Bq). One Bq is defined as one transformation (or decay) per second. Since any reasonably-sized sample of radioactive material contains many atoms, a Bq is a tiny measure of activity; amounts on the order of TBq (terabecquerel) or GBq (gigabecquerel) are commonly used. Another unit of (radio)activity is the curie, Ci, which was originally defined as the activity of one gram of pure radium, isotope Ra-226. At present, it is equal (by definition) to the activity of any radionuclide decaying with a disintegration rate of 3.7 × 10^10 Bq. The use of Ci is presently discouraged by SI.
The neutrons and protons that constitute nuclei, as well as other particles that may approach them, are governed by several interactions. The strong nuclear force, not observed at the familiar macroscopic scale, is the most powerful force over subatomic distances. The electrostatic force is also significant, while the weak nuclear force is responsible for Beta decay.
The interplay of these forces is simple. Some configurations of the particles in a nucleus have the property that, should they shift ever so slightly, the particles could fall into a lower-energy arrangement (with the extra energy moving elsewhere). One might draw an analogy with a snowfield on a mountain: While friction between the snow crystals can support the snow's weight, the system is inherently unstable with regards to a lower-potential-energy state, and a disturbance may facilitate the path to a greater entropy state (that is, towards the ground state where heat will be produced, and thus total energy is distributed over a larger number of quantum states). Thus, an avalanche results. The total energy does not change in this process, but because of entropy effects, avalanches only happen in one direction, and the end of this direction, which is dictated by the largest number of chance-mediated ways to distribute available energy, is what we commonly refer to as the "ground state."
Such a collapse (a decay event) requires a specific activation energy. In the case of a snow avalanche, this energy classically comes as a disturbance from outside the system, although such disturbances can be arbitrarily small. In the case of an excited atomic nucleus, the arbitrarily small disturbance comes from quantum vacuum fluctuations. A nucleus (or any excited system in quantum mechanics) is unstable, and can thus spontaneously stabilize to a less-excited system. This process is driven by entropy considerations: The energy does not change, but at the end of the process, the total energy is more diffused in spatial volume. The resulting transformation alters the structure of the nucleus. Such a reaction is thus a nuclear reaction, in contrast to chemical reactions, which also are driven by entropy, but which involve changes in the arrangement of the outer electrons of atoms, rather than their nuclei.
Some nuclear reactions do involve external sources of energy, in the form of collisions with outside particles. However, these are not considered decay. Rather, they are examples of induced nuclear reactions. Nuclear fission and fusion are common types of induced nuclear reactions.
Radioactivity was first discovered in 1896, by the French scientist Henri Becquerel while working on phosphorescent materials. These materials glow in the dark after exposure to light, and he thought that the glow produced in cathode ray tubes by X-rays might somehow be connected with phosphorescence. So, he tried wrapping a photographic plate in black paper and placing various phosphorescent minerals on it. All results were negative until he tried using uranium salts. The result with these compounds was a deep blackening of the plate.
However, it soon became clear that the blackening of the plate had nothing to do with phosphorescence because the plate blackened when the mineral was kept in the dark. Also, non-phosphorescent salts of uranium and even metallic uranium blackened the plate. Clearly there was some new form of radiation that could pass through paper that was causing the plate to blacken.
At first, it seemed that the new radiation was similar to the then recently discovered X-rays. However, further research by Becquerel, Marie Curie, Pierre Curie, Ernest Rutherford, and others discovered that radioactivity was significantly more complicated. Different types of decay can occur, but Rutherford was the first to realize that they all occur with the same mathematical, approximately exponential, formula.
As for types of radioactive radiation, it was found that an electric or magnetic field could split such emissions into three types of beams. For lack of better terms, the rays were given the alphabetic names alpha, beta, and gamma; names they still hold today. It was immediately obvious from the direction of electromagnetic forces that alpha rays carried a positive charge, beta rays carried a negative charge, and gamma rays were neutral. From the magnitude of deflection, it was also clear that alpha particles were much more massive than beta particles. Passing alpha rays through a thin glass membrane and trapping them in a discharge tube allowed researchers to study the emission spectrum of the resulting gas, and ultimately prove that alpha particles are in fact helium nuclei. Other experiments showed the similarity between beta radiation and cathode rays; they are both streams of electrons, and between gamma radiation and X-rays, which are both high energy electromagnetic radiation.
Although alpha, beta, and gamma are most common, other types of decay were eventually discovered. Shortly after discovery of the neutron in 1932, it was discovered by Enrico Fermi that certain rare decay reactions give rise to neutrons as a decay particle. Isolated proton emission was also eventually observed in some elements. Shortly after the discovery of the positron in cosmic ray products, it was realized that the same process that operates in classical beta decay can also produce positrons (positron emission), analogously to negative electrons. Each of the two types of beta decay acts to move a nucleus toward a ratio of neutrons and protons which has the least energy for the combination. Finally, in a phenomenon called cluster decay, specific combinations of neutrons and protons other than alpha particles were found to occasionally spontaneously be emitted from atoms.
Still other types of radioactive decay were found which emit previously seen particles, but by different mechanisms. An example is internal conversion, which results in electron and sometimes high energy photon emission, even though it involves neither beta nor gamma decay.
The early researchers also discovered that many other chemical elements besides uranium have radioactive isotopes. A systematic search for the total radioactivity in uranium ores also guided Marie Curie to isolate a new element, polonium, and to separate a new element, radium, from barium; the two elements' chemical similarity would otherwise have made them difficult to distinguish.
The dangers of radioactivity and of radiation were not immediately recognized. Acute effects of radiation were first observed in the use of X-rays when the Serbo-Croatian-American electric engineer, Nikola Tesla, intentionally subjected his fingers to X-rays in 1896. He published his observations concerning the burns that developed, though he attributed them to ozone rather than to the X-rays. Fortunately, his injuries healed later.
The genetic effects of radiation, including the effects on cancer risk, were recognized much later. It was only in 1927 that Hermann Joseph Muller published his research that showed the genetic effects. In 1946, he was awarded the Nobel prize for his findings.
Before the biological effects of radiation were known, many physicians and corporations had begun marketing radioactive substances as patent medicine, much of which was harmful to health and gave rise to the term radioactive quackery; particularly alarming examples were radium enema treatments, and radium-containing waters to be drunk as tonics. Marie Curie spoke out against this sort of treatment, warning that the effects of radiation on the human body were not well understood (Curie later died from aplastic anemia, assumed due to her own work with radium, but later examination of her bones showed that she had been a careful laboratory worker and had a low burden of radium; a better candidate for her disease was her long exposure to unshielded X-ray tubes while a volunteer medical worker in World War I). By the 1930s, after a number of cases of bone-necrosis and death in enthusiasts, radium-containing medical products had nearly vanished from the market.
Modes of decay
Radionuclides can undergo a number of different reactions. These are summarized in the following table. A nucleus with atomic weight A and a positive charge Z (called atomic number) is represented as (A, Z).
|Mode of decay||Participating particles||Daughter nucleus|
|Decays with emission of nucleons:|
|Alpha decay||An alpha particle (A=4, Z=2) emitted from nucleus||(A-4, Z-2)|
|Proton emission||A proton ejected from nucleus||(A-1, Z-1)|
|Neutron emission||A neutron ejected from nucleus||(A-1, Z)|
|Double proton emission||Two protons ejected from nucleus simultaneously||(A-2, Z-2)|
|Spontaneous fission||Nucleus disintegrates into two or more smaller nuclei and other particles||-|
|Cluster decay||Nucleus emits a specific type of smaller nucleus (A1, Z1) larger than an alpha particle||(A-A1, Z-Z1) + (A1,Z1)|
|Different modes of beta decay:|
|Beta-Negative decay||A nucleus emits an electron and an antineutrino||(A, Z+1)|
|Positron emission, also Beta-Positive decay||A nucleus emits a positron and a neutrino||(A, Z-1)|
|Electron capture||A nucleus captures an orbiting electron and emits a neutrino - The daughter nucleus is left in an excited and unstable state||(A, Z-1)|
|Double beta decay||A nucleus emits two electrons and two antineutrinos||(A, Z+2)|
|Double electron capture||A nucleus absorbs two orbital electrons and emits two neutrinos - The daughter nucleus is left in an excited and unstable state||(A, Z-2)|
|Electron capture with positron emission||A nucleus absorbs one orbital electron, emits one positron and two neutrinos||(A, Z-2)|
|Double positron emission||A nucleus emits two positrons and two neutrinos||(A, Z-2)|
|Transitions between states of the same nucleus:|
|Gamma decay||Excited nucleus releases a high-energy photon (gamma ray)||(A, Z)|
|Internal conversion||Excited nucleus transfers energy to an orbital electron and it is ejected from the atom||(A, Z)|
Radioactive decay results in a reduction of summed rest mass, which is converted to energy (the disintegration energy) according to the formula E = mc². This energy is released as kinetic energy of the emitted particles. The energy remains associated with the invariant mass of the decay system, inasmuch as the kinetic energy of the emitted particles also contributes to the total invariant mass of the system. Thus, the sum of the rest masses of the particles is not conserved in decay, but the system's invariant mass (and its total energy) is conserved.
In a simple, one-step radioactive decay, the new nucleus that emerges is stable. C-14 undergoing beta decay to N-14 and K-40 undergoing electron capture to Ar-40 are examples.
On the other hand, the daughter nuclide of a decay event can be unstable, sometimes even more unstable than the parent. If this is the case, it will proceed to decay again. A sequence of several decay events, producing in the end a stable nuclide, is a decay chain. Ultrapure uranium, for instance, is hardly radioactive at all. After a few weeks, however, the unstable daughter nuclides, such as radium, accumulate, and it is their radioactivity that becomes noticeable.
Of the commonly occurring forms of radioactive decay, the only one that changes the number of aggregate protons and neutrons (nucleons) contained in the nucleus is alpha emission, which reduces it by four. Thus, the number of nucleons modulo 4 is preserved across any decay chain. This leads to the four radioactive decay series with atomic weights 4n+0, 4n+1, 4n+2, and 4n+3.
In an alpha decay, the atomic weight decreases by 4 and the atomic number decreases by 2. In a beta decay, the atomic weight stays the same and the atomic number increases by 1. In a gamma decay, both atomic weight and number stay the same. A branching path occurs when there are alternate routes to the same stable destination. One branch is usually highly favored over the other.
These are the four radioactive decay series.
Uranium-235 series (4n+3)
Thorium-232 series (4n+0)
Uranium-238 series (4n+2)
Neptunium-237 series (4n+1)
The members of this series are not presently found in nature because the half-life of the longest lived isotope in the series is short compared to the age of the earth.
According to the widely accepted Big Bang model, the universe began as a mixture of hydrogen-1 (75 percent) and helium-4 (25 percent) with only traces of other light atoms. All the other elements, including the radioactive ones, were generated later during the thermonuclear burning of stars—the fusion of the lighter elements into the heavier ones. The iron nucleus, which is the end result of regular thermonuclear burning of stars, has the least binding energy of any atomic nucleus and thus becomes available in whole or in parts as a source of building blocks for making the larger nuclei such as that of uranium under the intense energy conditions during the supernova detonation of a star. The uranium on earth is thought to have been created in a supernova about 6 billion years ago. Naturally, only the most long-lived of the radioactive atoms generated in this cataclysm have lasted to this day. Only thorium-232, uranium-238, and, barely, uranium-235 have the required longevity to serve as the parents for the three radioactive series.
A more recent, and ongoing, creation of radioactive isotopes found on earth is during interactions between stable isotopes and energetic particles. For example, carbon-14, a radioactive nuclide with a half-life of only 5730 years, is constantly being produced in Earth's upper atmosphere due to interactions between cosmic rays and nitrogen.
A large number of radioactive isotopes not found in nature are generated nowadays in the cores of atomic reactors for commercial use.
Radioactive materials and their decay products—alpha particles (2 protons plus 2 neutrons), beta particles (electrons or positrons), gamma radiation, and the daughter isotopes—have been put to the service of humanity in a great number of ways. At the same time, high doses of radiation from radioactive materials can be toxic unless they are applied with medical precision and control. Such exposures are unlikely except in cases such as a nuclear weapon detonation or an accident or attack at a nuclear facility.
In medicine, some radioactive isotopes, such as iron-59 and iodine-131, are usable directly in the body because the isotopes are chemically the same as stable iron and iodine respectively. Iron-59, steadily announcing its location by emitting beta-decay electrons, is readily incorporated into blood cells and thereby serves as an aid in studying iron deficiency, a nutritional deficiency affecting more than 2 billion people globally. Iron-59 is an important tool in the effort to understand the many factors affecting a person's ability to metabolize iron in the diet so that it becomes part of the blood. Iodine-131 administered in the blood to people suffering from hyperthyroidism or thyroid cancer concentrates in the thyroid, where gamma radiation emitted by the iodine-131 kills many of the thyroid cells. Hyperthyroidism in cats is treated effectively by one dose of iodine-131.
Radioactive isotopes whose chemical nature does not permit them to be readily incorporated into the body, are delivered to targeted areas by attaching them to a particular molecule that does tend to concentrate in a particular bodily location—just as iodine naturally concentrates in the thyroid gland. For studying activity in the brain, the radioactive isotope fluorine-18 is commonly attached to an analog of the sugar glucose which tends to concentrate in the active regions of the brain within a short time after the molecule is injected into the blood. Fluorine-18 decays by releasing a positron whose life is soon ended as it meets an electron and the two annihilate yielding gamma radiation that is readily detected by the Positron Emission Tomography (PET) technology. Similar techniques of radioisotopic labeling, have been used to track the passage of a variety of chemical substances through complex systems, especially living organisms.
Three gamma emitting radioisotopes are commonly used as a source of radiation. Technetium-99m, a metastable form with a half-life of 6 hours, emits a relatively low frequency gamma radiation that is readily detected. It has been widely used for imaging and functional studies of the brain, myocardium, thyroid, lungs, liver, gallbladder, kidneys, skeleton, blood, and tumors. Gamma radiation from cobalt-60 is used for sterilizing medical equipment, treating cancer, pasteurizing certain foods and spices, gauging the thickness of steel as it is being produced, and monitoring welds. Cesium-137 is used as a source of gamma radiation for treating cancer, measuring soil density at construction sites, monitoring the filling of packages of foods and pharmaceuticals, monitoring fluid flows in production plants, and studying rock layers in oil wells.
Americium-241, which decays by emitting alpha particles and low energy gamma radiation, is commonly used in smoke detectors, as the alpha particles ionize air in a chamber, permitting a small current to flow. Smoke particles entering the chamber absorb the alpha particles and ions, reducing the ionization current; this drop in current triggers the detector.
On the premise that radioactive decay is truly random (rather than merely chaotic), it has been used in hardware random-number generators. Because the process is not thought to vary significantly in mechanism over time, it is also a valuable tool in estimating the absolute ages of certain materials. For geological materials, the radioisotopes (parents) and certain of their decay products (daughters) become trapped when a rock solidifies, and can then later be used to estimate the date of the solidification (subject to such uncertainties as the possible number of daughter elements present at the time of solidification and the possible number of parent or daughter atoms added or removed over time).
For dating organic matter, radioactive carbon-14 is used because the atmosphere contains a small percentage of carbon-14 along with the predominance of stable carbons 12 and 13. Living plants incorporate the same ratio of carbon-14 to carbon-12 into their tissues and the animals eating the plants have a similar ratio in their tissues. After organisms die, their carbon-14 decays to nitrogen at a certain rate while the carbon-12 content remains constant. Thus, in principle, measuring the ratio of carbon-14 to carbon-12 in the dead organism provides an indication of how long the organism has been dead. This dating method is limited by the 5730 year half-life of carbon-14 to a maximum of 50,000 to 60,000 years. The accuracy of carbon dating has been called into question primarily because the concentration of carbon-14 in the atmosphere varies over time and some plants have the capacity to exclude carbon-14 from their intake.
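The age estimate described above follows directly from the decay law; the sketch below (my own illustration, with made-up measured fractions) inverts N/N0 = (1/2)^(t / half-life):

import math

HALF_LIFE_C14 = 5730.0                       # years

def age_from_c14_fraction(remaining_fraction):
    # Solve N/N0 = (1/2)^(t / half-life) for t
    return -HALF_LIFE_C14 * math.log(remaining_fraction) / math.log(2.0)

print(age_from_c14_fraction(0.5))    # one half-life: 5730 years
print(age_from_c14_fraction(0.25))   # two half-lives: 11460 years
print(age_from_c14_fraction(0.10))   # roughly 19,000 years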
Radioactive decay rates
The decay rate, or activity, of a radioactive substance is characterized by:
- half life—symbol t1/2—the time for half of a substance to decay.
- mean lifetime—symbol τ—the average lifetime of any given particle.
- decay constant—symbol λ—the inverse of the mean lifetime.
- (Note that although these are constants, they are associated with statistically random behavior of substances, and predictions using these constants are less accurate for a small number of atoms.)
- Total activity—symbol A—number of decays an object undergoes per second.
- Number of particles—symbol N—the total number of particles in the sample.
- Specific activity—symbol SA—number of decays per second per amount of substance. The "amount of substance" can be the unit of either mass or volume.
These are related as follows:

τ = 1/λ
t1/2 = ln(2)/λ = τ · ln(2)
A = λN
SA = A divided by the mass (or volume) of the sample
N(t) = N0 · e^(−λt)

where N0 is the initial amount of active substance—substance that has the same percentage of unstable particles as when the substance was formed.
The units in which activities are measured are: becquerel (symbol Bq) = number of disintegrations per second; curie (Ci) = 3.7 × 10^10 disintegrations per second. Low activities are also measured in disintegrations per minute (dpm).
As discussed above, the decay of an unstable nucleus is entirely random and it is impossible to predict when a particular atom will decay. However, it is equally likely to decay at any time. Therefore, given a sample of a particular radioisotope, the number of decay events –dN expected to occur in a small interval of time dt is proportional to the number of atoms present. If N is the number of atoms, then the probability of decay (–dN/N) is proportional to dt:

−dN/N = λ dt
Particular radionuclides decay at different rates, each having its own decay constant (λ). The negative sign indicates that N decreases with each decay event. The solution to this first-order differential equation is the following function:

N(t) = N0 · e^(−λt)
This function represents exponential decay. It is only an approximate solution, for two reasons. Firstly, the exponential function is continuous, but the physical quantity N can only take non-negative integer values. Secondly, because it describes a random process, it is only statistically true. However, in most common cases, N is a very large number and the function is a good approximation.
In addition to the decay constant, radioactive decay is sometimes characterized by the mean lifetime. Each atom "lives" for a finite amount of time before it decays, and the mean lifetime is the arithmetic mean of all the atoms' lifetimes. It is represented by the symbol τ, and is related to the decay constant as follows:

τ = 1/λ
A more commonly used parameter is the half-life. Given a sample of a particular radionuclide, the half-life is the time taken for half the radionuclide's atoms to decay. The half-life is related to the decay constant as follows:

t1/2 = ln(2)/λ = τ · ln(2)
This relationship between the half-life and the decay constant shows that highly radioactive substances are quickly spent, while those that radiate weakly endure longer. Half-lives of known radionuclides vary widely, from more than 10^19 years (for very nearly stable nuclides, for example, 209Bi) to 10^−23 seconds for highly unstable ones.
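As an illustrative check of these relations (the initial amount and half-life below are arbitrary example values), the decay constant can be computed from the half-life and plugged into the exponential decay law:

import math

def remaining_atoms(n0, half_life, t):
    lam = math.log(2.0) / half_life      # decay constant: lambda = ln(2) / t1/2
    return n0 * math.exp(-lam * t)       # N(t) = N0 * e^(-lambda * t)

n0 = 1.0e20                              # assumed initial number of atoms
half_life = 5730.0                       # years (carbon-14, for example)
for t in (0.0, 5730.0, 11460.0, 20000.0):
    print(t, remaining_atoms(n0, half_life, t))

After each half-life the printed count drops by half, as expected.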
| http://www.newworldencyclopedia.org/entry/Radioactive_decay | 13
10 | Coloring By Numbers
Unless you've been living under a rock, or are still cruising the Web with a Lynx browser, you're probably familiar with the use of colors in Web page design. HTML allows us to specify colors for text, links, visited links, backgrounds, etc., with a fair amount of ease.
You can do all these things, and much more, under CSS1. In fact, with CSS1 you can define a color value for any element, class, pseudo-class or -element, and so on. However, we aren't going to talk about how to set the colors today. Instead, we're going to discuss the ways in which the colors can be specified. There are more of them than you might think.
How Colors are Represented
On a computer display, each color is produced by mixing red, green, and blue light (RGB), with each component set to some strength. For example, white is a combination of all three colors at full strength. Black is the absence of color. Red is created by turning red up to full strength, and blue and green down to nothing. To obtain various shades of red, turn down the red, then turn up blue or green until you get the desired color. To create the color purple, for example, turn the red and blue values up as high as they'll go, and leave green at zero.
Actually, I am going to talk briefly about one property: color. I bring it up because I'm going to be using it in examples throughout this article. color is used to set the foreground color for an element. In most cases, that will be text, although in some cases it could be used to determine the color of a border. Anyway, the generic syntax for color is:

color: value;

That isn't an actual value there. It's a placeholder for a whole range of possible values, which are the subject of today's discussion.
Playing the Percentages
The easiest CSS1 color system to understand uses percentage values. In this scheme, the RGB levels are specified by writing out three percentages, like this:
<P style="color: rgb(50%,0%,50%);"> This is a shade of purple.</P>
This is a shade of purple.
Let's pull out the actual rule and examine it. First, the set of RGB values is preceded by the keyword rgb and surrounded by parentheses. A generic representation of this system would be rgb(R%,G%,B%). Second, the three percentages are separated by commas. Finally, the order in which the percentages are given is critical. They must be specified in the order red-green-blue. If I were to change the order of the values above, I'd get a very different color:
<P style="color: rgb(0%,50%,50%);"> This is not a shade of purple.</P>
This is not a shade of purple.
One theoretical advantage of percentages is that you can, if you so desire, declare percentages using real numbers (as opposed to integers, which we'll get to in a moment). In other words, the following declaration is perfectly legal:
<P style="color: rgb(50.5%, 0.0%, 49.7%);"> This should be a shade of purple.</P>
This should be a shade of purple.
If, however, your browser shows a color which is significantly different from the one shown in the previous 'purple' example, then it probably isn't interpreting real numbers too well. This is one of those areas where authors should tread with caution.
A Scale of a Different Range
The next way to specify colors is very similar to using percentages. This time, however, instead of values having the range 0.0% - 100.0%, they have the range 0 - 255, like this:
<P style="color: rgb(128,0,128);"> This is a shade of purple.</P>
This is a shade of purple.
The number 255, as you may know, is one of those mystical computing values: it is one less than 256, the power of two (2^8) around which so much of computer programming revolves. We'll see how it came into color values in the next section, but for now, let's concentrate on how to use it.
As you can see above, it's another rgb(R,G,B) expression, only this time without the percentage signs. Also, using this particular system, only integers (whole numbers) are allowed, so no decimals! You can convert back and forth between this system and the percentage system simply by dividing an integer by 255 to get the percentage value, or calculating the percentage of 255 to get the proper integer. For example:
192 / 255 = .75294 (approx. 75%)
25% of 255 = 63.75 (approx. 64)
Thus, rgb(192,64,0) is very close to being the same as rgb(75%,25%,0%). Note that I say "very close" because they aren't exact matches, but the difference of a few tenths of a percent won't be obvious to anyone except the most color-obsessive.
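If you'd rather not do that arithmetic by hand, a couple of lines of Python (my own sketch, not part of the original article) convert in either direction:

def percent_to_level(p):
    # 0-100% -> 0-255, rounded to the nearest integer
    return round(p / 100.0 * 255)

def level_to_percent(n):
    # 0-255 -> 0-100%
    return n / 255.0 * 100.0

print(percent_to_level(75))    # 191, close to the 192 used above
print(level_to_percent(192))   # about 75.3%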
This Should Seem Familiar
We turn now to hexadecimal notation, which you've probably seen or used in countless Web pages, because it's been used for at least a couple of years now. In this color scheme, colors are represented using three hexadecimal values strung together into one string, something like this: #FC0EA8. This system can be represented as #RRGGBB, thus showing that the red, green, and blue values fall right where you'd expect them.
If you're familiar with hexadecimal notation, this part will be a bit of a bore. For those still in the dark, here's how it works. Every decimal number (that is, every number in the base-ten counting system we all use) has an equivalent value in the hexadecimal notation, which is base-sixteen. Counting up from zero goes like this:
00,01,02,03,04,05,06,07,08,09,0A,0B,0C,0D,0E,0F, 10,11,12,13,14,15,16,17,18,19,1A,1B,1C,1D,1E,1F, 20,21,22...
In each pair of the #RRGGBB notation, you can use values in the range 00 - FF. 00 is the same as zero, of course, and FF translates to 255 in decimal counting. Thus the use of the range 0 - 255 in the integer-value system we talked about above. In order to get the same shade of purple we've been using in some of our examples:
<P style="color: #800080;"> This is a shade of purple.</P>
This is a shade of purple.
From this, you can infer that 80 is roughly equivalent to 50%. As it happens, it's exactly the same as 128, since 128 decimal is the same as 80 hexadecimal.
As for converting from hexadecimal to integers (and from there to percentages), my recommendation is that you use a calculator which converts between decimal and hexadecimal numbers, or obtain a program for your computer which does the same thing. Of course, you might also have a program which lets you pick colors from a palette and then gives you the appropriate hexadecimal value. In that case, just use the program's output and don't worry about converting from one system to another.
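If you don't have such a program handy, a tiny Python sketch (again, my own example rather than anything from the article) will do the conversion:

def hex_color_to_rgb(hex_color):
    # "#800080" -> (128, 0, 128)
    h = hex_color.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hex_color(r, g, b):
    # (128, 0, 128) -> "#800080"
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

print(hex_color_to_rgb("#800080"))     # (128, 0, 128)
print(rgb_to_hex_color(128, 0, 128))   # #800080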
There is an even easier way to use hexadecimal notation. It's sort of like a shorthand, although you can only use it in certain circumstances. This system is written #RGB, instead of #RRGGBB. In this case, each of the RGB levels is represented using only a single digit in the range 0 - F. Therefore, the color white, which is written #FFFFFF in normal hex notation, is written #FFF in this shorthand. Similarly, red can be written either #FF0000 or #F00.
What you have to remember is that the three-digit form is converted to the six-digit form by replicating digits, not filling in zeros. Therefore, the color #080 is the same as #008800, and #FC3 translates to #FFCC33.
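To see the digit-replication rule in code (my own illustrative snippet), here is a short function that expands the three-digit form:

def expand_shorthand(hex3):
    # "#FC3" -> "#FFCC33": each digit is doubled, not zero-filled
    r, g, b = hex3.lstrip("#")
    return "#" + r * 2 + g * 2 + b * 2

print(expand_shorthand("#FFF"))   # #FFFFFF
print(expand_shorthand("#080"))   # #008800
print(expand_shorthand("#FC3"))   # #FFCC33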
Just Name the Color!
There is one last way to declare colors. It's even simpler than the last one. In this system, you just fill in the name of the color, like this:
<P style="color: purple;"> This is a shade of purple.</P>
This is a shade of purple.
Of course, this is a lot easier to do, because color names are easier to remember than percentages or hexadecimal values or what have you. However, there is a major drawback, which is that there are only sixteen defined color names. They are, in alphabetical order:
aqua, black, blue, fuchsia, gray, green, lime, maroon, navy, olive, purple, red, silver, teal, white, yellow.
That's it. Some browsers may recognize more values than just these sixteen, but they aren't part of the specification.
We've seen all the ways you can define a color in CSS1. For those who are interested, I've drawn up a list of color equivalencies, based on the sixteen named colors. A little bonus for those of you who stuck with me to the end: the three-digit hex system is a convenient way to declare "Web-safe" colors for page elements. All Web-safe colors are based on the hex-pairs 00, 33, 66, 99, CC, and FF. An example is #33FF00. You can write the same value as #3F0 and save yourself some keystrokes, and that holds true for any of these "safe" colors.
Thanks for sticking with me through all the numbers, and come back for the next "Sense of Style!"
Color Equivalencies Table
Interested in finding out what the percentage, integer and hexadecimal values are for the 16 standard colors in the HTML 4.0 spec? This table has everything you need to start adding color to your site. | http://meyerweb.com/eric/articles/webrev/199807.html | 13
31 | What To Know About The Math Section Of The ACT
The ACT Mathematics section is designed to assess the mathematical proficiency students have typically acquired in courses taken by the end of the 11th grade. Students receive an hour to finish the 60-question math section – which boils down to roughly a minute per question. The multiple-choice problems cover content areas such as pre-algebra, elementary algebra, intermediate algebra, coordinate geometry, plane geometry, and trigonometry. Students must be comfortable using computational skills and basic formulas, but a knowledge of complex formulas or the ability to perform extensive computations is not required.
The Pre-Algebra questions make up 23% of the ACT math section. These problems are based on basic operations using whole numbers, decimals, fractions, and integers; place value; square roots and approximations; the concept of exponents; scientific notation; factors; and ratio, proportion, and percent. Content will also cover linear equations in one variable, absolute value and ordering numbers by value, elementary counting techniques and simple probability, and data collection, representation, and interpretation, assessing how well students understand descriptive statistics.
Elementary Algebra questions cover the properties of exponents and square roots, evaluation of algebraic expressions through substitution, using variables to express functional relationships, understanding algebraic operations, and the solution of quadratic equations by factoring. They are 17% of the ACT math.
Intermediate Algebra problems make up 15% of the math portion. They assess how well students understand the quadratic formula, absolute value inequalities and equations, radical and rational expressions, patterns and sequences, and systems of equations, quadratic inequalities, modeling, matrices, functions, roots of polynomials, and complex numbers.
Coordinate Geometry questions involve graphing and the relations between graphs and equations. This includes points, lines, polynomials, circles and other curves, graphing inequalities, slope, parallel and perpendicular lines, distance, midpoints, and conics. Coordinate Geometry questions are 15% of the ACT math exam.
Plane Geometry questions are designed to measure your understanding of the properties and relations of plane figures. Angles and relations among perpendicular and parallel lines, properties of circles, triangles, rectangles, parallelograms, and trapezoids, transformations, the concept of proof and proof techniques, volume, and applications of geometry to three dimensions are all covered. Plane Geometry problems are 23% of the math test.
Lastly, Trigonometry problems are 7% of the ACT math section. These questions cover trigonometric relations in right triangles, values and properties of trigonometric functions, graphing trigonometric functions, modeling using trigonometric functions, the usage of trigonometric identities, and solving for trigonometric equations.
Can you use a calculator? You may on this part of the exam, but you'll have to put it away when it's time for the next section of the ACT. Make sure that your calculator is ACT-approved! Most four-function, scientific, and graphing calculators are permitted, but calculators with a computer algebra system (such as the TI-89 or TI-92) and any calculator with a QWERTY keyboard are prohibited.
The ACT Math test is difficult because it assesses knowledge that you've learned, not just intuited from the problem at hand. It includes a wide range of material from your middle and high school math courses – and since so many topics are covered, it's important that you have a strong understanding of all these areas.
In order to do your best, you can follow some simple tips. These will help you approach the ACT math and the structure of the exam so test day will be a success. However, these tips can't replace good, old-fashioned knowledge. Understanding these strategies and applying them with your math skills will help you reach the score you want. That said….
Review, Review, Review. Give yourself plenty of time to go back and revisit areas you haven’t spent time with in a while, or ones that were tricky in the past. It doesn’t matter if you’ve gotten straight A’s in pre-algebra through trig – you can still benefit from a full review of material from years past. The ACT math covers a broad, broad area, and goes into tiny details you’ve probably forgotten, and will be important on the exam.
Stay in your time frame. As we said earlier, the ACT math exam is designed to allot you a minute per question. Spend too long figuring out that trigonometry problem, and you’ll be rushing through the rest of the test.
Know the simple rules. The writers of the ACT are trying to trick you, so you have to outwit them. They want you to forget the basics, so make sure you have those down before test day arrives. Don’t forget the little things – like what you do to one side of an equation must be done to the other side.
Memorize your formulas. Sure, there’s a lot. But having these down pat saves you from having to plug in answer choices, so make sure you can solve for X all by yourself. You should be comfortable figuring out everything from the angles of intersecting lines to using the quadratic formula and solving for the area of a rhombus.
The ACT math section is tough. But whether you work with a tutor or on your own to improve your speed and your dexterity with tricky problems, it's important to make sure you have your math knowledge down. Spend time studying and reviewing for the ACT math portion, and you should be able to achieve your target score on test day. | http://www.varsitytutors.com/blog/math+section+of+the+act | 13
38 | In 1962, the American telecommunications giant AT&T launched the world's first true communications satellite, called Telstar. Since then, countless communications satellites have been placed into earth orbit, and the technology being applied to them is forever growing in sophistication.
Satellite communications comprise two main components:
The satellite itself is also known as the space segment, and is composed of three separate units, namely the fuel system, the satellite and telemetry controls, and the transponder. The transponder includes the receiving antenna to pick up signals from the ground station, a broadband receiver, an input multiplexer, and a frequency converter which is used to reroute the received signals through a high-powered amplifier for downlink. The primary role of a satellite is to reflect electronic signals. In the case of a telecom satellite, the primary task is to receive signals from a ground station and send them down to another ground station located a considerable distance away from the first. This relay action can be two-way, as in the case of a long distance phone call. Another use of the satellite is when, as is the case with television broadcasts, the ground station's uplink is then downlinked over a wide region, so that it may be received by many different customers possessing compatible equipment. Still another use for satellites is observation, wherein the satellite is equipped with cameras or various sensors, and it merely downlinks any information it picks up from its vantage point.
This is the earth segment. The ground station's job is two-fold. In the case of an uplink, or transmitting station, terrestrial data in the form of baseband signals is passed through a baseband processor, an up converter, a high-powered amplifier, and a parabolic dish antenna up to an orbiting satellite. A downlink, or receiving station, works in the reverse fashion from the uplink, ultimately converting signals received through the parabolic antenna back to a baseband signal.
Since the beginnings of the long distance telephone network, there has been a need to connect the telecommunications networks of one country to another. This has been accomplished in several ways. Submarine cables have been used most frequently. However, there are many occasions where a large long distance carrier will choose to establish a satellite based link to connect to transoceanic points, geographically remote areas or poor countries that have little communications infrastructure. Groups like the international satellite consortium Intelsat have fulfilled much of the world's need for this type of service.
Various schemes have been devised to allow satellites to increase the bandwidth available to ground based cellular networks. Every cell in a cellular network divides up a fixed range of channels which consist of either frequencies, as in the case of FDMA systems, or time slots, as in the case of TDMA. Since a particular cell can only operate within those channels allocated to it, overloading can occur. By using satellites which operate at a frequency outside those of the cell, we can provide extra satellite channels on demand to an overloaded cell. These extra channels can just as easily be, once free, used by any other overloaded cell in the network, and are not bound by bandwidth restrictions like those used by the cell. In other words, a satellite that provides service for a network of cells can allow its own bandwidth to be used by any cell that needs it without being bound by terrestrial bandwidth and location restrictions.
C-Band (3.7 - 4.2 GHz) - Satellites operating in this band can be spaced as close as two degrees apart in space, and normally carry 24 transponders operating at 10 to 17 watts each. Typical receive antennas are 6 to 7.5 feet in diameter. More than 250 channels of video and 75 audio services are available today from more than 20 C-Band satellites over North America. Virtually every cable programming service is delivered via C-Band.SBCA
Ku Band (11.7 - 12.2 GHz) - Satellites operating in this band can be spaced as closely as two degrees apart in space, and carry from 12 to 24 transponders that operate at a wide range of powers from 20 to 120 watts each. Typical receive antennas are three to six feet in diameter. More than 20 FSS Ku-Band satellites are in operation over North America today, including several "hybrid" satellites which carry both C-Band and Ku-Band transponders. PrimeStar currently operates off Satcom K-2, an FSS or so-called "medium-power" Ku-Band satellite. AlphaStar also uses an FSS-Ku Band satellite, Telestar 402-R.SBCA
Ku-Band (12.2 - 12.7 GHz) - Satellites operating in this band are spaced nine degrees apart in space, and normally carry 16 transponders that operate at powers in excess of 100 watts. Typical receive antennas are 18 inches in diameter. The United States has been allocated eight BSS orbital positions, of which three (101, 110 and 119 degrees) are the so-called prime "CONUS" slots from which a DBS provider can service the entire 48 contiguous states with one satellite. A total of 32 DBS "channels" are available at each orbital position, which allows for delivery of some 250 video signals when digital compression technology is employed.SBCA
DBS (Direct Broadcast Satellite) -The transmission of audio and video signals via satellite direct to the end user. More than four million households in the United States enjoy C-Band DBS. Medium-power Ku-Band DBS surfaced in the late 1990s with high power Ku-Band DBS launched in 1994.SBCA
In the maritime community, satellite communication systems such as Inmarsat provide good communication links to ships at sea. These links use a VSAT type device to connect to geosynchronous satellites, which in turn link the ship to a land based point of presence to the respective nations telecommunications system.
Along the same lines as the marine based service, there are VSAT devices which can be used to establish communication links even from the world's most remote regions. These devices can be hand-held, or fit into a briefcase. Digital data at 64K ISDN is available with some (Inmarsat).
Another VSAT-oriented service uses a small apparatus that determines navigational coordinates by triangulating the signals from multiple geosynchronous satellites.
Incorporating satellites into terrestrial networks is often hindered by three characteristics possessed by satellite communication.
Due to the high noise present on a satellite link, numerous error correction techniques have been tested on such links. They fall into the two categories of forward error correction (FEC) and automatic repeat request (ARQ):
In this method a certain number of information symbols are mapped to a larger set of symbols, so that more symbols are transmitted than the original data contained. When these symbols are checked on the receiving end, the redundant symbols are used to recover the original symbols, as well as to check for data integrity. The more redundant symbols that are included in the mapping, the better the reliability of the error correction. However, it should be noted that the more redundant symbols are used to achieve better integrity, the more bandwidth is consumed. Since this method uses a relatively large amount of redundant data, it may not be the most efficient choice on a clear channel. However, when noise levels are high, FEC can more reliably ensure the integrity of the data.
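As a concrete (and deliberately simplistic) illustration of the redundancy idea, the Python sketch below uses a triple-repetition code rather than any scheme named in the text: every bit is sent three times and the receiver takes a majority vote, so a single corrupted symbol is repaired without any retransmission, at the cost of tripling the bandwidth.

```python
# Minimal forward-error-correction illustration: 3x repetition + majority vote.
def fec_encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]   # 3x redundancy

def fec_decode(symbols):
    out = []
    for i in range(0, len(symbols), 3):
        triple = symbols[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)          # majority vote
    return out

data = [1, 0, 1, 1]
sent = fec_encode(data)
sent[4] ^= 1                      # noise flips one symbol on the satellite link
print(fec_decode(sent) == data)   # True: the error was corrected, no resend
```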
In this method, data is broken into packets. Within each packet an error-checking key is included, often of the cyclic redundancy check (CRC) sort. If the error code reflects a loss of integrity in a packet, the receiver can request that the sender resend that packet. ARQ is not very good in a channel with high noise, since many retransmissions will be required, and the noise levels that corrupted the initial packet are likely to cause corruption in subsequent packets. ARQ is more suitable to relatively noise-free channels.
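A minimal sketch of the packet-plus-check-key idea is shown below in Python, using the standard library's CRC-32; the packet format and helper names are invented for illustration. The receiver recomputes the CRC and, on a mismatch, would ask the sender for a retransmission.

```python
import zlib

def make_packet(payload: bytes) -> bytes:
    crc = zlib.crc32(payload).to_bytes(4, "big")
    return payload + crc

def check_packet(packet: bytes):
    payload, crc = packet[:-4], packet[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") == crc:
        return payload          # integrity confirmed
    return None                 # corrupted: request retransmission

pkt = make_packet(b"telemetry frame 42")
print(check_packet(pkt))                         # b'telemetry frame 42'
corrupted = b"x" + pkt[1:]
print(check_packet(corrupted))                   # None -> send a resend request
```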
With this form of ARQ, known as stop and wait, the sender must wait for an acknowledgement of each packet before it can send a new one. This can take upwards of 4/10ths of a second per packet, since it takes 2/10ths of a second for the receiver to get the packet and another 2/10ths of a second for the sender to receive the acknowledgement.
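The back-of-the-envelope Python below shows why that round trip hurts throughput on a stop-and-wait link; the 0.2 s one-way delay comes from the text, while the packet size and channel rate are assumed values chosen only for illustration.

```python
one_way_delay = 0.2          # seconds, from the text
rtt = 2 * one_way_delay      # packet out + acknowledgement back = ~0.4 s

packet_bits = 8_000          # assumed 1000-byte packet
link_rate = 1_000_000        # assumed 1 Mbit/s channel
tx_time = packet_bits / link_rate

throughput = packet_bits / (tx_time + rtt)
print(f"effective throughput ~ {throughput/1000:.0f} kbit/s "
      f"({100 * throughput / link_rate:.0f}% of the channel)")
```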
This method of ARQ, go-back-N (GBN), is an improvement over stop and wait in that it allows the sender to keep sending packets until it gets a request for a resend. When the sender gets such a request, it sends packets over again starting at the requested packet. It can again send packets until it receives another retransmit request, and so on.
This ARQ protocol, selective repeat, is an improvement over go-back-N in that it allows the receiver to request a retransmit of only the packet that it needs, instead of that packet and all that follow it. The receiver, after receiving a bad packet and requesting a retransmit, can continue to accept any good packets that are coming. Of the three ARQ methods discussed, this is the most efficient for satellite transmissions.
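The Python sketch below models just the receiver side of a selective-repeat scheme under simplifying assumptions (no window limit, reliable feedback): out-of-order packets are buffered, only the missing sequence number is re-requested, and data is delivered in order once the retransmission arrives.

```python
def selective_repeat_receive(arrivals):
    expected, buffer, delivered, resend_requests = 0, {}, [], []
    for seq, data in arrivals:
        if seq > expected and expected not in resend_requests:
            resend_requests.append(expected)      # ask only for the missing packet
        buffer[seq] = data
        while expected in buffer:                 # deliver everything now in order
            delivered.append(buffer.pop(expected))
            expected += 1
    return delivered, resend_requests

# Packet 1 is lost at first and arrives later as a retransmission.
arrivals = [(0, "a"), (2, "c"), (3, "d"), (1, "b")]
print(selective_repeat_receive(arrivals))   # (['a', 'b', 'c', 'd'], [1])
```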
ARQ methods can be demonstrated to provide a usable error correction scheme, but they are also the most expensive in terms of hardware. This is in part due to the buffering memory that is required, but more importantly to the cost of the receiver, which needs to be able to transmit re-requests. Systems such as the Direct Broadcast Satellites used for television signal distribution would become inordinately expensive if they had to make use of ARQ, since the home-based receiver would then need to be a transmitter, and the 18-inch dish would be inadequate for the requirements of transmitting back to a satellite.
In today's global networking landscape, there are many ways to transmit data from one place to another. It is desirable to be able to incorporate any type of data transmission media into a network, especially in networks that encompass large areas. A hybrid network is one that allows data to flow across a network, using many types of media, either satellite, wireless or terrestrial, transparently. Since each type of media will have different characteristics, it is necessary to implement a standard transmission protocol. One that is normally used in hybrid networks is TCP/IP. In addition, much work is being done to use TCP/IP over ATM for the satellite segments of hybrid networks, about which more will be discussed later.
One way to get around the need in ARQ for the receiver to request retransmits via an expensive and slow satellite link is to use a form of hybrid network. In one form of hybrid network, the receiver transmits its requests back to the sender via a terrestrial link. The terrestrial link allows for quicker, more economical and less error-prone transmission from the receiver, and the costs associated with the receiver's hardware are greatly reduced compared to what they would be if it had to transmit back over the satellite link. There are products on the market today that allow a home user to get internet access at around 400 Kbps via digital satellite, while the retransmit signals are sent via an inexpensive modem or ISDN line.
In fact, a product currently being marketed by DirecPC called Turbo Internet uses a form of hybrid network. The system uses two network interfaces; one connects via a special ISA bus PC adapter to a receive-only Very Small Aperture Terminal (VSAT), while the other is a modem attached to a serial port. Inbound traffic comes down to the VSAT, while outbound traffic goes through the modem link. The two interfaces are combined to appear as a single virtual interface to upper layer TCP/IP protocol stacks by a special NDIS-compliant driver. The Serial Line Internet Protocol (SLIP) is used to connect the modem-based link with an internet service provider. Packets are encapsulated by the terminal such that the IP address of the destination host is embedded underneath the IP address of the DirecPC Gateway, to which all packets leaving the terminal must go. Once at the gateway, the outer packet is stripped, and the gateway contacts the destination address within. Upon receiving the response from the host, the gateway then prepares the packet for satellite transmission, which is used to send the packet back to the terminal.
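A rough sketch of the encapsulation step is given below using Python dictionaries; the addresses are hypothetical and the real DirecPC packet formats are certainly more involved. The point is only that the terminal wraps each outbound packet in an outer packet addressed to the gateway, which strips that outer layer and contacts the true destination.

```python
GATEWAY = "192.0.2.1"                 # hypothetical gateway address

def terminal_send(src, dst, payload):
    inner = {"src": src, "dst": dst, "data": payload}
    return {"src": src, "dst": GATEWAY, "data": inner}    # outer packet

def gateway_forward(outer):
    inner = outer["data"]             # strip the outer packet
    return f"gateway fetches {inner['dst']} on behalf of {inner['src']}"

outer = terminal_send("198.51.100.7", "203.0.113.80", b"GET / HTTP/1.0")
print(outer["dst"])                   # everything first goes to the gateway
print(gateway_forward(outer))
```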
There are problems with carrying ATM over satellite links, however. The satellite link's relatively large propagation delays can significantly increase the latency of feedback mechanisms essential for congestion control; other issues include acquisition time, cell in-synch time and cell discard probability (BARAS). Solutions to these issues are still being explored.
The group that is currently working to develop interoperability specifications that facilitate ATM access and ATM network interconnect in both fixed and mobile satellite networks is known as the TIA/SCD/CIS - WATM group. As of March 1997, the group had proposed an initial set of standards.
The group also has established requirements for dealing with the physical layer, the media access control layer and the data link control layer.
VSAT stands for Very Small Aperture Terminal. Although this acronym has been used amongst telecom groups for some time now to describe small earth stations, the concepts of VSAT are being applied to modern hand held satellite communications units, such as GPS (Global Positioning System), portable Inmarsat phones and other types of portable satellite communication devices.
GEO stands for Geostationary Earth Orbit. This refers to satellites that are placed in orbit such that they remain stationary relative to a fixed spot on earth. If a satellite is placed at about 35,900 km above the earth, its angular velocity is equal to that of the earth, causing it to appear to remain over the same point on earth. This allows such satellites to provide constant coverage of the area and eliminate the blackout periods of ordinary orbiting satellites, which is good for television broadcasting. However, their high altitude causes a long delay, so two-way communications, which would need to be uploaded and then downloaded over a distance of roughly 72,000 km, are not often carried over this type of orbit.
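The quick Python arithmetic below, using the 35,900 km altitude quoted above and the speed of light, shows roughly where that delay comes from; it ignores ground-station geometry and processing time.

```python
C_KM_PER_S = 299_792          # speed of light
altitude_km = 35_900

one_way = 2 * altitude_km / C_KM_PER_S      # ground -> satellite -> ground
round_trip = 2 * one_way                    # request up and reply back

print(f"one-way (up + down): {one_way*1000:.0f} ms")     # ~240 ms
print(f"two-way exchange:    {round_trip*1000:.0f} ms")  # ~480 ms
```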
LEO stands for Low Earth Orbit, and it refers to satellites orbiting far below geostationary altitude, typically no more than about 2,000 km above the earth. This type of orbit reduces transmission times compared to GEO. A LEO orbit can also be used to cover a polar region, which GEO cannot accomplish. Since a LEO satellite does not appear stationary to earth stations, however, earth stations need an antenna assembly that will track the motion of the satellite.
There are basically two types of networks being proposed here, namely LEO based and GEO based ones.
LEO networks use low orbits, which allows for much less latency than GEO-based networks. One problem these satellites have is that since they are not geostationary (they are constantly orbiting around the earth), they cannot talk continuously to the same ground station. The way this is overcome is by using intersatellite communications, so that the satellites function together as a blanket of coverage. A major player in this area is Teledesic.
The Teledesic Network uses a constellation of 840 operational interlinked low-Earth orbit satellites. The system is planned to provide "on-demand" channel rates from 16 Kbps up to 2.048 Mbps ("E1"), and for special applications up to 1.24416 Gbps ("OC-24"). The network uses fast packet switching technology based on the Asynchronous Transfer Mode (ATM) using fixed-length (512-bit) packets. Each satellite in the constellation is a node in the fast packet switch network, and has intersatellite communication links with eight adjacent satellites. Each satellite is normally linked with four satellites within the same plane (two in front and two behind) and with one in each of the two adjacent planes on both sides. Each satellite keeps the same position relative to other satellites in its orbital plane. The Teledesic Network uses a combination of multiple access methods to ensure efficient use of the spectrum. Each cell within a supercell is assigned to one of nine equal time slots. All communication takes place between the satellite and the terminals in that cell during its assigned time slot. Within each cell's time slot, the full frequency allocation is available to support communication channels. The cells are scanned in a regular cycle by the satellite's transmit and receive beams, resulting in time division multiple access (TDMA) among the cells in a supercell. Since propagation delay varies with path length, satellite transmissions are timed to ensure that cell N (N=1, 2, 3,...9) of all supercells receive transmissions at the same time. Terminal transmissions to a satellite are also timed to ensure that transmissions from the same numbered cell in all supercells in its coverage area reach that satellite at the same time. Physical separation (space division multiple access, SDMA) and a checkerboard pattern of left and right circular polarization eliminate interference between cells scanned at the same time in adjacent supercells. Guard time intervals eliminate overlap between signals received from time-consecutive cells. (TELEDESIC)
Within each cell's time slot, terminals use Frequency Division Multiple Access (FDMA) on the uplink and Asynchronous Time Division Multiple Access (ATDMA) on the downlink. On the uplink, each active terminal is assigned one or more frequency slots for the call's duration and can send one packet per slot each scan period (23.111 msec). The number of slots assigned to a terminal determines its maximum available transmission rate. One slot corresponds to a standard terminal's 16 Kbps basic channel with its associated 2 Kbps signaling and control channel. A total of 1800 slots per cell scan interval are available for standard terminals. The terminal downlink uses the packet's header rather than a fixed assignment of time slots to address terminals. (TELEDESIC)
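Taking the figures quoted above at face value, a rough Python calculation of the per-cell uplink capacity for standard terminals looks like this (back-of-the-envelope arithmetic, not a figure taken from the Teledesic documents):

```python
slots_per_cell = 1800        # uplink slots per cell scan interval, from the text
basic_rate_kbps = 16         # basic channel per slot
signaling_kbps = 2           # associated signaling and control channel

user_capacity = slots_per_cell * basic_rate_kbps
total_with_signaling = slots_per_cell * (basic_rate_kbps + signaling_kbps)

print(f"per-cell user capacity:      {user_capacity/1000:.1f} Mbit/s")   # 28.8
print(f"including signaling traffic: {total_with_signaling/1000:.1f} Mbit/s")
```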
GEO's high points are that its satellites are geostationary, which means that the difficulties of intersatellite communications are avoided. The problem arises from the latency delays caused by the high orbit. Applications that rely on steady bandwidth, like multimedia, will definitely be affected.
Although this methodology and approach can be used in just about any troubleshooting scenario, we will focus our exercise on routing. A router is a device that determines the path from a source to a destination. A router is the default gateway for a LAN, the exit point from the LAN to the WAN. The router (or gateway) is what connects more than one network segment together, whether it be two LANs, two separate WANs and so on. A router will (if programmed correctly) know the topology of a network so that if adjacent routers go down (or the lines that attach to them such as DSL, T1 and so on do), the router will be able to find a new path to send the destination data. This is meant to reconverge the network so that data transmissions can continue. To know the topology, a router keeps a table of the routes it has been programmed to know, or it has learned from a neighboring router. Routers create or maintain a table of the available routes and use this information to determine the best route for a given data packet. In this article we will look at what could go wrong and cause instability in your environment, or just outages in general.
When working in a networked environment, it’s common to have to troubleshoot problems often and quickly. Not only do problems come up often, but they are normally complex, require a lot of abstract thought, involve multiple parties and can be downright confusing. The following flow chart is a quick way to visualize all the steps involved in troubleshooting just about anything:
First you want to establish a baseline of normal operation. If you do not know what your router operates like normally, how will you know if it has a problem?
In this section you want to consider your possibilities for defining what the problem is. If it’s the Flu, then we have to attack it with rest and medication.
Make the plan, have a fallback plan in case the main plan fails.
Test all your plans, see if they work.
After the test, observe to see if the results show that the problem was resolved…
Here is where you have choices. If the problem has been solved, then great. If not, then you will want to go back to the step where you started to consider the possibilities. This will allow you to test until resolved.
Always document the solution to an issue.
Now that you have a general understanding on how to tackle an issue, let’s look at what could happen with a router. Let’s look at how to gather some facts.
Gather your Facts
When gathering facts, consider using some of these pointers when trying to figure out what the problem is, the pointers listed could help you isolate and determine what a possible problem could be:
- Consider the OSI model when troubleshooting. In other words, make sure it’s a routing problem first, and an issue with either a layer 3 protocol or process. A router operates at layer 3 of the OSI model. This can also be confusing when it comes to ARP. ARP will cause you many problems on networks if you do not understand it. ARP operates at layer 2 and resolves IP addresses (layer 3) to MAC addresses (layer 2), but sometimes clearing the ARP cache can help you solve routing problems because the cache may be holding a stale entry for an interface or port on the router. Clearing the ARP cache (just like on XP or 2003) on the router can help you, so consider it as an option. Now that you know what layer you are operating on (3), and that you may still need to clear the ARP cache to get data moving again, let’s look at other ways to gather some facts.
- Router-specific tools and protocols can help you gather information. CDP (the Cisco Discovery Protocol) is used to gather information about the network it’s running on, as are the commands you can use on the router to check interface statistics, routing table information and so on.
- You can use client and server based tools whether they be from Microsoft or Novell, or Linux and UNIX. Tools such as ifconfig, ipconfig, winipcfg and so on can be used to get IP information. Servers also maintain route tables.
- Check other things that may give the impression that a routing problem is taking place when it may be something else, such as a wide-scale DNS issue, or another device causing problems like a switch, firewall or access point. For instance, if your helpdesk phones light up because a firewall went down, the problem may appear to be a routing problem when it really isn’t; the culprit is a device performing a different function.
There are many other actions you can take but this should get you started, you need to gather facts: what are the facts and what do you think the problem may be? The more investigative work you do up front, the less time you will spend later because if you don’t do this part correctly, then you will have to do it again later.
Start to Troubleshoot
Now that you think you know what the problem is, now you have to try to solve it. Let’s take a look at a sample topology.
If you have a network segment off of Router A and a network segment off of Router D, then you would want to see if you could reach from one to the other. The network may be slow and that may be because Router C had a problem and now the network had to reconverge and use links with lesser speeds…
The 56K link could be the cause of the slowdown. Now, I know this example is very basic and most networks are not commonly set up with routers all laid out in a row (unless working within a distributed star topology), but this diagram should prove a point – that it’s very important to assess the routing in your network because there are many paths a packet can take… sometimes the unintended or wrong one.
You should also learn to ping in both directions, using remote access (such as a terminal server) to manage a workstation in Router D’s network (10.1.4.0).
Use tracert and any other tool in your Windows, NetWare or Linux/UNIX arsenal to test with – in both directions! Make sure you see the path from router A and from router D.
To ping, open up the Command Prompt and ping from your local PC to someplace past the final router, which should be router D. Ping a host such as 10.1.4.10. If you can, then you have connectivity, though that alone does not mean the routing is correct.
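One way to script this kind of reachability check from a machine that can see both sides is sketched below in Python; it simply shells out to the system ping command (using the correct count flag for Windows or Unix), and the addresses are hypothetical ones borrowed from the example topology.

```python
import platform
import subprocess

def reachable(host: str, count: int = 2) -> bool:
    """Return True if `host` answers ICMP echo requests."""
    # Windows ping uses -n for the echo count; Unix-like systems use -c.
    flag = "-n" if platform.system().lower() == "windows" else "-c"
    result = subprocess.run(["ping", flag, str(count), host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

# Hypothetical hosts on either side of the example topology.
for host in ["10.1.4.10", "10.1.1.1"]:
    print(host, "reachable" if reachable(host) else "unreachable")
```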
Routing Table Problems
In Windows, the ROUTE PRINT command will show you the computer’s routing table, whether routing is enabled or not – you will still see a routing table. You can see a routing table on Windows XP with one NIC card; you don’t need RRAS running on Windows Server 2003 to view the route table, although if you do, you will have way more flexibility over what you can do.
A look at an XP desktop shows a simple route table with the APIPA range, the attached 10.8.x.x segment, and the default route, also known as the default gateway address.
A Cisco router routing table looks similar, but has way more detail and more complexity. There is also much more you can do with it to include a massive amount of ‘debugging’ commands that allow you to obtain very detailed and specific information on the internal processes of the router.
You may have problems with your routing table. Commonly, if you clear the routing table (on Windows it would be ROUTE ADD to add a route and ROUTE DELETE to remove one – as well as switches for persistence, etc.), you will force the routers to relearn their routes and quite possibly also clear the problems – this is why a lot of times people reboot routers to clear a problem, which by the way is very bad to do, for one reason if not for many others: you clear the logs, which are held in memory, and lose them forever. Logs on routers are vital and can help determine many problems, so a power-off like this should be saved for extreme cases only.
Routing table problems include (but are not limited to):
- Inactive routes
- Unneeded routes
- Black hole routes
- Flapping links (such as Frame Relay links going up and down) which causes the routes to flap
- Invalid route tables
- Invalid arp cache causing incorrect IP assignment
- Problems with administrative distance or any other settings
When working with routers that connect your remote segments, make sure you understand how to troubleshoot between your links; your routers may be causing your problems. To work on a network you have to understand the Wide Area Network (WAN) that connects Local Area Networks (LANs) together. The Wide Area Network is normally connected via high-end routers that forward data based on how they are configured. The data is sourced from one location, sent to a default gateway, and then forwarded from that router (based on its tables) to another router, which will in turn forward it to where it believes the destination to be. In sum, make sure you use a good troubleshooting methodology, baseline your systems so you have a starting range to work with, use all the tools at your disposal (such as ping, traceroute (tracert) and so on), and make sure you troubleshoot in both directions to accurately determine where your problems lie.
Links and Reference Material
Cisco Introduction to Routing
Microsoft Common Routing Problems
The scientific method is the ideal process by which scientists say scientific research should be conducted. It is a standardized process which involves asking questions, searching for answers, guessing a possible answer called a hypothesis, and then evaluating that hypothesis by experiments in a specific, rigid fashion. Most articles in scientific journals are formatted according to its steps.
Science is an active process where systematic descriptions of reality continue to accumulate as scientists ask questions and use improved experiments and analysis to answer those questions. The goal of the scientific method is to test the validity of a hypothesis - a proposed description of reality. It is not a set of directions for making original discoveries and it does not set out the means that scientists must use in order for their research to succeed. The whole point is to compare the hypothesis with the facts.
Hans Storch wrote:
- Data must be accessible to adversaries; joint efforts are needed to agree on test procedures to validate, once again, already broadly accepted insights.
Steps of the Scientific Method
Although by no means exhaustive, the following steps are used by a majority of scientists in their work:
- Observation of phenomena
- Formulation of an hypothesis
- Predictions, using this hypothesis
A science educator lists 11 steps and 3 supporting principles.
The scientist observes something interesting, and he wants to know how it happened. He lays down the basic questions as to what is responsible for the phenomena he observed, and from there begins to form his hypothesis.
In asking these questions scientists also look for research that has already been done on their topic to determine whether they are duplicating a past experiment, doing something new, or building on a previous experiment. Such research, although tedious and time-consuming, builds the foundation for the knowledge to be gained from the scientist’s questions.
A hypothesis is a statement of what the researcher thinks will happen in the experiment. This is usually an educated guess using current theory and has to be testable and observable. It is a common mistake to let one's own scientific bias stop the process at this point, on the assumption that the idea is so logical that it needs no testing.
When designing the experiment, the researcher carefully controls as many variables as possible. In most experiments there is a control group and a treatment group. The two groups are as similar as possible, but the treatment group is the one that experiences the variable the researcher is studying.
After the data are analyzed and written down, the scientist checks the results against the hypothesis; if the results have proven the hypothesis to be wrong, then it must be discarded. Even if the hypothesis is not correct, conclusions can still be made and significant knowledge gained. If the hypothesis is indicated to be correct, then the results are published and sent to other scientists within the field in question.
Scientists must be able to take such published data and repeat the experiment. This not only confirms the validity of the original hypothesis, but advances it to the level of a “theory”, which in science means an interpretation or explanation that is well-supported by evidence which is testable and tested. A theory can also be falsified by evidence.
- A scientific theory summarizes a hypothesis or group of hypotheses that have been supported with repeated testing. A theory is valid as long as there is no evidence to dispute it. Therefore, theories can be disproven. Basically, if evidence accumulates to support a hypothesis, then the hypothesis can become accepted as a good explanation of a phenomenon. One definition of a theory is to say it's an accepted hypothesis.
A classic example of the Scientific Method being used stemmed from a simple bet. In 1872 a railroad baron named Leland Stanford made a wager that a horse’s hooves do not touch the ground at some point in a gallop. To test the hypothesis, photographer Eadweard Muybridge was hired; he installed a series of trip wires which were rigged from a long wall about two inches from the ground, each one tied to a camera’s shutter facing the wall; the experiment called for the horse to run past the wall, tripping the wires and getting a photo at each point. The results were factual and conclusive: a horse at a running gallop does have all four hooves off the ground at some point.
The agreement of an observation or experiment with a hypothesis does not on its own prove the hypothesis correct. It merely makes its correctness more likely. The hypothesis must agree with other aspects of the scientific framework of knowledge, and survive the test of repeated experiments by other people working independently. Over time, the accumulation of data will tend to confirm or refute a hypothesis.
Scientists may be influenced by their world-views to look for certain results that fit a preconception. The test of objectivity and rigor imposed on their work by the need for other scientists to replicate it tends to make the truth-seeking facility of the scientific method prevail in the long run, although this is difficult where the world-view is widespread.
- Department of Physics and Astronomy, University of California, Riverside
- The Scientific Method, from Clermont College
- Introduction to the Scientific Method
- Science Buddies.org
- Biology for Kids
- Example of the Scientific Method used, from NASA
- Campbell, Reece, Taylor, Simon, et al. Biology: Concepts and Connections 5th edition; Pearson Education, Upper Saddle River, NJ (2005)
May 10, 2012 - New results from NASA's Interstellar Boundary Explorer (IBEX) reveal that the bow shock, widely accepted by researchers to precede the heliosphere as it plows through tenuous gas and dust from the galaxy, does not exist.
According to a paper published in the journal Science online, the latest refinements in relative speed and local interstellar magnetic field strength prevent the heliosphere, the magnetic "bubble" that cocoons Earth and the other planets, from developing a bow shock. The bow shock would consist of ionized gas or plasma that abruptly and discontinuously changes in density in the region of space that lies straight ahead of the heliosphere.
"The sonic boom made by a jet breaking the sound barrier is an earthly example of a bow shock," says Dr. David McComas, principal investigator of the IBEX mission and assistant vice president of the Space Science and Engineering Division at Southwest Research Institute (SwRI). "As the jet reaches supersonic speeds, the air ahead of it can't get out of the way fast enough. Once the aircraft hits the speed of sound, the interaction changes instantaneously, resulting in a shock wave."
For about a quarter century, researchers believed that the heliosphere moved through the interstellar medium at a speed fast enough to form a bow shock. IBEX data have shown that the heliosphere actually moves through the local interstellar cloud at about 52,000 miles per hour, roughly 7,000 miles per hour slower than previously thought -- slow enough to create more of a bow "wave" than a shock.
"While bow shocks certainly exist ahead of many other stars, we're finding that our Sun's interaction doesn't reach the critical threshold to form a shock, so a wave is a more accurate depiction of what's happening ahead of our heliosphere -- much like the wave made by the bow of a boat as it glides through the water," says McComas.
Another influence is the magnetic pressure in the interstellar medium. IBEX data, as well as earlier Voyager observations, show that the magnetic field in the interstellar medium is stronger than previously thought, requiring even faster speeds to produce a bow shock. Combined, both factors now point to the conclusion that a bow shock is highly unlikely.
The IBEX team combined its data with analytical calculations and modeling and simulations to determine the conditions necessary for creating a bow shock. Two independent global models -- one from a group in Huntsville, Ala., and another from Moscow -- correlated with the analytical findings.
"It's too early to say exactly what this new data means for our heliosphere. Decades of research have explored scenarios that included a bow shock. That research now has to be redone using the latest data," says McComas. "Already, we know there are likely implications for how galactic cosmic rays propagate around and enter the solar system, which is relevant for human space travel."
IBEX's primary mission has been to image and map the invisible interactions occurring at the outer reaches of the solar system. Since its launch in October 2008, the spacecraft has also shed new light on the complex structure and dynamics occurring around Earth and discovered neutral atoms coming off the Moon.
Scientists from SwRI; Moscow State University; the Space Research Centre of the Polish Academy of Sciences; University of Bonn, Germany; University of Alabama, Huntsville; and University of New Hampshire were all involved in this study.
- D. J. McComas, D. Alexashov, M. Bzowski, H. Fahr, J. Heerikhuisen, V. Izmodenov, M. A. Lee, E. Möbius, N. Pogorelov, N. A. Schwadron, and G. P. Zank. The Heliosphere's Interstellar Interaction: No Bow Shock. Science, 10 May 2012 DOI: 10.1126/science.1221054
Light shines through minerals in a thinly sliced section of Martian meteorite Miller Range 03346, one of many samples in NASA's astromaterials collection. Image: NASA
Soon, the jeep-size rover Curiosity will trace its first six-wheeled tracks over the Martian surface. Over the next 23 months, Curiosity will scoop dust and drill into rocks for clues to the Red Planet's past. Although those samples won't make it back to Earth, some Martian rocks are already catalogued and preserved here as part of NASA's astromaterials collection.
Astromaterials are fragments and particles from planets, asteroids, stars and other extraterrestrial bodies. Scientists gather the materials during manned and unmanned space missions or after they naturally land on Earth as meteorites.
"About one in one thousand of the meteorites that fall on Earth was actually knocked off Mars by ancient impacts," says Carlton Allen, the collection curator based at the NASA Johnson Space Center in Houston. He coordinates the efforts of more than 30 people to document, preserve and distribute the samples for researchers around the world. These samples help scientists understand the formation and development of the solar system.
The collection includes lunar rocks gathered by the Apollo astronauts, particles of cosmic dust collected by special aircraft swooping through the stratosphere, and atoms carried by the solar wind.
Chemical and physical analyses can trace rocks that end up on Earth back to their extraterrestrial sources. Scientists crush small pieces of meteorites to a fine powder and analyze how the particles bend light when suspended in oil. Or, they microscopically view thin sections of the rocks and classify their structures. Then, they compare the results with what they already know about the moon, Mars, asteroids and comets.
The extraterrestrial samples require special laboratories and handling practices to keep their otherworldly purity. For example, during the six Apollo moon-landing missions astronauts photographed lunar rock and soil samples, packaged the materials in uniquely marked, vacuum-tight containers, and returned them to Earth in sterile conditions free from contamination by terrestrial gases. "The Apollo collection comprises some of the most carefully documented geologic material in the world," wrote Allen and colleagues in an article about NASA's extraterrestrial bounty, published in February 2011.
Another NASA mission to return extraterrestrial samples to Earth was the Genesis probe, launched in 2001. The spacecraft circled at one of the Earth–sun Lagrangian points for almost two and a half years, where the combined gravitational pull of the two bodies provides a stable position. The spacecraft's circular arrays gathered solar-wind ions streaming from the sun with wafers made of silicon, germanium and gold-on-sapphire, along with films of diamond-like carbon on silicon and other materials. Despite a crash landing in the Utah desert, which breached the spacecraft's container of over 400 samples of charged particles, some of the collected astromaterials proved pure enough for research. Genesis' data from the captured solar wind helped researchers rethink their understanding of the sun's and early solar system's composition.
Other successful sample return missions include NASA's Stardust spacecraft, which collected dust samples from Comet Wild 2, and the Japan Aerospace Exploration Agency's Hayabusa probe, which gathered asteroid samples in 2010. In the future Allen and his colleagues may preside over samples collected during planned missions to other asteroids and Mars. In addition to providing a window to other worlds and the distant past, these exotic, otherworldly artifacts are simply beautiful.
We’re less than a month away from one of the most highly anticipated Martian landings of all time.
On Aug. 5 (Pacific Time, Aug. 6 Eastern), NASA’s Mars Science Laboratory rover Curiosity will land in Gale Crater. The incredibly sophisticated rover is a mobile laboratory designed to run tests on soil to determine whether or not the Martian environment ever had the conditions to support life.
But in the 1960s, the future of Mars exploration looked very different. In many of the proposals from that era, men would be aboard spacecraft designed to fly by the red planet rather than land on it.
In the 1960s, NASA considered flyby missions almost as readily as it did landing missions. The proposals, like some of the more interesting missions to Venus, came from Bellcom, a division of AT&T established in 1963 to assist NASA with research, development, and overall documentation of systems integration.
A 1966 Bellcomm proposal cites the weight of a spacecraft bound for Mars as the mission’s limiting factor. That’s unsurprising. It takes a lot of fuel to send a spacecraft into orbit, and more to send it to an interplanetary destination.
But the planets can actually lend a hand on these long distance missions. If a spacecraft passes a planet at the right point, its gravity will slingshot the spacecraft away adding momentum to its interplanetary flight. This is how the Voyager spacecraft managed their impressive tours of the outer solar system. The same gravity assist maneuvers can be equally effective in the inner solar system, and while it might seem counter intuitive, Bellcomm engineers found that a mission to Mars could benefit from flying by Venus on its way to the red planet.
A September 1967 proposal outlines a possible triple-flyby mission that would send a spacecraft past both Venus and Mars. Based on the geometry of the planets — taking advantage of optimal alignment — the ideal launch date for this mission was May 26, 1981. The spacecraft would launch towards Venus, reaching the planet on Dec. 28. It would whip around and head for Mars, making its contact on Oct. 5, 1982. The inbound leg of the journey would take it back by Venus on March 1, 1983 before returning to Earth on July 25. The mission would last 790 days.
The launch window for this proposal was 30 days. Launching on another date in the window would change the duration of the mission, making it last anywhere from 720 days to 850 days.
Three-planet flybys were thought to be rare; the 1981 launch window came as a surprise to the Bellcomm engineers. It inspired them to look for similar opportunities and they found that conditions for triple-flybys are actually fairly common. By October 1967, the company had identified a dual-flyby mission, one that would send a spacecraft to Venus then Mars and back to Earth with the option to revisit Venus on the inbound leg. In this scenario, a launch on Nov. 28, 1978 would take the spacecraft by Venus on May 11, 1979, Mars on Nov. 25, 1979, and Venus again on Jan. 29, 1980 before returning to Earth on Jan. 31, 1981.
For possible crews aboard these missions, they would have a long trip likely filled with astronomical observations punctuated by exciting days spent flying by Venus and Mars. Both proposals sent the crew within 1,200 miles of the surface of Venus; in 1970 this would happen on the day side of the planet while the 1980 opportunity would take them into the planet’s dark side. Of course, infrared sensors and mapping radar would work either way. For the engineers and NASA, this was a cost efficient way to send men to Mars.
These kinds of proposals would probably never gain any serious traction in NASA’s current climate, especially not for manned missions. The duration alone would likely draw criticism, though it’s not that much longer than the roughly 500 day mission most direct missions to Mars are expected to take. But a swing by Venus could return valuable data, and give the crew not one but two fascinating sights during their mission.
Image credit: NASA/JPL
Pythagoreanism was the system of esoteric and metaphysical beliefs held by Pythagoras and his followers, the Pythagorean cult, who were considerably influenced by mathematics, music and astronomy. Pythagoreanism originated in the 5th century BCE and greatly influenced Platonism. Later revivals of Pythagorean doctrines led to what is now called Neopythagoreanism.
According to tradition, Pythagoreanism developed at some point into two separate schools of thought:
- the mathēmatikoi (μαθηματικοί, Greek for "learners") and
- the akousmatikoi (ἀκουσματικοί, Greek for "listeners").
The mathēmatikoi were supposed to have extended and developed the more mathematical and scientific work begun by Pythagoras. The mathēmatikoi allowed that the akousmatikoi were Pythagorean, but felt that their own group was more representative of Pythagoras.
The akousmatikoi focused on the more religious and ritualistic aspects of his teachings: they claimed that the mathēmatikoi were not genuinely Pythagorean, but followers of the "renegade" Pythagorean Hippasus.
Pythagorean thought was dominated by mathematics, and it was profoundly mystical. In the area of cosmology there is less agreement about what Pythagoras himself actually taught, but most scholars believe that the Pythagorean idea of the transmigration of the soul is too central to have been added by a later follower of Pythagoras. The Pythagorean conception of substance, on the other hand, is of unknown origin, partly because various accounts of his teachings are conflicting. The Pythagorean account actually begins with Anaximander's teaching that the ultimate substance of things is "the boundless," or what Anaximander called the "apeiron." The Pythagorean account holds that it is only through the notion of the "limit" that the "boundless" takes form.
Pythagoras wrote nothing down, and relying on the writings of Parmenides, Empedocles, Philolaus and Plato (people either considered Pythagoreans, or whose works are thought deeply indebted to Pythagoreanism) results in a very diverse picture in which it is difficult to ascertain what the common unifying Pythagorean themes were. Relying on Philolaus, whom most scholars agree is highly representative of the Pythagorean school, one has a very intricate picture. Aristotle explains how the Pythagoreans (by which he meant the circle around Philolaus) developed Anaximander's ideas about the apeiron and the peiron, the unlimited and limited, by writing that:
... for they [the Pythagoreans] plainly say that when the one had been constructed, whether out of planes or of surface or of seed or of elements which they cannot express, immediately the nearest part of the unlimited began to be drawn in and limited by the limit.
Continuing with the Pythagoreans:
The Pythagoreans, too, held that void exists, and that it enters the heaven from the unlimited breath – it, so to speak, breathes in void. The void distinguishes the natures of things, since it is the thing that separates and distinguishes the successive terms in a series. This happens in the first case of numbers; for the void distinguishes their nature.
When the apeiron is inhaled by the peiron it causes separation, which also apparently means that it "separates and distinguishes the successive terms in a series." Instead of an undifferentiated whole we have a living whole of inter-connected parts separated by "void" between them. This inhalation of the apeiron is also what makes the world mathematical, not just possible to describe using maths, but truly mathematical since it shows numbers and reality to be upheld by the same principle. Both the continuum of numbers (that is yet a series of successive terms, separated by void) and the field of reality, the cosmos — both are a play of emptiness and form, apeiron and peiron. What really sets this apart from Anaximander's original ideas is that this play of apeiron and peiron must take place according to harmonia (harmony), about which Stobaeus commented:
About nature and harmony this is the position. The being of the objects, being eternal, and nature itself admit of divine, not human, knowledge – except that it was not possible for any of the things that exist and are known by us to have come into being, without there existing the being of those things from which the universe was composed, the limited and the unlimited. And since these principles existed being neither alike nor of the same kind, it would have been impossible for them to be ordered into a universe if harmony had not supervened – in whatever manner this came into being. Things that were alike and of the same kind had no need of harmony, but those that were unlike and not of the same kind and of unequal order – it was necessary for such things to have been locked together by harmony, if they are to be held together in an ordered universe.
A musical scale presupposes an unlimited continuum of pitches, which must be limited in some way in order for a scale to arise. The crucial point is that not just any set of limiters will do. One may not simply choose pitches at random along the continuum and produce a scale that will be musically pleasing. The diatonic scale, also known as "Pythagorean," is such that the ratio of the highest to the lowest pitch is 2:1, which produces the interval of an octave. That octave is in turn divided into a fifth and a fourth, which have the ratios of 3:2 and 4:3 respectively and which, when added, make an octave. If we go up a fifth from the lowest note in the octave and then up a fourth from there, we will reach the upper note of the octave. Finally the fifth can be divided into three whole tones, each corresponding to the ratio of 9:8 and a remainder with a ratio of 256:243 and the fourth into two whole tones with the same remainder. This is a good example of a concrete applied use of Philolaus’ reasoning. In Philolaus' terms the fitting together of limiters and unlimiteds involves their combination in accordance with ratios of numbers (harmony). Similarly the cosmos and the individual things in the cosmos do not arise by a chance combination of limiters and unlimiteds; the limiters and unlimiteds must be fitted together in a "pleasing" (harmonic) way in accordance with number for an order to arise.
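The little Python program below verifies this arithmetic with exact fractions; note that "adding" musical intervals corresponds to multiplying their ratios. The interval names are the conventional ones, and the scale layout shown is one common way to arrange the Pythagorean diatonic scale.

```python
from fractions import Fraction

octave = Fraction(2, 1)
fifth  = Fraction(3, 2)
fourth = Fraction(4, 3)
tone   = Fraction(9, 8)
limma  = Fraction(256, 243)          # the "remainder" mentioned in the text

assert fifth * fourth == octave      # a fifth plus a fourth spans the octave
assert tone == fifth / fourth        # the whole tone is the gap between them
assert fifth / tone**3 == limma      # three whole tones leave the 256:243 remainder
assert fourth / tone**2 == limma     # two whole tones leave the same remainder

# One octave of the Pythagorean diatonic scale, built from these intervals:
scale = [Fraction(1), tone, tone**2, fourth, fifth,
         fifth * tone, fifth * tone**2, octave]
print([str(r) for r in scale])       # 1, 9/8, 81/64, 4/3, 3/2, 27/16, 243/128, 2
```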
This teaching was recorded by Philolaus' pupil Archytas in a lost work entitled On Harmonics or On Mathematics, and this is the influence that can be traced in Plato. Plato's pupil Aristotle made a distinction in his Metaphysics between Pythagoreans and "so-called" Pythagoreans. He also recorded the Table of Opposites, and commented that it might be due to Alcmaeon of the medical school at Croton, who defined health as a harmony of the elements in the body.
After attacks on the Pythagorean meeting-places at Croton, the movement dispersed, but regrouped in Tarentum, also in Southern Italy. A collection of Pythagorean writings on ethics collected by Taylor shows a creative response to the troubles.
The legacy of Pythagoras, Socrates and Plato was claimed by the wisdom tradition of the Hellenized Jews of Alexandria, on the ground that their teachings derived from those of Moses. Through Philo of Alexandria this tradition passed into the Medieval culture, with the idea that groups of things of the same number are related or in sympathy. This idea evidently influenced Hegel in his concept of internal relations.
The ancient Pythagorean pentagram was drawn with two points up and represented the doctrine of Pentemychos. Pentemychos means "five recesses" or "five chambers," also known as the pentagonas — the five-angle, and was the title of a work written by Pythagoras' teacher and friend Pherecydes of Syros.
The Pythagoreans are known for their theory of the transmigration of souls, and also for their theory that numbers constitute the true nature of things. They performed purification rites and followed and developed various rules of living which they believed would enable their souls to achieve a higher rank among the gods. Much of their mysticism concerning the soul seems inseparable from the Orphic tradition. The Orphics included various purifactory rites and practices as well as incubatory rites of descent into the underworld. Apart from being linked with this, Pythagoras is also closely linked with Pherecydes of Syros, the man ancient commentators tend to credit as the first Greek to teach a transmigration of souls. Ancient commentators agree that Pherecydes was Pythagoras's most "intimate" teacher. Pherecydes expounded his teaching on the soul in terms of a pentemychos ("five-nooks," or "five hidden cavities") — the most likely origin of the Pythagorean use of the pentagram, used by them as a symbol of recognition among members and as a symbol of inner health (eugieia Eudaimonia).
"Wheel of Birth" and scientific contemplation
The Pythagoreans believed that a release from the "wheel of birth" was possible. They followed the Orphic traditions and practices to purify the soul but at the same time they suggested a deeper idea of what such a purification might be. Aristoxenus said that music was used to purify the soul just like medicine was used to purge the body. But in addition to this, Pythagoreans distinguished three kinds of lives: Theoretic, Practical and Apolaustic. Pythagoras is said to have used the example of Olympic games to distinguish between these three kind of lives. Pythagoras suggests that the lowest class of people who come to the games are the people who come to buy or sell. The next higher class comprises people who come to participate in the games. And the highest class contains people who simply come to look on. Thus Pythagoras suggests that the highest purification of a life is in pure contemplation. It is the philosopher who contemplates about science and mathematics who is released from the "cycle of birth." The pure mathematician's life is, according to Pythagoras, the life at the highest plane of existence.
Thus the root of mathematics and scientific pursuits in Pythagoreanism is also based on a spiritual desire to free oneself from the cycle of birth and death. It is this contemplation about the world that forms the greatest virtue in Pythagorean philosophy.
The Pythagoreans were well known in antiquity for their vegetarianism, which they practised for religious, ethical and ascetic reasons, in particular the idea of metempsychosis – the transmigration of souls into the bodies of other animals. "Pythagorean diet" was a common name for the abstention from eating meat and fish, until the coining of "vegetarian" in the 19th century.
The Pythagorean code further restricted the diet of its followers, prohibiting the consumption or even touching of any sort of bean. It is probable that this is due to their belief in the soul, and the fact that beans obviously showed the potential for life. Various other explanations have been offered: Cicero, for example, pointed to the flatulence beans cause; others suggest protection from potential favism, the beans' resemblance to the kidneys and genitalia, or, most likely, magico-religious reasons, such as the belief that beans and human beings were created from the same material. Most stories of Pythagoras' murder revolve around his aversion to beans. According to legend, enemies of the Pythagoreans set fire to Pythagoras' house, sending the elderly man running toward a bean field, where he halted, declaring that he would rather die than enter the field – whereupon his pursuers slit his throat. Susceptible persons may develop a hemolytic anemia by eating the beans, or even by walking through a field where the plants are in flower.
Views on women
Women were given equal opportunity to study as Pythagoreans, and learned practical domestic skills in addition to philosophy. Women were held to be different from men, but sometimes in good ways. The priestess, philosopher and mathematician Themistoclea is regarded as Pythagoras' teacher; Theano, Damo and Melissa as female disciples. Pythagoras is also said to have preached that men and women ought not to have sex during the summer, holding that winter was the appropriate time.
Neopythagoreanism was a revival in the 2nd century BC – 2nd century CE period of various ideas traditionally associated with the followers of Pythagoras, the Pythagoreans. Notable Neopythagoreans include Nigidius Figulus, Apollonius of Tyana and Moderatus of Gades. Middle and Neo-Platonists such as Numenius and Plotinus also showed some Neopythagorean influence.
They emphasized the distinction between the soul and the body. God must be worshipped spiritually by prayer and the will to be good. The soul must be freed from its material surroundings by an ascetic habit of life. Bodily pleasures and all sensuous impulses must be abandoned as detrimental to the spiritual purity of the soul. God is the principle of good; Matter the groundwork of Evil. The non-material universe was regarded as the sphere of mind or spirit.
In 1915, a subterranean basilica where 1st century Neo-Pythagoreans held their meetings was discovered near Porta Maggiore on Via Praenestina, Rome. The groundplan shows a basilica with three naves and an apse similar to early Christian basilicas that did not appear until much later, in the 4th century. The vaults are decorated with white stuccoes symbolizing Neopythagorean beliefs but its exact meaning remains a subject of debate.
- The Pythagorean idea that whole numbers and harmonic (euphonic) sounds are intimately connected in music, must have been well known to lute-player and maker Vincenzo Galilei, father of Galileo Galilei. While possibly following Pythagorean modes of thinking, Vincenzo is known to have discovered a new mathematical relationship between string tension and pitch, thus suggesting a generalization of the idea that music and musical instruments can be mathematically quantified and described. This may have paved the way to his son's crucial insight that all physical phenomena may be described quantitatively in mathematical language (as physical "laws"), thus beginning and defining the era of modern physics.
- Pythagoreanism has had a clear and obvious influence on the texts found in the hermetica corpus and thus flows over into hermeticism, gnosticism and alchemy.
- The Pythagorean cosmology also inspired the Arab gnostic Monoimus to combine this system with monism and other things to form his own cosmology.
- The pentagram (five-pointed star) was an important religious symbol used by the Pythagoreans, which is often seen as being related to the elements theorized by Empedocles to comprise all matter.
- The Pythagorean school doubtless had a monumental impact on the development of numerology and number mysticism, an influence that still resonates today. For example, it is from the Pythagoreans that the number 3 acquires its modern reputation as the noblest of all digits.
- The Pythagoreans were advised to "speak the truth in all situations," which Pythagoras said he learned from the Magi of Babylon.
- The Pythagorean theory of harmonic ratios is the basis of studies on music theory in the Islamic world, for example al-Farabi's Kitab al-Musiqa al-kabir.
- Pythagorean philosophy had a marked impact on the thought of early modern scholars involved in the Scientific Revolution. Of particular interest is the focus applied to the Platonic Solids, derived by Plato from the Pythagorean theories of geometry and numbers. Leonardo's manuscripts show a fascination with the Platonic Solids, as does the work of Kepler, who supported the Copernican theory of heliocentrism and attempted a theory of the universe based on musical and geometrical harmony.
- Esoteric cosmology
- Incommensurable magnitudes
- Mathematical Beauty
- Orphism (religion)
- Pythagorean tuning
- Sacred geometry
- Unit-point atomism
- On the two schools and these differences, see Charles Kahn, p. 15, Pythagoras and the Pythagoreans, Hackett 2001.
- This is actually a lost book whose contents are preserved in Damascius, de principiis, quoted in Kirk and Raven, The Pre-Socratic Philosophers, Cambridge Univ. Press, 1956, page 55.
- Burnet J. (1892) Early Greek Philosophy A. & C. Black, London, OCLC 4365382, and subsequent editions, 2003 edition published by Kessinger, Whitefish, Montana, ISBN 0-7661-2826-1
- Russell, Bertrand, History of Western Philosophy
- "Vegetarianism". The Oxford Encyclopedia of Food and Drink. OUP. 2004[dead link]
- See for instance the popular treatise by Antonio Cocchi, Del vitto pitagorico per uso della medicina, Firenze 1743, which initiated a debate on the "Pythagorean diet".
- Cicero, On Divination, I xxx 62
- Seife, Charles. Zero p 26
- Gabrielle Hatfield, review of Frederick J. Simoons, Plants of Life, Plants of Death, University of Wisconsin Press, 1999. ISBN 0-299-15904-3. In Folklore 111:317-318 (2000).
- Riedweg, Christoph, Pythagoras: his life, teaching, and influence. Ithaca : Cornell University Press, pp. 39, 70. (2005), ISBN 0-8014-4240-0
- Seife, p 38
- Glenn, Cheryl, Rhetoric Retold: Regendering the Tradition from Antiquity Through the Renaissance. Southern Illinois University, 1997. 30–31.
- Seife, Charles. Zero p. 27
- This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Neopythagoreanism". Encyclopædia Britannica (11th ed.). Cambridge University Press.
- Ball Platner, Samuel. "Basilicae". penelope.uchicago.edu.
- Cohen, Mark, Readings In Ancient Greek Philosophy: From Thales To Aristotle. Indianapolis, IN: Hackett Publishing Company, 2005. 15–20.
- Zammattio, Carlo, "Leonardo The Scientist." Maidenhead, England: Mcgraw-Hill Book Company, 1980. p.98-99
- Koestler, Arthur, "The Sleepwalkers." London, England: Penguin Books, 1959. p.250-251
- Jacob, Frank Die Pythagoreer: Wissenschaftliche Schule, religiöse Sekte oder politische Geheimgesellschaft?, in: Jacob, Frank (Hg.): Geheimgesellschaften: Kulturhistorische Sozialstudien/ Secret Societies: Comparative Studies in Culture, Society and History, Globalhistorische Komparativstudien Bd.1, Comparative Studies from a Global Perspective Vol. 1, Königshausen&Neumann, Würzburg 2013, S.17-34.
- O'Meara, Dominic J. Pythagoras Revived: Mathematics and Philosophy in Late Antiquity , Clarendon Press, Oxford, 1989. ISBN 0-19-823913-0
- Riedweg, Christoph Pythagoras : his life, teaching, and influence ; translated by Steven Rendall in collaboration with Christoph Riedweg and Andreas Schatzmann, Ithaca : Cornell University Press, (2005), ISBN 0-8014-4240-0
- Pythagoreanism Web Article
- Pythagoreanism Discussion Group
- Pythagoreanism Web Site
- Pythagoreanism Web Site
- Pythagoreanism entry by Carl Huffman in the Stanford Encyclopedia of Philosophy | http://en.wikipedia.org/wiki/Pythagoreans | 13 |
11 | Copyright © 1996-2005 jsd
Because the earth is spinning and the air is moving, there are significant Coriolis effects.1 You’ll never understand how weather systems work unless you pay attention to this.
Based on their everyday indoor experience, people think they understand how air behaves:
However, when we consider the outdoor airflow patterns that Mother Nature creates, the story changes completely. In a chunk of air that is many miles across, a mile thick, and a mile away from the surface, there can be airflow patterns that last for hours or days, because there is so much more inertia and so much less friction. During these hours or days, the earth will rotate quite a bit, so Coriolis effects will be very important.
We are accustomed to seeing the rotation of storm systems depicted on the evening news, but you should remember that even a chunk of air that appears absolutely still on the weather map is rotating, because of the rotation of the earth as a whole. Any chunk of air that appears to rotate on the map must be rotating faster or slower than the underlying surface. (In particular, the air in a storm generally rotates faster, not slower.)
Note: In this chapter, I will use the § symbol to indicate words that are correct in the northern hemisphere but which need to be reversed in the southern hemisphere. Readers in the northern hemisphere can ignore the § symbol.
Suppose we start out in a situation where there is no wind, and where everything is in equilibrium. We choose the rotating Earth as our reference frame, which is a traditional and sensible choice. In this rotating frame we observe a centrifugal field, as well as the usual gravitational field, but the air has long ago distributed itself so that its pressure is in equilibrium with those fields.
Then suppose the pressure is suddenly changed, so there is a region where the pressure is lower than the aforementioned equilibrium pressure.
In some cases the low pressure region is roughly the same size in every direction, in which case it is called a low pressure center (or simply a low) and is marked with a big “L” on weather maps. In other cases, the low pressure region is quite long and skinny, in which case it is called a trough and is marked “trof” on the maps. See figure 20.1.
In either case, we have a pressure gradient.2 Each air parcel is subjected to an unbalanced force due to the pressure gradient.
Initially, each air parcel moves directly inward, in the direction of the pressure gradient, but whenever it moves it is subject to large sideways Coriolis forces, as shown in figure figure 20.2. Before long, the motion is almost pure counterclockwise§ circulation around the low, as shown in figure 20.3, and this pattern persists throughout most of the life of the low-pressure region. If you face downwind at locations such as the one marked A, the pressure gradient toward the left§ is just balanced by the Coriolis force to the right§, and the wind blows in a straight line parallel to the trough. At locations such as the one marked B, the pressure gradient is stronger than the Coriolis force. The net force deflects the air.
When explaining the counterclockwise§ circulation pattern, it would be diametrically wrong to think it is “because” the Coriolis force is causing a “leftward§” deflection of the motion. In fact the Coriolis force is always rightward§. In the steady motion, as shown in figure 20.3, the Coriolis force is outward from the low pressure center, partially opposing the pressure gradient. The Coriolis force favors counterclockwise§ motion mainly during the initial infall as shown in figure 20.2.
Not all circulation is counterclockwise§; it is perfectly possible for the air to contain a vortex that spins the other way. It depends on scale: A system the size of a hurricane will always be cyclonic, whereas anything the size of tornado (or smaller) can go either way, depending on how it got started.
Terminology: In the northern hemisphere, counterclockwise circulation is called cyclonic, while in the southern hemisphere clockwise circulation is called cyclonic. So in both hemispheres cyclonic circulation is common, and anticyclonic circulation is less common.
Now we must must account for friction (in addition to the other forces just mentioned). The direction of the frictional force will be opposite to the direction of motion. This will reduce the circulatory velocity. This allows the air to gradually spiral inward.
The unsophisticated idea that air should flow from a high pressure region toward a low pressure region is only correct in the very lowest layers of the atmosphere, where friction is dominant. If it weren’t for friction, the low would never get filled in. At any reasonable altitude, friction is negligible — so the air aloft just spins around and around the low pressure region.
The astute reader may have noticed a similarity between the air in figure 20.2 and the bean-bag in figure 19.14. In one case, something gets pulled inwards and increases its circulatory motion “because” of Coriolis force, and in the other case something gets pulled inwards and increases its circulatory motion “because” of conservation of angular momentum. For a bean-bag, you can analyze it either way, and get the same answer. Also for a simple low-pressure center, you can analyze it either way, and get the same answer. For a trough, however, there is no convenient way to apply the conservation argument.
In any case, please do not get the idea that the air spins around a low partly because of conservation of angular momentum and partly because of the Coriolis force. Those are just two ways of looking at the same thing; they are not cumulative.
As mentioned above, whenever the wind is blowing in a more-or-less straight line, there must low pressure on the left§ to balance the Coriolis force to the right§ (assuming you are facing downwind). In particular, the classic cold front wind pattern (shown in figure 20.4) is associated with a trough, (as shown in figure 20.5). The force generated by the low pressure is the only thing that could set up the characteristic frontal flow pattern.
The wind shift is what defines the existence of the front. Air flows one way on one side of the front, and the other way on the other side (as shown in figure 20.4).
Usually the front is oriented approximately north/south, and the whole system is being carried west-to-east by the prevailing westerlies. In this case, we have the classic cold front scenario, as shown in figure 20.4, figure 20.5, and figure 20.6. Ahead of the front, warm moist air flows in from the south§. Behind the front, the cold dry air flows in from the north§. Therefore the temperature drops when the front passes. In between cold fronts, there is typically a non-frontal gradual warming trend, with light winds.
You can use wind patterns to your advantage when you fly cross-country. If there is a front or a pressure center near your route, explore the winds aloft forecasts. Start by choosing a route that keeps the low pressure to your left§. By adjusting your altitude and/or route you can often find a substantial tailwind (or at least a substantially decreased headwind).
By ancient tradition, any wind that is named for a cardinal
direction is named for the direction from whence it comes.
For example, a south wind (or southerly wind)
blows from south to north.
To avoid confusion, it is better to say “wind from the south” rather than “south wind”.
|Almost everything else is named the other way. For example, an onshore breeze is blowing toward onshore points, while an offshore breeze is blowing toward offshore points. An aircraft on a southerly heading is flying toward the south. Physicists and mathematicians name all vectors by the direction toward which they point.|
|The arrow on a real-life weather vane points upwind, i.e. into the wind.||The arrows on a NOAA “850mb analysis” chart and similar charts point downwind, the way a velocity vector should point.|
A warm front is in many ways the same as a cold front. It is certainly not the opposite of a cold front. In particular, it is also a trough, and has the same cyclonic flow pattern.
A warm front typically results when a piece of normal cold front gets caught and spun backwards by the east-to-west flow just north§ of a strong low pressure center, as shown in figure 20.7. That is, near the low pressure center, the wind circulating around the center is stronger than the overall west-to-east drift of the whole system.
If a warm front passes a given point, a cold front must have passed through a day or so earlier. The converse does not hold — cold front passage does not mean you should expect a warm front a day or so later. More commonly, the pressure is more-or-less equally low along most of the trough. There will be no warm front, and the cold front will be followed by fair weather until the next cold front.
Low pressure — including cold fronts and warm fronts — is associated with bad weather for a simple reason. The low pressure was created by an updraft that removed some of the air, carrying it up to the stratosphere. The air cools adiabatically as it rises. When it cools to its dew point, clouds and precipitation result. The latent heat of condensation makes the air warmer than its surroundings, strengthening the updraft.
The return flow down from the stratosphere (high pressure, very dry descending air, and no clouds) generally occurs over a wide area, not concentrated into any sort of front. There is no sudden wind shift, and no sudden change in temperature. This is not considered “significant weather” and is not marked on the charts at all.
Air shrinks when it gets cold. This simple idea has some important consequences. It affects your altimeter, as will be discussed in section 20.2.4. It also explains some basic facts about the winds aloft, which we will discuss now.
Most non-pilots are not very aware of the winds aloft. Any pilot who has every flown westbound in the winter is keenly aware of some basic facts:
A typical situation is shown in figure 20.8. In January, the average temperature in Vero Beach, Florida, is about 15 Centigrade (59 Fahrenheit), while the average temperature in Oshkosh, Wisconsin is about minus 10 Centigrade (14 Fahrenheit). Imagine a day where surface winds are very weak, and the sea-level barometric pressure is the same everywhere, namely 1013 millibars (29.92 inches of mercury).
The pressure above Vero Beach will decrease with altitude. According to the International Standard Atmosphere (ISA), we expect the pressure to be 697 millibars at 10,000 feet.
Of course the pressure above Oshkosh will decrease with altitude, too, but it will not exactly follow the ISA, because the air is 25 centigrade colder than standard. Air shrinks when it gets cold. In the figure, I have drawn a stack of ten boxes at each site. Each box at VRB contains the same number of air molecules as the corresponding box at OSH.3 The pile of boxes is shorter at OSH than it is at VRB.
The fact that the OSH air column has shrunk (while the VRB air column has not) produces a big effect on the winds aloft. As we mentioned above, the pressure at VRB is 697 millibars at 10,000 feet. In contrast, the pressure at OSH is 672 millibars at the same altitude — a difference of 25 millibars.
This puts a huge force on the air. This force produces a motion, namely a wind of 28 knots out of the west. (Once again, the Coriolis effect is at work: during most of the life of this pressure pattern, the wind flows from west to east, producing a Coriolis force toward the south, which just balances the pressure-gradient force toward the north.) This is the average wind at 10,000 feet, everywhere between VRB and OSH.
More generally, suppose surface pressures are reasonably uniform (which usually the case) and temperatures are not uniform (which is usually the case, especially in winter). If you have low temperature on your left§ and high temperature on your right§, you will have a tailwind aloft. The higher you go, the stronger the wind. This is called thermal gradient wind.
The wind speed will be proportional to the temperature gradient. Above a large airmass with uniform temperature, there will be no thermal gradient wind. However, if there is a front between a warm airmass and a cold airmass, there will be a large temperature change over a short distance, and this can lead to truly enormous winds aloft.
In July, OSH warms up considerably, to about 20 centigrade, while VRB only warms up slightly, to about 25 centigrade. This is why the thermal gradient winds are typically much weaker in summer than in winter — only about 5 knots on the average at 10,000 feet.
In reality, the temperature change from Florida to Wisconsin does not occur perfectly smoothly; there may be large regions of relatively uniform temperature separated by rather abrupt temperature gradients — cold fronts or warm fronts. Above the uniform regions the thermal gradient winds will be weak, while above the fronts they will be much stronger.
For simplicity, the foregoing discussion assumed the sea-level pressure was the same everywhere. It also assumed that the temperature profile above any given point was determined by the surface temperature and the “standard atmosphere” lapse rate. You don’t need to worry about such details; as a pilot you don’t need to calculate your own winds-aloft forecasts. The purpose here is to make the official forecasts less surprising, less confusing, and easier to remember.
Several different notions of “altitude” are used in aviation.
We start with true altitude, which is the simplest. This is what non-pilots think of as “the” altitude or elevation, namely height above sea level, as measured with an accurate ruler. True altitude is labelled MSL (referring to Mean Sea Level). For instance, when they say that the elevation of Aspen is 7820 feet MSL, that is a true altitude.
Before proceeding, we need to introduce the notion of international standard atmosphere (ISA). The ISA is a set of formulas that define a certain temperature and pressure as a function of altitude. For example, at zero altitude, the ISA temperature is 15 degrees centigrade, and the ISA pressure is 1013.25 millibars, or equivalently 29.92126 inches of mercury. As the altitude increases, the ISA temperature decreases at a rate of 6.5 degrees centigrade per kilometer, or very nearly 2 degrees C per thousand feet. The pressure at 18,000 feet is very nearly half of the sea-level pressure, and the pressure at 36,000 feet is somewhat less than one quarter of the sea-level pressure – so you can see the pressure is falling off slightly faster than exponentially. If you want additional details on this, a good place to look is the Aviation Formulary web site.
Remember, the ISA is an imaginary, mathematical construction. However, the formulas were chosen so that the ISA is fairly close to the average properties of the real atmosphere.
Now we can define the notion of pressure altitude. This is not really an altitude; it is just a way of describing pressure. Specifically, you measure the pressure, and then figure out how high you would have to go in the international standard atmosphere to find that pressure. That height is called the pressure altitude. One tricky thing is that low pressure corresponds to high pressure altitude and vice versa.
Pressure altitude (i.e. pressure) is worth knowing for several reasons. For one thing, if the pressure altitude is too high, you will have trouble breathing. The regulations on oxygen usage are expressed in terms of pressure altitude. Also, engine performance is sensitive to pressure altitude (among other factors). Thirdly, at high altitudes, pressure altitude is used for vertical separation of air traffic. This works fine, even though the pressure altitude may be significantly different from the true altitude (because on any given day, the actual atmosphere may be different from the ISA). The point is that two aircraft at the same pressure level will be at the same altitude, and two aircraft with “enough” difference in pressure altitude will have “enough” difference in true altitude.
To determine your pressure altitude, set the Kollsman window on your altimeter to the standard value: 29.92 inches, or equivalently 1013 millibars. Then the reading on the instrument will be the pressure altitude (plus or minus nonidealities, as discussed in section 20.2.3).
This brings us to the subject of calibrated altitude and indicated altitude . At low altitudes – when we need to worry about obstacle clearance, not just traffic separation – pressure altitude is not good enough, because the pressure at any given true altitude varies with the weather. The solution is to use indicated altitude, which is based on pressure (which is convenient to measure), but with most of the weather-dependence factored out. To determine your indicated altitude, obtain a so-called altimeter setting from an appropriate nearby weather-reporting station, and dial it into the Kollsman window on your altimeter. Then the reading on the instrument will be the indicated altitude. (Calibrated altitude is the same thing, but does not include nonidealities, whereas indicated altitude is disturbed by nonidealities of the sort discussed in section 20.2.3.)
The altimeter setting is arranged so that right at the reporting station, calibrated altitude agrees exactly with the station elevation. By extension, if you are reasonably close to the station, your calibrated altitude should be a reasonable estimate of your true altitude ... although not necessarily good enough, as discussed in section 20.2.3 and section 20.2.4).
Next we turn to the notion of absolute altitude. This is defined to be the height above the surface of the earth. Here is a useful mnemonic for keeping the names straight: the Absolute Altitude is what you see on the rAdAr altimeter. Absolute altitude is labelled “AGL” (above ground level). It is much less useful than you might have guessed. One major problem is that there may be trees and structures that stick up above the surface of the earth, and absolute altitude does not account for them. Another problem is that the surface of the earth is uneven, and if you tried to maintain a constant absolute altitude, it might require wild changes in your true altitude, which would play havoc with your energy budget. Therefore the usual practice in general aviation is to figure out a suitable indicated altitude and stick to it.
Another type of altitude is altitude above field elevation, where field means airfield, i.e. airport. This is similar to absolute altitude, but much more widely used. For instance, the traffic-pattern altitude might be specified as 1000 feet above field elevation. Also, weather reports give the ceiling in terms of height above field elevation. This is definitely not the same as absolute altitude, because if there are hills near the field, 1000 feet above the field might be zero feet above the terrain. Altitude above field elevation should be labelled “AFE” but much more commonly it is labelled “AGL”. If the terrain is hilly “AGL” is a serious misnomer.
Finally we come to the notion of density altitude. This is not really an altitude; it is just a way of describing density. The official definition works like this: you measure the density, and then figure out how high you would have to go in the ISA to find that density. That height is called the density altitude. Beware that low density corresponds to high density altitude and vice versa.
Operationally, you can get a decent estimate of the density altitude by measuring the pressure altitude and temperature, and then calculating the density altitude using the graphs or tables in your POH. This is only an estimate, because it doesn’t account for humidity, but it is close enough for most purposes.
Density altitude is worth knowing for several reasons. For one thing, the TAS/CAS relationship is determined by density. Secondly, engine performance depends strongly on density (as well as pressure and other factors). Obviously TAS and engine performance are relevant to every phase of flight – sometimes critically important.
As discussed in the previous section, an aircraft altimeter does not measure true altitude. It really measures pressure, which is related to altitude, but it’s not quite the same thing.
In order to estimate the true altitude, the altimeter depends two factors: the pressure, and the altimeter setting in the Kollsman window. The altimeter setting is needed to correct for local variations in barometric pressure. You should set this on the runway before takeoff, and for extended flights you should get updated settings via radio. If you neglect this, you could find yourself at a too-low altitude, if you fly to a region where the barometric pressure is lower. The mnemonic is: “High to low, look out below”.
Altimeters are not perfect. Even if the altimeter and airplane were inspected yesterday, and found to be within tolerances,
The first item could be off in either direction, but the other items will almost certainly be off in the bad direction when you are descending. Also, if the airplane has been in service for a few months since the last inspection, the calibration could have drifted a bit. All in all, it would be perfectly plausible to find that your altimeter was off by 50 feet when parked on the ground, and off by 200 feet in descending flight over hilly terrain.
The altimeter measures a pressure and converts it to a so-called altitude. The conversion is based on the assumption that the actual atmospheric pressure varies with altitude the same way the the standard atmosphere would. The pressure decreases by roughly 3.5% per thousand feet, more or less, depending on temperature.
The problem is that the instrument does not account for nonstandard temperature. Therefore if you set the altimeter to indicate correctly on the runway at a cold place, it will be inaccurate in flight. It will indicate that you are higher than you really are. This could get you into trouble if you are relying on the altimeter for terrain clearance. The mnemonic is HALT — High Altimeter because of Low Temperature.
As an example: Suppose you are flying an instrument approach into Saranac Lake, NY, according to the FAA-approved “Localizer Runway 23” procedure. The airport elevation is 1663 feet. You obtain an altimeter setting from the airport by radio, since you want your altimeter to be as accurate as possible when you reach the runway.
You also learn that the surface temperature is −32 Centigrade, which is rather cold but not unheard-of at this location. That means the atmosphere is about 45 C colder than the standard atmosphere. That in turn means the air has shrunk by about 16%. Throughout the approach, you will be too low by an amount that is 16% of your height above the airport.
The procedure calls for crossing the outer marker at 3600 MSL and then descending to 2820 MSL, which is the Minimum Descent Altitude. That means that on final approach, you are supposed to be 1157 feet above the airport. If you blindly trust your altimeter, you will be 1157 “shrunken feet” above the airport, which is only about 980 real feet. You will be 180 real feet (210 shrunken feet) lower than you think. To put that number in perspective, remember that localizer approaches are designed to provide only 250 feet of obstacle clearance.4
You must combine this HALT error with the ordinary altimetry errors discussed in section 20.2.3. The combination means you could be 400 feet lower than what the altimeter indicates — well below the protected airspace. You could hit the trees on Blue Hill, 3.9 nm northeast of the airport.
Indeed, you may be wondering why there haven’t been lots of crashes already – especially since the Minimum Descent Altitude used to be lower (1117 feet, until mid-year 2001). Possible explanations include:
Even if people don’t “usually” crash, we still need to do something to increase the margin for error.
There is an obvious way to improve the situation: In cold weather, you need to apply temperature compensation to all critical obstacle-clearance altitudes.
You can do an approximate calculation in your head: If it’s cold, add 10%. If it’s really, really cold, add 20%. Approximate compensation is a whole lot better than no compensation.
The percentages here are applied to the height above the field, or, more precisely, to the height above the facility that is giving you your altimeter setting. In the present example, 20% of 1157 is about 230. Add that to 2820 to get 3050, which is the number you want to see on your altimeter during final approach. Note that this number, 3050, represents a peculiar mixture: 1663 real feet plus 1387 shrunken feet.
For better accuracy, you can use the following equation. The indicated altitude you want to see is:
|Ai = F + (Ar−F)|
In this formula, F is the facility elevation, Ar is the true altitude you want to fly (so Ar−F is the height above the facility, in real feet), λ is the standard lapse rate (2 ∘C per thousand feet), Tf is the temperature at the facility, 273.15 is the conversion from Centigrade to absolute temperature (Kelvin), and 15 C = 288.15 K is the sea-level temperature of the standard atmosphere. The denominator (273.15 + Tf) is the absolute temperature observed at the facility, while the numerator (288.15−λ F) is what the absolute temperature would be in standard conditions.
You might want to pre-compute this for a range of temperatures, and tabulate the results. An example is shown in table 20.1. Make a row for each of the critical altitudes, not just the Minimum Descent Altitude. Then, for each flight, find the column that applies to the current conditions and pencil-in each number where it belongs on the approach plate.
|Facility Temp, ∘C||12||0||−10||−20||−30||−40|
|Crossing Outer Marker||3600||3680||3760||3840||3940||4020|
|Minimum Descent Alt||2820||2860||2920||2960||3020||3080|
It is dangerously easy to get complacent about the temperature compensation. You could live in New Jersey for years without needing to think about it – but then you could fly to Saranac Lake in a couple of hours, and get a nasty surprise.
The HALT corrrection is important whenever temperatures are below standard and your height above the terrain is a small fraction of your height above the facility that gave you your altimeter setting. This can happen enroute or on approach:
A parcel of air will have less density if it has
If a parcel of air is less dense than the surrounding air, it will be subject to an upward force.5
As everyone knows, the tropics are hotter and more humid than the polar regions. Therefore there tends to be permanently rising air at the equator, and permanently sinking air at each pole.6 This explains why equatorial regions are known for having a great deal of cloudy, rainy weather, and why the polar regions have remarkably clear skies.
You might think that the air would rise at the equator, travel to the poles at high altitude, descend at the poles, and travel back to the equator at low altitude. The actual situation is a bit more complicated, more like what is shown in figure 20.9. In each hemisphere, there are actually three giant cells of circulation. Roughly speaking, there is rising air at the equator, descending air at 25 degrees latitude, rising air at 55 degrees latitude, and descending air at the poles. This helps explain why there are great deserts near latitude 25 degrees in several parts of the world.
The three cells are named as follows: the Hadley cell (after the person who first surmised that such things existed, way back in 1735), the Ferrel cell, and the polar cell. The whole picture is called the tricellular theory or tricellular model. It correctly describes some interesting features of the real-world situation, but there are other features that it does not describe correctly, so it shouldn’t be taken overly-seriously.
You may be wondering why there are three cells in each hemisphere, as opposed to one, or five, or some other number. The answer has to do with the size of the earth (24,000 miles in circumference), its speed of rotation, the thickness of the atmosphere (a few miles), the viscosity of the air, the brightness of the sun, and so forth. I don’t know how to prove that three is the right answer — so let’s just take it as an observed fact.
Low pressure near 55 degrees coupled with high pressure near 25 degrees creates a force pushing the air towards the north§ in the temperate regions. This force is mostly balanced by the Coriolis force associated with motion in the perpendicular direction, namely from west to east. As shown in figure 20.10, these are the prevailing westerlies that are familiar to people who live in these areas.
According to the same logic, low pressure near the equator coupled with high pressure near 25 degrees creates a force toward the equator. This force is mostly balanced by the Coriolis force associated with motion from east to west. These are the famous trade winds, which are typically found at low latitudes in each hemisphere, as shown in figure 20.10.
In days of old, sailing-ship captains would use the trade winds to travel in one direction and use the prevailing westerlies to travel in the other direction. The regions in between, where there was sunny weather but no prevailing wind, were named the horse latitudes. The region near the equator where there was cloudy weather and no prevailing wind was called the doldrums .
The boundaries of these great circulatory cells move with the sun. That is, they are found in more northerly positions in July and in more southerly positions in January. In certain locales, this can produce a tremendous seasonal shift in the prevailing wind, which is called a monsoon.7
Now let us add a couple more facts:
As a consequence, in temperate latitudes, we find that in summer, the land is hotter than the ocean (other things, such as latitude, being constant), whereas in winter the land is colder than the ocean.
This dissimilar heating of land and water creates huge areas of low pressure, rising air, and cyclonic flow over the oceans in winter, along with a huge area of high pressure and descending air over Siberia. Conversely there are huge areas of high pressure, descending air, and anticyclonic flow over the oceans in summer.
These continental / oceanic patterns are superimposed on the primary circulation patterns. In some parts of the world, one or the other is dominant. In other parts of the world, there is a day-by-day struggle between them.
Very near the surface (where friction dominates), air flows from high pressure to low pressure, just as water flows downhill. Meanwhile, in the other 99% of the atmosphere (where Coriolis effects dominate) the motion tends to be perpendicular to the applied force. The air flows clockwise§ around a high pressure center and counterclockwise§ around a low pressure center, cold front, or warm front.
Although trying to figure out all the details of the atmosphere from first principles is definitely not worth the trouble, it is comforting to know that the main features of the wind patterns make sense. They do not arise by magic; they arise as consequences of ordinary physical processes like thermal expansion and the Coriolis effect.
If you really want to know what the winds are doing at 10,000 feet, get the latest 700 millibar constant pressure analysis chart and have a look. These charts used to be nearly impossible for general-aviation pilots to obtain, but the situation is improving. Now you can get them by computer network or fax. On a trip of any length, this is well worth the trouble when you think of the time and fuel you can save by finding a good tailwind.
A few rules of thumb: eastbound in the winter, fly high. Westbound in the winter, fly lower. In the summer, it doesn’t matter nearly as much. In general, try to keep low pressure to your left§ and high pressure to your right§. | http://www.av8n.com/how/htm/atmo.html | 13 |
11 | Think about a difficult decision you have had to make. After you decided did it work out? Why or why not? Why do you think decisions and choices are hard to make? We make personal decisions and we make decisions as groups. There is a tool you can use to improve your decision making that will help you reach a better outcome.
- Explore the P.A.C.E.D. decision-making model by using it to complete a personal decision. They will also use the model for a group decision activity.
- Practice using the model to make choices within a budget.
- Discover that all resources are scarce and as a result, choices must be made.
- Recognize that when choices are made, something is given up.
- Develop skills using the decision making model in order to improve students' ability to make reasoned decisions.
Ask the students to think about a difficult decision they have had to make. Ask the students the decisions they settled on turned out to be satisfactory. Why or why not? Why do you think decisions and choices are hard to make? Tell the students that we make personal decisions and we make decisions as groups.
Tell the students that children make many decisions every day. How to use an hour of "free" time after school – for homework, play with friends or watch television. Most students have practiced listing alternatives because they are frequently asked, "what are your choices?" Most have never heard of or used criteria in their decision making process. Criteria will help to clarify what factors are important (priorities) – which are most important to you. When criteria are added to the process, they can contribute toward making a better or more informed decision. Tell the students that when they learn the P.A.C.E.D model, they can develop skills in stating the problem (narrowing the scope), they are choosing to solve, explore alternatives, decide what is important to them (criteria) and establishing a format for using criteria to evaluate each alternative.
[Note to teacher: The P.A.C.E.D. Model is a process that provides a structure for decision making the steps are as follows: 1.Define the Problem, 2.List the Alternatives, 3. Identify your criteria, 4.Evaluate the alternatives using the criteria, and 5. Make a decision
Tell the students that the result is often unclear and they may wish to review: Is the problem stated narrow enough to enable them to work towards a solution? Are there any additional alternatives that might be missing? Ask the students if the most important criteria they have stated perhaps be "weighted" in importance with some being twice as important?]
- Practice Activity: Teach the students the P.A.C.E.D. model and then divide the students into groups of twos or threes. When they have finished reading, have them answer the bottom questions and share the results with the group. Did everyone make the same choices? How many of each item were chosen? Have the students explain their choices to the group.
- Family Decision: In this activity students will use the PACED model and make a decision to a family problem.
Decision Making Model
- Matching Activity: This is a quick matching exercise to let you determine whether the students can place the steps in the correct order.
- Your Choice Activity: Students will try the model on their own and make a decision about a "gift certificate." You may choose to use this as a homework assignment. You may also want to reserve this for a closing activity to review the student’s ability to use the model.
Your Choice Activity
- Mission Possible Activity: Divide the students into small groups and give them the "Mission Possible" handout. On the list are items they might desire for their classroom. Four "wild" spaces are provided so that they may add other items. Before they can add other items, they must investigate the approximate yearly cost of the item and check with the teacher to identify it as a) allowed in school, b) not dangerous, c) priced accurately. Before they begin, have them state their problem, and identify the criteria using the choice handout.
Mission Possible Handout
[Note to teacher: Be sure and modify any of the activities to fit your special situation. You may modify the group learning activity by changing the items on the list to be of special interest to your students, perhaps adding a loft, bean bag chairs, games, special books or sports equipment.]
In order to teach this lesson, you want to emphasize the following points – below is a brief review to use in helping to prepare for your class.
- Name something you wanted last year, but no longer want.
- What items do you want now that you didn't want last year?
- Why do wants change?
- Do you ever seem to run out of wants?
If we had unlimited dollars, could all of us purchase everything we wanted?
- What are resources? [Human, Capital, Entrepreneurial skills, Natural resources]
- There is a limit to the amount of resources we have available to produce goods and services we all desire. Because of the limit, we all have to make choices.
- Scarcity: Scarcity is the result of unlimited wants and limited resources colliding.
Every resource has an alternative use. An opportunity cost is the next best alternative that is given up when a choice is made. Example if you have an hour after school, you might be able to a) play a game, b) play with friends, c) do homework or d) read a book. Suppose you rank your choices from highest use to lowest: read a book, play with friends, play a game, do homework. Your first choice would be then to spend the hour reading a book. Your opportunity cost is not all the choices given up, but the next best alternative – in this case it would be playing with friends. Playing with friends would be your opportunity cost.
- Define the Problem
- List the Alternatives
- Identify your criteria
- Evaluate the alternatives using the criteria
- Make a decision
When the students have completed the process:
- Have them compile a list.
- Check to see if they are within their budget.
- Identify which items were picked the most often; also identify the reasons for these selections.
- Have students share their guiding criteria.
Learn the P.A.C.E.D. model and then divide into groups of twos or threes. When you have finished reading, answer the bottom questions and share the results with the group. Did everyone make the same choices? How may of each item were chosen? Explain your choices to the group.
Your Choice Activity:
Complete the model on your own and make a decision about a "gift certificate."
This is a quick matching exercise to let you determine whether you can place the steps in the correct order.
Mission Possible Activity:
Split into small groups and complete the "Mission Possible" handout. On the list are items you might desire for your classroom. Four "wild" spaces are provided so that you may add other items. Before you can add other items, you must investigate the approximate yearly cost of the item and check with the teacher to identify it as a) allowed in school, b) not dangerous, c) priced accurately. Before you begin, state your problem and identify the criteria using the choice handout.
Tell the students that most of them have practiced thinking of "alternatives" to the choices they are making, but the decision making process is generally based on their limited experience or impulse. Explain that the addition of criteria will help them think through the important limitations as well as what really is important. Tell the students that they might want to pose several questions or problems so that their skills can be practiced. Of course, the first time or two will take the most time, but soon it will become a familiar tool. Tell the students that they may differ from one another in the criteria they use. Criteria may not be the same even when they are working through the same problem.
- Ask students to fill in the acronym of P.A.C.E.D. and write a brief explanation of each step.
- Assign them to take home a decision-making problem to solve with their family using the Decision Making Model.
“Clear, concise instructions with internet infusion into the lesson plan. I like it!”
“This was excellent. Very clear and precise.”
“I thought the lesson was thorough and explored all aspects of the P.A.C.E.D. model. I liked the conclusion where you tell the students that the more you use the model, the more familiar with it they become. They will begin to use this decision model in lots of other ways as well. I thought the activities were geared for upper 4th and 5th graders. All handouts are good but geared for 4th and up. I could even use this lesson with middle school kids. This is a very timely lesson.”
Review from EconEdReviews.org
“ This lesson is very timely in today's society. It seems wants and needs are confused by kids. This lesson helps them determine which is which not only in money matters, but in what's important in other areas of life. If you can get kids accustomed to using the P.A.C.E.D. DECISION MODEL, they can apply it in all sorts of areas in their lives. The worksheets are too difficult for third graders, but the fourth to sixth graders would do very well with all the concepts here. I am not sure that outfitting a classroom will engage children, but it provides some work with technology which is great. The background information will help a novice in using economic terms as well as remind the experienced economic teacher what background knowledge to review when using the model. I found this lesson to be a must when teaching wants and needs. The information on opportunity costs could be expanded a bit more, especially if the concept is a new one to students. I would add some activities that help students find the opportunity costs in their decision-making everyday. An example would be what to wear in the morning, or which shoes to wear that day. Showing that decisions are something we all do everyday. This lesson could be adapted to the younger third graders up to the higher grades of fifth or even sixth easily enough. Great lesson!! ”
Review from EconEdReviews.org
“Having the students work together is great and teaching them the PACED decision making, can help them later. This is a grid that can be used anytime. ” | http://www.econedlink.org/lessons/index.php?lid=396&type=educator | 13 |
21 | In mathematics (more specifically geometry), a semicircle is a two-dimensional geometric shape that forms half of a circle. Being half of a circle's full angle, the arc of a semicircle, called a straight angle always measures 180°, π radians, or a half-turn.
A semicircle can be used to construct the arithmetic and geometric means of two lengths using straight-edge and compass. If we make a semicircle with a diameter of a+b, then the length of its radius is the arithmetic mean of a and b (since the radius is half of the diameter). The geometric mean can be found by dividing the diameter into two segments of lengths a and b, and then connecting their common endpoint to the semicircle with a segment perpendicular to the diameter. The length of the resulting segment is the geometric mean, which can be proved using the Pythagorean theorem. This can be used to accomplish quadrature of a rectangle (since a square whose sides are equal to the geometric mean of the sides of a rectangle has the same area as the rectangle), and thus of any figure for which we can construct a rectangle of equal area, such as any polygon (but not a circle). | http://en.wikipedia.org/wiki/Semicircle | 13 |
15 | Unlike spiral galaxies, elliptical galaxies are not supported by rotation. The orbits of the constituent stars are random and often very elongated, leading to a shape for the galaxy determined by the speed of the stars in each direction. Faster moving stars can travel further before they are turned back by gravity, resulting in the creation of the long axis of the elliptical galaxy in the direction these stars are moving.
The size of an elliptical galaxy is measured as an effective radius which corresponds to the size of a circle encompassing half of the light coming from the galaxy. Measurements reveal that elliptical galaxies come in a large range of sizes, from the rare giant ellipticals found in the centres of galaxy clusters and stretching over hundreds of kiloparsecs, to the very common dwarf ellipticals which may have diameters as small as 0.3 kiloparsecs.
Not surprisingly, the luminosities and masses of elliptical galaxies also range enormously. Giant ellipticals can be 1013 times as bright as the Sun and contain up to 1013 solar masses of stars. At the other extreme, dwarf ellipticals are faint (105 times as bright as the Sun) and contain as little as 107 solar masses of stars. In some cases, the density of stars in a dwarf elliptical can be so low that we can see straight through the galaxy!
The above comparisons encompass the different types of elliptical galaxy, however, it should be noted that astronomers are not sure if dwarf ellipticals, ellipticals and giant ellipticals form a continuous physical sequence. This is partially reflected in our theories of how the different galaxies formed.
Giant elliptical galaxies are generally thought to be the result of galaxy mergers. Ordinary elliptical galaxies may also form in this manner, or may have formed from the gravitational collapse of an interstellar gas cloud. In this case, a rapid burst of star formation would convert almost all the gas into stars simultaneously, leaving nothing to form a disk. Dwarf ellipticals may also form in this manner, but others have suggested that they form out of the leftover material of a larger galaxy or in the tidal tails of interacting galaxies.
Whatever the formation mechanism, all of the different types of elliptical galaxies contain significantly less dust and gas than spiral galaxies and irregular galaxies, and certainly not enough to support much star formation at present times. For this reason they are now observed to consist primarily of old, red, population II stars. | http://astronomy.swinburne.edu.au/cms/astro/cosmos/E/Elliptical+Galaxy | 13 |
28 | Discovery of a Volcanic Landscape
Venus is the closest planet to Earth. However, the surface of Venus is obscured by several layers of thick cloud cover.
These clouds are so thick and so persistent that optical telescope observations from Earth are unable to produce clear images of the planet's surface features.
The first detailed information about the surface of Venus was obtained until the early 1990s when the Magellan
spacecraft (also known as the Venus Radar Mapper) used radar imaging to produce detailed topography data for most
of the planet's surface.
That data was used to create images of Venus such as the one shown at the top of the right column.
Researchers expected the topography data to reveal volcanic features on Venus but they were surprised to learn that at least 90% of the
planet's surface was covered by lava flows and broad shield volcanoes. They were also surprised that these
volcanic features on Venus were enormous in size when compared to similar features on Earth.
Enormous Shield Volcanoes
The Hawaiian Islands are often used as examples of large shield volcanoes on Earth. These volcanoes are on the order
of 120 kilometers wide at the base and about 8 kilometers in height. They would be among the tallest volcanoes on
Venus; however, they would not be competitive in width. Large shield volcanoes on Venus are an impressive 700 kilometers
wide at the base but are only about 5.5 kilometers in height.
Olympus Mons: The Largest Shield Volcano on Mars
In summary, the large shield volcanoes on Venus are several times as wide as those on Earth and they have a much gentler
slope. A relative size comparison of volcanoes on the two planets is given in the graphic below - which has a
vertical exaggeration of about 25x.
|This graphic compares the geometry of a large shield volcano from Venus with a large shield volcano from Earth. Shield volcanoes on Venus are usually very broad at the base and have gentler slopes than the shield volcanoes found on Earth. VE=~25
Extensive Lava Flows
Lava flows on Venus are thought to be composed of rocks that are similar to the basalts found on earth. Many of the lava
flows on Venus have lengths of several hundred kilometers. The lava's mobility might be enhanced by the planet's average
surface temperature of about 470 degrees Celsius.
The image of Sapas Mons volcano, in the right column of this page, contains many excellent examples of long lava flows on
Venus. The radial appearance of the volcano is produced by long lava flows extending from the two vents at the peak and from numerous flank eruptions.
Venus has a large number of features that have been called "pancake domes". These are similar to lava domes found on Earth,
but on Venus they are up to 100 times as large. Pancake domes are very broad, with a very flat top and are usually less
than 1000 meters in height. They are thought to form by the extrusion of viscous lava.
|Radar image of three pancake domes on the left and a geologic map of the same area on the right. Anyone interested in learning about the surface features of Venus can obtain radar images from NASA and compare them with geologic maps prepared by USGS.|
When Did the Volcanoes on Venus Form?
Most of the surface of Venus is covered by lava flows that have a very low impact cratering density. This low impact density reveals
that the planets surface is mostly less than 500,000,000 years old. Volcanic activity on Venus can not be detected from Earth but
enhanced radar imaging from the Magellan spacecraft suggest that volcanic activity on Venus still occurs. (see image below)
|Radar images of Idunn Mons Volcano in the Imdr Regio region of Venus. The image on the left is a radar topography image with a vertical exaggeration of about 30x. The image on the right is color-enhanced based upon thermal imaging spectrometer data. The red areas are warmer and thought to be evidence of recent lava flows. Image by NASA.
Other Processes that Shape the Surface of Venus
Asteroid impacts have produced many craters on the surface of Venus. Although these features are numerous they
do not cover more than a few percent of the planet's surface. The resurfacing of Venus with lava flows that is
thought to have occurred about 500,000,000 years ago took place after impact cratering of planets in our solar
system had fallen to a very low level.
Map of Earth's Asteroid Impacts
EROSION AND SEDIMENTATION
The surface temperature of Venus is about 470 degrees Celsius -- much too high for liquid water. Without water, stream
erosion and sedimentation are unable to make significant modifications to the surface of the planet. The only erosional
features observed on the planet have been attributed to flowing lava.
WIND EROSION AND DUNE FORMATION
The atmosphere of Venus is thought to be about 90 times as dense as Earth's. Although this limits wind activity some
dune-shaped features have been identified on Venus. However, the available images do not show wind-modified landscapes covering a significant portion of the planet's surface.
Plate tectonic activity on Venus has not been clearly identified. Plate boundaries have not been recognized. Radar images
and geologic maps produced for the planet do not show linear volcano chains, spreading ridges, subduction zones and transform
faults that provide evidence of plate tectonics on Earth.
Volcanic activity is the dominant process for shaping the landscape of Venus with over 90% of the planet's surface being
covered by lava flows and shield volcanoes.
The shield volcanoes and lava flows on Venus are very large in size when compared to similar features on Earth.
Contributor: Hobart King
New Articles from Geology.com
|Sunstone: a feldspar with aventurescence caused by light reflecting from platy inclusions.||
|Salt Domes are columns of salt that move upwards because of the salt's low specific gravity.||
|What is a Maar? The second most common volcanic landscape feature on Earth.
|A simulated color image of the surface of Venus created by NASA using radar topography data acquired by the Magellan spacecraft. Enlarged views at 900 x 900 pixels or 4000 x 4000 pixels.|
|A simulated color image of Sapas Mons volcano, located on the Atla Regio rise near the equator of Venus. The volcano is about 400 kilometers across and about 1.5 kilometers high. The radial appearance of the volcano at this scale is caused by hundreds of overlapping lava flows - some originating from one of the two summit vents but most originating from flank eruptions. Image created by NASA using radar topography data acquired by the Magellan spacecraft. Enlarged views at 900 x 900 pixels or 3000 x 3000 pixels.
|An oblique view of Sapas Mons volcano, the same volcano shown in the vertical view above. This image views the volcano from the northwest. Features visible in this image can easily be matched to the vertical view above. Lava flows several hundred kilometers in length appear as narrow channels on the flanks of the volcano and spread into broad flows on the plain that surrounds the volcano. Image by NASA. Enlarge image.
|USGS has produced detailed geologic maps for many areas of Venus. These maps have descriptions and correlation charts for the mapped units. They also include symbols for faults, lineaments, domes, craters, lava flow directions, ridges, grabens and many other features. These can be paired with NASA radar images to learn about volcanoes and other surface features of Venus.|
|Information for Volcanoes on Venus|
NASA Image Gallery of Venus, a searchable collection of images that can be downloaded, NASA, accessed January 2013.
USGS Geologic Maps of Venus, a collection of maps in .pdf format, USGS, accessed January 2013.
Volcano Sapas Mons, images and information about the volcano from the Magellan spacecraft program, NASA, 1996.
Venus Global View, computer simulated global view of Venus from the Magellan spacecraft program, NASA, 1996.
NASA-Funded Research Suggests Venus is Geologically Alive, article about recent volcanism on Venus, NASA, 2010.
Volcanoes on Venus, overview article from the Volcano World collection, Oregon State University, 2005. | http://geology.com/stories/13/venus-volcanoes/ | 13 |
Please note: None of the illustrations of the standards for mathematical practice are complete. What is available here is a first draft; please send us your feedback to [email protected].
The Standards for Mathematical Practice describe varieties of expertise that mathematics educators at all levels should seek to develop in their students. These practices rest on important “processes and proficiencies” with longstanding importance in mathematics education. The first of these are the NCTM process standards of problem solving, reasoning and proof, communication, representation, and connections. The second are the strands of mathematical proficiency specified in the National Research Council's report Adding It Up: adaptive reasoning, strategic competence, conceptual understanding (comprehension of mathematical concepts, operations and relations), procedural fluency (skill in carrying out procedures flexibly, accurately, efficiently and appropriately), and productive disposition (habitual inclination to see mathematics as sensible, useful, and worthwhile, coupled with a belief in diligence and one's own efficacy).
"Does this make sense?"
Mathematically proficient students start by explaining to themselves the meaning of a problem and looking for entry points to its solution. They analyze givens, constraints, relationships, and goals. They make conjectures about the form and meaning of the solution and plan a solution pathway rather than simply jumping into a solution attempt. They consider analogous problems, and try special cases and simpler forms of the original problem in order to gain insight into its solution. They monitor and evaluate their progress and change course if necessary.
Older students might, depending on the context of the problem, transform algebraic expressions or change the viewing window on their graphing calculator to get the information they need. Mathematically proficient students can explain correspondences between equations, verbal descriptions, tables, and graphs or draw diagrams of important features and relationships, graph data, and search for regularity or trends.
Younger students might rely on using concrete objects or pictures to help conceptualize and solve a problem. Mathematically proficient students check their answers to problems using a different method, and they continually ask themselves, “Does this make sense?” They can understand the approaches of others to solving complex problems and identify correspondences between different approaches.
This video shows an excerpt of a conversation between two students comparing approaches to solving a problem and trying to understand why they got different answers and where one of them made an error.
"The ability to contextualize and decontextualize."
Mathematically proficient students make sense of quantities and their relationships in problem situations. They bring two complementary abilities to bear on problems involving quantitative relationships: the ability to decontextualize—to abstract a given situation and represent it symbolically and manipulate the representing symbols as if they have a life of their own, without necessarily attending to their referents—and the ability to contextualize, to pause as needed during the manipulation process in order to probe into the referents for the symbols involved.
Quantitative reasoning entails habits of creating a coherent representation of the problem at hand; considering the units involved; attending to the meaning of quantities, not just how to compute them; and knowing and flexibly using different properties of operations and objects.
These videos show a student solving a problem using an approach that reflects the knowledge and skill described in 4.OA.3, and another student solving the same problem using the knowledge and skill described in 8.EE.8. In both cases, the students move fluidly between the context and the mathematics and back again.
"Distinguish correct logic or reasoning from that which is flawed."
Mathematically proficient students understand and use stated assumptions, definitions, and previously established results in constructing arguments. They make conjectures and build a logical progression of statements to explore the truth of their conjectures. They are able to analyze situations by breaking them into cases, and can recognize and use counterexamples. They justify their conclusions, communicate them to others, and respond to the arguments of others. They reason inductively about data, making plausible arguments that take into account the context from which the data arose.
Mathematically proficient students are also able to compare the effectiveness of two plausible arguments, distinguish correct logic or reasoning from that which is flawed, and—if there is a flaw in an argument—explain what it is. Elementary students can construct arguments using concrete referents such as objects, drawings, diagrams, and actions. Such arguments can make sense and be correct, even though they are not generalized or made formal until later grades. Later, students learn to determine domains to which an argument applies. Students at all grades can listen or read the arguments of others, decide whether they make sense, and ask useful questions to clarify or improve the arguments.
"Analyze relationships mathematically."
Mathematically proficient students can apply the mathematics they know to solve problems arising in everyday life, society, and the workplace. In early grades, this might be as simple as writing an addition equation to describe a situation. In middle grades, a student might apply proportional reasoning to plan a school event or analyze a problem in the community. By high school, a student might use geometry to solve a design problem or use a function to describe how one quantity of interest depends on another.
Mathematically proficient students who can apply what they know are comfortable making assumptions and approximations to simplify a complicated situation, realizing that these may need revision later. They are able to identify important quantities in a practical situation and map their relationships using such tools as diagrams, two-way tables, graphs, flowcharts and formulas. They can analyze those relationships mathematically to draw conclusions. They routinely interpret their mathematical results in the context of the situation and reflect on whether the results make sense, possibly improving the model if it has not served its purpose.
In elementary school, students begin to think about how numbers and operations can describe the world. Deciding which operations apply to a particular context, and why, is an important step toward being able to do increasingly sophisticated modeling problems in later grades: 3.OA Analyzing Word Problems Involving Multiplication
In order to model with mathematics, students need to make simplifying assumptions about a context. It is important for students to have opportunities to do this in very familiar contexts before they are asked to do the more complex task of making such assumptions (based on appropriate research) for unfamiliar or scientifically complex contexts, as they will be asked to do in high school: 7.RP Sale!
Full-blown modeling tasks require many different skills, including sifting through information and deciding what is relevant, interpreting graphs, locating information needed to solve a problem, and making simplifying assumptions. Students need opportunities to work on these skills a few at a time as well as in concert: N-Q Ice Cream Van
"Explore and deepen understanding of concepts using tools."
Mathematically proficient students consider the available tools when solving a mathematical problem. These tools might include pencil and paper, concrete models, a ruler, a protractor, a calculator, a spreadsheet, a computer algebra system, a statistical package, or dynamic geometry software. Proficient students are sufficiently familiar with tools appropriate for their grade or course to make sound decisions about when each of these tools might be helpful, recognizing both the insight to be gained and their limitations.
For example, mathematically proficient high school students analyze graphs of functions and solutions generated using a graphing calculator. They detect possible errors by strategically using estimation and other mathematical knowledge. When making mathematical models, they know that technology can enable them to visualize the results of varying assumptions, explore consequences, and compare predictions with data. Mathematically proficient students at various grade levels are able to identify relevant external mathematical resources, such as digital content located on a website, and use them to pose or solve problems. They are able to use technological tools to explore and deepen their understanding of concepts.
This video shows students using concrete models to solve a problem that aligns with 4.OA.3.
Mathematically proficient students try to communicate precisely to others. They try to use clear definitions in discussion with others and in their own reasoning. They state the meaning of the symbols they choose, including using the equal sign consistently and appropriately. They are careful about specifying units of measure, and labeling axes to clarify the correspondence with quantities in a problem. They calculate accurately and efficiently, and express numerical answers with a degree of precision appropriate for the problem context. In the elementary grades, students give carefully formulated explanations to each other. By the time they reach high school they have learned to examine claims and make explicit use of definitions.
This video shows students using concrete models to solve a problem that aligns with 4.NF.3.c.
"Shift perspectives to discern a pattern or structure."
Mathematically proficient students look closely to discern a pattern or structure. Young students, for example, might notice that three and seven more is the same amount as seven and three more, or they may sort a collection of shapes according to how many sides the shapes have. Later, students will see $7\times8$ equals the well remembered $7\times5+7\times3$, in preparation for learning about the distributive property. In the expression $x^2 + 9x + 14$, older students can see the $14$ as $2\times7$ and the $9$ as $2 + 7$.
They recognize the significance of an existing line in a geometric figure and can use the strategy of drawing an auxiliary line for solving problems. They also can step back for an overview and shift perspective. They can see complicated things, such as some algebraic expressions, as single objects or as being composed of several objects. For example, they can see $5-3(x-y)^2$ as $5$ minus a positive number times a square and use that to realize that its value cannot be more than $5$ for any real numbers $x$ and $y$.
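To make these structural observations concrete: seeing $14$ as $2\times 7$ and $9$ as $2+7$ gives the factorization $x^2+9x+14=(x+2)(x+7)$. Likewise, because $(x-y)^2\ge 0$ for all real $x$ and $y$, the term $3(x-y)^2$ is never negative, so $5-3(x-y)^2\le 5$, with equality exactly when $x=y$.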
In this video students use geometric structure in different ways to solve a problem that aligns with 6.G.1.
"Maintain oversight of the process, while attending to the details."
Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. As they work to solve a problem, mathematically proficient students maintain oversight of the process, while attending to the details. They continually evaluate the reasonableness of their intermediate results.
Upper elementary students might notice when dividing $25$ by $11$ that they are repeating the same calculations over and over again, and conclude they have a repeating decimal. By paying attention to the calculation of slope as they repeatedly check whether points are on the line through $(1, 2)$ with slope $3$, middle school students might abstract the equation $(y-2)/(x-1) = 3$. Noticing the regularity in the way terms cancel when expanding $(x-1)(x+1)$, $(x-1)(x^2+x+1)$, and $(x-1)(x^3+x^2+x+1)$ might lead them to the general formula for the sum of a geometric series.
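Written out, that regularity looks like this: $(x-1)(x+1)=x^2-1$, $(x-1)(x^2+x+1)=x^3-1$, and $(x-1)(x^3+x^2+x+1)=x^4-1$, which suggests the general pattern $(x-1)(x^{n-1}+\cdots+x+1)=x^n-1$, or equivalently $1+x+x^2+\cdots+x^{n-1}=(x^n-1)/(x-1)$ for $x\neq 1$.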
In this video students use geometric structure in different ways to solve a problem that aligns with 6.G.1. | http://www.illustrativemathematics.org/standards/practice | 13 |
13 | Tips for Excel, Word, PowerPoint and Other Applications
Introduction to Logical Statements and Logical Math
Why It Matters To You
Logical statements are both underutilized and used inefficiently. Understanding why and how they work can greatly improve the integrity of your analysis and streamline the flow of your worksheets. Today, we'll focus on the former and address the latter another day. As a note, this article is intended to introduce you to some new ways of using logical functions. It will not walk you step-by-step through each function, form tool, and calculation.
By logical statements, I mean functions that contain an IF test and return one value when the evaluated test is TRUE and another result when the test comes back FALSE. Most of the time I see IF statements, they are used to check a calculated result against a static criterion. In the case of nested IF statements, the author may be checking up to 6 static criteria. However, there are tons of other ways you can use logical statements.
- Check to see if a cell is blank (ISBLANK)
- Check to see if a cell is returning an error (ISERROR)
- Check to see if a cell contains text or a value (combined with CELL)
- Make the text subject to a dynamic condition, such as the current day of the week
- ... and much much more
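The first of these, for example, can be handled with a formula along the following lines, where A2 is just a placeholder cell:
=IF(ISBLANK(A2), "Missing", A2)
ISBLANK returns TRUE for an empty cell, so the IF flags missing entries while passing filled-in values through unchanged.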
Another use I rarely see is calculations based on the return of TRUE or FALSE. Recognizing that TRUE and FALSE are equivalent to 1 and 0 mathematically gives you a very handy means of creating inclusion or exclusion in your worksheets, for example, "only total sales from Wednesdays".
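As a sketch of that idea, suppose dates sit in A2:A100 and sales amounts in B2:B100 (hypothetical ranges). A formula along these lines totals only the Wednesday rows:
=SUMPRODUCT((WEEKDAY(A2:A100)=4)*B2:B100)
With its default numbering (Sunday = 1), WEEKDAY returns 4 for Wednesdays, so the comparison yields an array of TRUE/FALSE values that act as 1s and 0s when multiplied against the sales column.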
Logical statements generate or test for whether an argument (or set of arguments) is TRUE or FALSE, then return the value of TRUE or FALSE. When testing an argument, if no criterion is specified, Excel defaults to testing whether the value is 0 (FALSE) or not 0 (TRUE). With an explicit criterion, a test such as:
=AND(F5=5)
will return TRUE if cell F5 is 5 and FALSE for all other values. In the test
=AND(F5, F6, F7)
Since no criteria are specified, Excel tests the cells F5, F6, and F7 to see whether they are 0 or not. If every cell holds a non-zero value, the test returns TRUE, and FALSE for any other combination.
Sometimes you will want to run a calculation only if several criteria are met, model whether each of several companies will participate in a project, or consider only candidates who meet a given set of criteria. Logical statements give you the ability to do all of this.
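As a quick sketch of the candidate case, suppose (hypothetically) that B2 holds years of experience, C2 a test score, and D2 a "Yes"/"No" referral flag:
=IF(AND(B2>=3, C2>=80, D2="Yes"), "Interview", "Pass")
The IF returns "Interview" only when all three conditions inside the AND are TRUE; if any one of them fails, the AND returns FALSE and the candidate is passed over.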
There are several logical statements available in Excel. Here’s a short recap:
IF. Tests an argument and generates one result if TRUE, another if FALSE.
AND. Tests a series of arguments. If all are TRUE, then the function returns TRUE. If any argument returns FALSE, the entire function returns FALSE.
Example: AND(logical1,logical2, ...)
OR. Tests a series of arguments. If any argument is TRUE, then the function returns TRUE. Only when all arguments are FALSE does the function return FALSE.
NOT. Reverses the logic of a statement. Basically, any result that would return TRUE now returns FALSE and vice versa.
Other TRUE/FALSE Generating Functions (Not specifically addressed today)
|Function||Returns TRUE if:|
|ISBLANK||Value refers to an empty cell|
|ISERR||Value refers to any error value except #N/A|
|ISERROR||Value refers to any error value (#N/A, #VALUE!, #REF!, #DIV/0!, #NUM!, #NAME?, or #NULL!)|
|ISNA||Value refers to the #N/A (value not available) error value|
|ISNONTEXT||Value refers to any item that is not text (Note that this function returns TRUE if value refers to a blank cell)|
|ISNUMBER||Value refers to a number|
|ISREF||Value refers to a reference|
|ISTEXT||Value refers to text|
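One common pattern built on these functions (a sketch, with A2 and B2 as stand-in cells) uses ISERROR to suppress division-by-zero errors:
=IF(ISERROR(A2/B2), 0, A2/B2)
If B2 is 0 or blank, the inner division produces #DIV/0!, ISERROR returns TRUE, and the IF substitutes 0; otherwise the quotient comes through untouched.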
TRUE/FALSE results can be used in conditional formatting, formulas, and calculations. When used in calculations, TRUE takes a value of 1 and FALSE takes a value of 0. For example:
- TRUE + TRUE = 2
- Any number x FALSE = 0
- Any number x TRUE = the original number.
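Extending that idea to worksheet cells (A2 and B2 being hypothetical cells holding numbers), a formula such as:
=(A2>100)*B2
returns the value of B2 when A2 is greater than 100 and 0 otherwise, because the comparison evaluates to TRUE (1) or FALSE (0) before the multiplication takes place.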
See the ‘Logical statements’ and ‘Calculation’ tabs in logical_math.xls (available under the logical statements tab) for more examples of how these functions can be used.
|Application Version||Excel 2003|
|Related Tips||SUMPRODUCT, COUNTIF, SUMIF| | http://www.kan.org/tips/excel_logical_statements.php | 13 |
11 |
An integer array is a set of integers, or "whole numbers," used for various computational purposes. An integer is a number with no fractional part. Integers include zero as well as positive and negative whole numbers, such as one and negative one. An integer array is simply a set of these numbers defined in mathematics or computer programming.
Arrays of integers are commonly used in mathematics. The array might be written inside a set of parentheses for use in some calculation or equation, and it can be as large or as small as necessary.
The array is one of the most frequently used mathematical constructs in computer programming. Many programming languages include arrays for situations where programmers find it helpful to have a set of related variables on hand. The integer array is one of these data array types. Another common kind is the string array, where the included set of variables is composed of text rather than integers.
Programmers can do various things with an integer array in computer languages like Java, C++, and MS Visual Basic. A programmer can change one element of an array by referring to it specifically in code. Using a "for" loop, programmers can create a function that examines each element of the array individually and in sequence.
An integer array will hold a series of whole numbers for any purpose within a specific program or code module. Each computer programming language has its conventions for efficient expression of an integer array or other data array. In MS Visual Basic, for example, an array often has a text name, such as FirstArray, with parentheses holding a reference to a specific element of that array. In this case, the first integer in the array would be called by something that looks like this: FirstArray(1).
When programmers have constructed an integer array, by a process often called dimensioning or "dimming," they can also reference any element of that array through another variable, where FirstArray(x) refers to the (x)th element of the array. By this method, programmers can create functions that automatically update and change each element of the array according to any given need for change. These actions can be part of many different kinds of software coding for in-depth computation.
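A minimal Visual Basic sketch of these ideas follows; the array name FirstArray, its size, and the values assigned are arbitrary choices for illustration:

Sub IntegerArrayExample()
    Dim FirstArray(1 To 5) As Integer   ' dimension ("dim") an array of five integers
    Dim x As Integer
    FirstArray(1) = 10                  ' change one element by referring to it directly
    For x = 1 To 5                      ' a "for" loop visits each element in sequence
        FirstArray(x) = FirstArray(x) + 1
    Next x
End Sub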
Programmers use integer arrays and similar constructions for software that operates in nearly any field. From manufacturing to hospitality, all kinds of industries use software and analysis tools to streamline business operations. In addition, this kind of programming provides functionality for much of the technical analysis that is so important in the scientific world. Simple tools like integer arrays have greatly expanded what a computer can do to help humans manipulate large amounts of data. | http://www.wisegeek.com/what-is-an-integer-array.htm | 13