Gravity is the force of attraction between massive objects. Weight is determined by the mass of an object and its location in a gravitational field. While a great deal is known about the properties of gravity, the ultimate cause of the gravitational force remains an open question. General relativity is the most successful theory of gravitation to date. It postulates that mass and energy curve spacetime, and that this curvature is what we experience as gravity. The bending of spacetime is often misunderstood: most people think of a falling object as accelerating, yet a skydiver in free fall feels no acceleration (other than that from wind resistance).
In this view, the weight you feel is the result of acceleration. F = ma implies that a force is needed to accelerate a mass: for a rocket ship, that force comes from the rocket motor; for a person standing on the Earth, it comes from the ground pushing up against the body. In both cases the acceleration is measured relative to spacetime, and the weight you feel is your resistance to being pushed off your natural (free-fall) path through spacetime. There is no difference between the weight you feel because of gravity and the weight you feel because of a rocket's thrust.
Newton's law of universal gravitation
Newton's law of universal gravitation states the following:
- Every object in the Universe attracts every other object with a force directed along the line of centers of mass for the two objects. This force is proportional to the product of their masses and inversely proportional to the square of the separation between the centers of mass of the two objects.
Given that the force acts along the line joining the centres of the two masses, the law can be stated symbolically as
F = G m1 m2 / r²
where:
- F is the magnitude of the gravitational force between the two objects
- G is the gravitational constant, approximately G = 6.67 × 10⁻¹¹ N m² kg⁻²
- m1 is the mass of the first object
- m2 is the mass of the second object
- r is the distance between the centres of the two objects
If the same sign convention as in Coulomb's law is used, where a positive force means repulsion between two charges, the gravitational force is written with a minus sign, F = −G m1 m2 / r², and is then always negative: gravity is always attractive.
Thus gravity is proportional to the mass of each object, but has an inverse square relationship with the distance between the centres of each mass.
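As a rough numerical illustration, the sketch below (Python, using approximate textbook values for the Earth and Moon that are assumed here for the example) evaluates the law for the Earth-Moon pair:

```python
# A rough illustration of Newton's law for the Earth-Moon pair.
G = 6.67e-11          # gravitational constant, N m^2 kg^-2
m_earth = 5.97e24     # mass of the Earth, kg (approximate)
m_moon = 7.35e22      # mass of the Moon, kg (approximate)
r = 3.84e8            # mean Earth-Moon distance, m (approximate)

# F = G * m1 * m2 / r^2
F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force ~ {F:.2e} N")  # roughly 2e20 N
```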
Strictly speaking, this law applies only to point-like objects. If the objects have spatial extent, the force has to be calculated by integrating the force (in vector form, see below) over the extents of the two bodies. It can be shown that for an object with a spherically-symmetric distribution of mass, the integral gives the same gravitational attraction on masses outside it as if the object were a point mass.1
This law of universal gravitation was originally formulated by Isaac Newton in his work, the Principia Mathematica (1687). The history of gravitation as a physical concept is considered in more detail below.
Newton's law of universal gravitation can be written as a vector equation to account for the direction of the gravitational force as well as its magnitude. In this formulation, quantities in bold represent vectors:
F12 = −G (m1 m2 / r21²) r̂21
where
- F12 is the force on object 1 due to object 2
- G is the gravitational constant
- m1 and m2 are the masses of objects 1 and 2
- r21 = | r2 − r1 | is the distance between objects 2 and 1
- r̂21 = (r1 − r2) / | r1 − r2 | is the unit vector pointing from object 2 to object 1
The vector form of the equation is the same as the scalar form, except that F is now a vector and the unit vector supplies its direction. It also follows that F12 = −F21.
The gravitational acceleration of object 1 is given by the same formula with the factor m1 removed:
a1 = F12 / m1 = −G (m2 / r21²) r̂21
The gravitational field is a vector field that describes the gravitational force an object of given mass experiences in any given place in space.
It is a generalization of the vector form, which becomes particularly useful if more than two objects are involved (such as a rocket between the Earth and the Moon). For two objects (e.g. object 1 is a rocket, object 2 the Earth), we simply write r instead of r21 and m instead of m1, and define the gravitational field g(r) as
g(r) = −G (m2 / r²) r̂
so that we can write
F(r) = m g(r)
This formulation is independent of the objects causing the field. The field has units of force divided by mass; in SI, this is N·kg⁻¹, which is equivalent to m·s⁻².
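A minimal sketch of the field formula in Python, assuming approximate values for the Earth's mass and radius, recovers the familiar surface value of about 9.8 m/s²:

```python
import math

G = 6.67e-11         # gravitational constant, N m^2 kg^-2
m_earth = 5.97e24    # mass of the Earth, kg (approximate)

def gravitational_field(m_source, r_vec):
    """Field g(r) = -G * m / |r|^2 * r_hat of a point (or spherically
    symmetric) mass at the origin, evaluated at position r_vec (metres)."""
    r = math.sqrt(sum(c * c for c in r_vec))
    return tuple(-G * m_source * c / r**3 for c in r_vec)

# Field at a point on the Earth's surface (one Earth radius up the z-axis):
g = gravitational_field(m_earth, (0.0, 0.0, 6.371e6))
print(g)   # z-component is about -9.8 N/kg, i.e. 9.8 m/s^2 pointing down
```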
Problems with Newton's theory
Although Newton's formulation of gravitation is quite accurate for most practical purposes, it has a few problems:
- There is no prospect of identifying the mediator of gravity. Newton himself felt the inexplicable action at a distance to be unsatisfactory (see "Newton's reservations" below).
- Newton's theory requires that gravitational force is transmitted instantaneously. Given classical assumptions of the nature of space and time, this is necessary to preserve the conservation of angular momentum observed by Johannes Kepler. However, it is in direct conflict with Einstein's theory of special relativity which places an upper limit—the speed of light in vacuum—on the velocity at which signals can be transmitted.
Disagreement with observation
- Newton's theory does not fully explain the precession of the perihelion of the orbit of the planet Mercury. There is a 43 arcsecond per century discrepancy between the Newtonian prediction (resulting from the gravitational tugs of the other planets) and the observed precession.3
- Newton's theory predicts a deflection of light by gravity that is only half of what is actually observed; the observations of this deflection were made after General Relativity was developed in 1915.
- The observed fact that gravitational and inertial masses are the same for all bodies is unexplained within Newton's system. General relativity takes this as a postulate. See equivalence principle.
It's important to understand that while Newton was able to formulate his law of gravity in his monumental work, he was deeply uncomfortable with the notion of "action at a distance" which his equations implied. He never, in his words, "assigned the cause of this power". In all other cases, he used the phenomenon of motion to explain the origin of various forces acting on bodies, but in the case of gravity, he was unable to experimentally identify the motion that produces the force of gravity. Moreover, he refused to even offer a hypothesis as to the cause of this force on grounds that to do so was contrary to sound science.
He lamented the fact that "philosophers have hitherto attempted the search of nature in vain" for the source of the gravitational force, as he was convinced "by many reasons" that there were "causes hitherto unknown" that were fundamental to all the "phenomena of nature". These fundamental phenomena are still under investigation and, though hypotheses abound, the definitive answer is yet to be found. While it is true that Einstein's hypotheses are successful in explaining the effects of gravitational forces more precisely than Newton's in certain cases, he too never assigned the cause of this power, in his theories. It is said that in Einstein's equations, "matter tells space how to curve, and space tells matter how to move", but this new idea, completely foreign to the world of Newton, does not enable Einstein to assign the "cause of this power" to curve space any more than the Law of Universal Gravitation enabled Newton to assign its cause. In Newton's own words:
- I wish we could derive the rest of the phenomena of nature by the same kind of reasoning from mechanical principles; for I am induced by many reasons to suspect that they may all depend upon certain forces by which the particles of bodies, by some causes hitherto unknown, are either mutually impelled towards each other, and cohere in regular figures, or are repelled and recede from each other; which forces being unknown, philosophers have hitherto attempted the search of nature in vain.
If science is eventually able to discover the cause of the gravitational force, Newton's wish could eventually be fulfilled as well.
It should be noted that here, the word "cause" is not being used in the same sense as "cause and effect" or "the defendant caused the victim to die". Rather, when Newton uses the word "cause," he (apparently) is referring to an "explanation". In other words, a phrase like "Newtonian gravity is the cause of planetary motion" means simply that Newtonian gravity explains the motion of the planets. See Causality and Causality (physics).
Einstein's theory of gravitation
Einstein's theory of gravitation answered the problems with Newton's theory noted above. In a revolutionary move, his theory of general relativity (1915) stated that the presence of mass, energy, and momentum causes spacetime to become curved. Because of this curvature, the paths that objects in inertial motion follow can "deviate" or change direction over time. This deviation appears to us as an acceleration towards massive objects, which Newton characterized as being gravity. In general relativity however, this acceleration or free fall is actually inertial motion. So objects in a gravitational field appear to fall at the same rate due to their being in inertial motion while the observer is the one being accelerated. (This identification of free fall and inertia is known as the Equivalence principle.)
The relationship between the presence of mass/energy/momentum and the curvature of spacetime is given by the Einstein field equations. The actual shapes of spacetime are described by solutions of the Einstein field equations. In particular, the Schwarzschild solution (1916) describes the gravitational field around a spherically symmetric massive object. The geodesics of the Schwarzschild solution describe the observed behavior of objects being acted on gravitationally, including the anomalous perihelion precession of Mercury and the bending of light as it passes the Sun.
Arthur Eddington found observational evidence for the bending of light passing the Sun as predicted by general relativity in 1919. Subsequent observations have confirmed Eddington's results, and observations of a pulsar which is occulted by the Sun every year have permitted this confirmation to be done to a high degree of accuracy. There have also in the years since 1919 been numerous other tests of general relativity, all of which have confirmed Einstein's theory.
Units of measurement and variations in gravity
Gravitational phenomena are measured in various units, depending on the purpose. The gravitational constant is measured in newtons times metre squared per kilogram squared. Gravitational acceleration, and acceleration in general, is measured in metres per second squared or in non-SI units such as galileos, gees, or feet per second squared.
The acceleration due to gravity at the Earth's surface is approximately 9.8 m/s², with the precise value depending on location. A standard value of the Earth's gravitational acceleration has been adopted, called gn. When the typical range of interesting values is from zero to tens of metres per second squared, as in aircraft, acceleration is often stated in multiples of gn. When used as a measurement unit, the standard acceleration is often called "gee", because the symbol g could be mistaken for g, the symbol for the gram. For other purposes, measurements are made in millimetres or micrometres per second squared (mm/s² or µm/s²) or in milligals or milligalileos (1 mGal = 1/1000 Gal), the gal being a non-SI unit still common in some fields such as geophysics. A related unit is the eotvos, a cgs unit of the gravitational gradient.
Mountains and other geological features cause subtle variations in the Earth's gravitational field; the magnitude of the variation per unit distance is measured in inverse seconds squared or in eotvoses.
A larger variation in the effective gravity occurs as one moves between the poles and the equator. The effective force of gravity decreases toward the equator because of the Earth's rotation, which produces a centrifugal effect, and because of the resulting flattening of the Earth. The centrifugal effect acts 'upward' and partially counteracts gravity, while the flattening brings the poles closer to the Earth's centre of mass. The variation is also related to the fact that the Earth's density changes from the surface of the planet to its centre.
The sea-level gravitational acceleration is 9.780 m/s² at the equator and 9.832 m/s² at the poles, so an object weighs about 0.5% more at sea level at the poles than at sea level at the equator.
Comparison with electromagnetic force
The gravitational interaction of two protons is approximately a factor of 10³⁶ weaker than their electromagnetic repulsion. This factor is independent of distance, because both interactions are inversely proportional to the square of the distance. Therefore on an atomic scale mutual gravity is negligible. However, the main interaction between common objects and the Earth, and between celestial bodies, is gravity, because at this scale matter is electrically neutral: even if both bodies had a surplus or deficit of only one electron for every 10¹⁸ protons and neutrons, this would already be enough to cancel gravity (or, in the case of a surplus in one and a deficit in the other, to double the interaction). However, the main interactions between the charged particles in cosmic plasma (which makes up over 99% of the universe by volume) are electromagnetic forces.
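That factor can be checked with a few lines of Python; the constants below are approximate values assumed for the estimate:

```python
# Ratio of the electrostatic to the gravitational force between two protons.
k = 8.99e9        # Coulomb constant, N m^2 C^-2
e = 1.602e-19     # elementary charge, C
G = 6.67e-11      # gravitational constant, N m^2 kg^-2
m_p = 1.673e-27   # proton mass, kg

# Both forces fall off as 1/r^2, so the ratio is independent of separation.
ratio = (k * e**2) / (G * m_p**2)
print(f"{ratio:.1e}")   # about 1e36
```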
The relative weakness of gravity can be demonstrated with a small magnet picking up pieces of iron. The small magnet is able to overwhelm the gravitational interaction of the entire Earth. Similarly, when doing a chin-up, the electromagnetic interaction within your muscle cells is able to overcome the force induced by Earth on your entire body.
Gravity is small unless at least one of the two bodies is large or one body is very dense and the other is close by, but the small gravitational interaction exerted by bodies of ordinary size can fairly easily be detected through experiments such as the Cavendish torsion bar experiment.
Gravity and quantum mechanics
It is strongly believed that three of the four fundamental forces (the strong nuclear force, the weak nuclear force, and the electromagnetic force) are manifestations of a single, more fundamental force. Combining gravity with these forces of quantum mechanics to create a theory of quantum gravity is currently an important topic of research amongst physicists. General relativity is essentially a geometric theory of gravity. Quantum mechanics relies on interactions between particles, but general relativity requires no exchange of particles in its explanation of gravity.
Scientists have theorized about the graviton (a messenger particle that transmits the force of gravity) for years, but have been frustrated in their attempts to find a consistent quantum theory for it. Many believe that string theory holds a great deal of promise to unify general relativity and quantum mechanics, but this promise has yet to be realized.
It is notable that in general relativity gravitational radiation (which under the rules of quantum mechanics must be composed of gravitons) is only created in situations where the curvature of spacetime is oscillating, such as for co-orbiting objects. The amount of gravitational radiation emitted by the solar system and its planetary systems is far too small to measure. However, gravitational radiation has been indirectly observed as an energy loss over time in binary pulsar systems such as PSR 1913+16. It is believed that neutron star mergers and black hole formation may create detectable amounts of gravitational radiation. Gravitational radiation observatories such as LIGO have been created to study the problem. No confirmed detections have been made of this hypothetical radiation, but as the science behind LIGO is refined and as the instruments themselves are endowed with greater sensitivity over the next decade, this may change.
Experimental tests of theories
Today General Relativity is accepted as the standard description of gravitational phenomena. (Alternative theories of gravitation exist but are more complicated than General Relativity.) General Relativity is consistent with all currently available measurements of large-scale phenomena. For weak gravitational fields and bodies moving at slow speeds at small distances, Einstein's General Relativity gives almost exactly the same predictions as Newton's law of gravitation.
Crucial experiments that justified the adoption of General Relativity over Newtonian gravity were the classical tests: the gravitational redshift, the deflection of light rays by the Sun, and the precession of the orbit of Mercury.
More recent experimental confirmations of General Relativity were the (indirect) deduction of gravitational waves being emitted from orbiting binary stars, the existence of neutron stars and black holes, gravitational lensing, and the convergence of measurements in observational cosmology to an approximately flat model of the observable Universe, with a matter density parameter of approximately 30% of the critical density and a cosmological constant of approximately 70% of the critical density.
The equivalence principle, the postulate of general relativity that presumes that inertial mass and gravitational mass are the same, is also under test. Past, present, and future tests are discussed in the equivalence principle section.
Even to this day, scientists try to challenge General Relativity with more and more precise direct experiments. The goal of these tests is to shed light on the yet unknown relationship between gravity and quantum mechanics. Space probes are used either to make very sensitive measurements over large distances or to bring the instruments into an environment that is much more controlled than it could be on Earth. For example, in 2004 a dedicated satellite for gravity experiments, called Gravity Probe B, was launched to test general relativity's predicted frame-dragging effect, among others. Also, land-based experiments like LIGO and a host of "bar detectors" are trying to detect gravitational waves directly. A space-based hunt for gravitational waves, LISA, is in its early stages. It should be sensitive to low-frequency gravitational waves from many sources, perhaps including the Big Bang.
Speed of gravity: Einstein's theory of relativity predicts that the speed of gravity (defined as the speed at which changes in location of a mass are propagated to other masses) should be consistent with the speed of light. In 2002, the Fomalont-Kopeikin experiment produced measurements of the speed of gravity which matched this prediction. However, this experiment has not yet been widely peer-reviewed, and is facing criticism from those who claim that Fomalont-Kopeikin did nothing more than measure the speed of light in a convoluted manner.
The Pioneer anomaly is an empirical observation that the positions of the Pioneer 10 and Pioneer 11 space probes differ very slightly from what would be expected according to known effects (gravitational or otherwise). The possibility of new physics has not been ruled out, despite very thorough investigation in search of a more prosaic explanation.
Recent Alternative theories
- Brans-Dicke theory of gravity
- Rosen bi-metric theory of gravity
- In the modified Newtonian dynamics (MOND), Mordehai Milgrom proposes a modification of Newton's Second Law of motion for small accelerations.
Historical Alternative theories
- Nikola Tesla challenged Albert Einstein's theory of relativity, announcing that he was working on a dynamic theory of gravity (begun between 1892 and 1894). He argued that a "field of force" was a better concept, and he focused on media filled with electromagnetic energy that pervade all of space.
- In 1967 Andrei Sakharov proposed something similar, if not essentially identical. His theory has been adopted and promoted by Messrs. Haisch, Rueda and Puthoff who, among other things, explain that gravitational and inertial mass are identical and that high speed rotation can reduce (relative) mass. Combining these notions with those of T. T. Brown, it is relatively easy to conceive how field propulsion vehicles such as "flying saucers" could be engineered given a suitable source of power.
- Georges-Louis LeSage proposed a gravity mechanism, now commonly called LeSage gravity, based on a fluid-based explanation where a light gas fills the entire universe.
A self-gravitating system is a system of masses kept together by mutual gravity. An example is a binary star.
Special applications of gravity
A weight hanging from a cable over a pulley provides a constant tension in the cable, including in the part of the cable on the other side of the pulley.
Molten lead, when poured into the top of a shot tower, will coalesce into a rain of spherical lead shot, first separating into droplets, forming molten spheres, and finally freezing solid, undergoing many of the same effects as meteoritic tektites, which will cool into spherical, or near-spherical shapes in free-fall.
Comparative gravities of different planets and Earth's moon
The standard acceleration due to gravity at the Earth's surface is, by convention, equal to 9.80665 metres per second squared. (The local acceleration of gravity varies slightly over the surface of the Earth; see gee for details.) This quantity is known variously as gn, ge (sometimes this is the normal equatorial value on Earth, 9.78033 m/s²), g0, gee, or simply g (which is also used for the variable local value). The following is a list of the gravitational accelerations (in multiples of g) at the Sun, the surfaces of each of the planets in the solar system, and the Earth's moon:
Note: The "surface" is taken to mean the cloud tops of the gas giants (Jupiter, Saturn, Uranus and Neptune) in the above table. It is usually specified as the location where the pressure is equal to a certain value (normally 75 kPa?). For the Sun, the "surface" is taken to mean the photosphere.
For spherical bodies, surface gravity is g = GM/R² = (4/3)πGρR, which is about 2.8 × 10⁻¹⁰ times the radius R in metres times the average density ρ in kg/m³, giving g in m/s².
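A quick check of that coefficient, again as a Python sketch with approximate figures for the Earth assumed for illustration:

```python
import math

G = 6.67e-11                      # gravitational constant, N m^2 kg^-2
coeff = 4.0 / 3.0 * math.pi * G   # g = G*M/R^2 = (4/3)*pi*G*rho*R for a uniform sphere
print(f"{coeff:.2e}")             # about 2.8e-10

# Sanity check with the Earth (approximate radius and mean density):
R = 6.371e6     # m
rho = 5515.0    # kg/m^3
print(f"{coeff * R * rho:.2f} m/s^2")   # about 9.8
```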
When flying from Earth to Mars, climbing against the Earth's gravitational field at the start of the journey is about 100,000 times harder than climbing against the pull of the Sun over the rest of the flight.
Mathematical equations for a falling body
These equations describe the motion of a falling body under acceleration g near the surface of the Earth.
Here, the acceleration of gravity is a constant, g, because in the vector equation above the separation from the Earth's centre is essentially constant, so the gravitational field at the surface has a fixed magnitude and points straight down. In this case, Newton's law of gravitation simplifies to the law
- F = mg
The following equations ignore air resistance and the rotation of the Earth, but are usually accurate enough for heights not exceeding the tallest man-made structures. They fail to describe the Coriolis effect, for example. They are extremely accurate on the surface of the Moon, where the atmosphere is almost nil. Astronaut David Scott demonstrated this with a hammer and a feather. Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, effectively slowing down the acceleration enough so that he could measure the time as the ball rolled down a known distance down the ramp. He used a water clock to measure the time; by using an "extremely accurate balance" to measure the amount of water, he could measure the time elapsed. 2
- For Earth, in metric units: g ≈ 9.81 m/s²; in imperial units: g ≈ 32.2 ft/s².
For other planets, multiply by the ratio of the gravitational accelerations shown above.
| Distance d traveled by a falling object under the influence of gravity for a time t: | d = ½ g t² |
| Elapsed time t of a falling object under the influence of gravity for distance d: | t = √(2d / g) |
| Average velocity va of a falling object under constant acceleration g for any given time: | va = ½ g t |
| Average velocity va of a falling object under constant acceleration g traveling distance d: | va = ½ √(2 g d) |
| Instantaneous velocity vi of a falling object under constant acceleration g for any given time: | vi = g t |
| Instantaneous velocity vi of a falling object under constant acceleration g, traveling distance d: | vi = √(2 g d) |
Note: "Average" means average in time.
Note: Distance traveled, d, and time taken, t, must be in the same system of units as acceleration g. See dimensional analysis. To convert metres per second to kilometres per hour (km/h) multiply by 3.6, and to convert feet per second to miles per hour (mph) multiply by 0.68 (or, precisely, 15/22).
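These relations are easy to verify numerically; the following is a minimal Python sketch (it ignores air resistance, as the equations above do, and the 45 m drop used as an example is an arbitrary choice):

```python
import math

g = 9.81  # m/s^2, approximate acceleration due to gravity at the Earth's surface

def fall_distance(t):            # d = 1/2 g t^2
    return 0.5 * g * t**2

def fall_time(d):                # t = sqrt(2 d / g)
    return math.sqrt(2 * d / g)

def instantaneous_velocity(t):   # v_i = g t
    return g * t

def average_velocity(t):         # v_a = 1/2 g t (average over time, starting from rest)
    return 0.5 * g * t

# Example: a stone dropped from a 45 m tower
t = fall_time(45.0)
print(f"time to fall 45 m: {t:.2f} s")                        # about 3.0 s
print(f"impact speed: {instantaneous_velocity(t):.1f} m/s")   # about 29.7 m/s
```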
For any mass distribution there is a scalar field, the gravitational potential (a scalar potential), which is the gravitational potential energy per unit mass of a point mass, as a function of position. It is
φ(r) = −G ∫ dm(r′) / | r − r′ |
where the integral is taken over all mass. Minus its gradient, −∇φ, is the gravity field itself, and minus its Laplacian, −∇²φ, is the divergence of the gravity field, which is everywhere equal to −4πG times the local density.
Thus, outside the masses the potential satisfies Laplace's equation, ∇²φ = 0 (i.e., the potential is a harmonic function), and inside the masses the potential satisfies Poisson's equation, ∇²φ = 4πGρ, with the local density ρ on the right-hand side.
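As a numerical illustration of the relation g = −∇φ, here is a sketch in Python for a point mass, using a finite-difference gradient (the mass value and step size are arbitrary choices made for the example):

```python
import math

G = 6.67e-11
M = 5.97e24          # a point (or spherically symmetric) mass, kg

def potential(x, y, z):
    """Gravitational potential phi = -G M / r of a point mass at the origin."""
    return -G * M / math.sqrt(x*x + y*y + z*z)

def field_from_potential(x, y, z, h=1.0):
    """g = -grad(phi), approximated with central finite differences."""
    gx = -(potential(x + h, y, z) - potential(x - h, y, z)) / (2 * h)
    gy = -(potential(x, y + h, z) - potential(x, y - h, z)) / (2 * h)
    gz = -(potential(x, y, z + h) - potential(x, y, z - h)) / (2 * h)
    return gx, gy, gz

# One Earth radius up the z-axis the field should be ~9.8 m/s^2, directed downward.
print(field_from_potential(0.0, 0.0, 6.371e6))
```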
Acceleration relative to the rotating Earth
The acceleration measured on the rotating surface of the Earth is not quite the same as the acceleration that is measured for a free-falling body, because of the centrifugal effect. In other words, the apparent acceleration in the rotating frame of reference is the total gravity vector minus a small vector toward the north-south axis of the Earth, corresponding to staying stationary in that frame of reference.
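A rough Python sketch of that correction for a spherical, uniformly rotating Earth (the flattening is ignored here, which is an additional simplification assumed for the example):

```python
import math

g0 = 9.81                        # gravity without rotation, m/s^2 (approximate)
R = 6.371e6                      # Earth radius, m (spherical approximation)
omega = 2 * math.pi / 86164.0    # Earth's sidereal rotation rate, rad/s

def effective_gravity(latitude_deg):
    """Apparent (plumb-line) gravity: true gravity minus the component of the
    centrifugal acceleration along the local vertical, spherical-Earth model."""
    phi = math.radians(latitude_deg)
    centrifugal = omega**2 * R * math.cos(phi)   # directed away from the rotation axis
    return g0 - centrifugal * math.cos(phi)      # keep only the vertical component

for lat in (0, 45, 90):
    print(lat, round(effective_gravity(lat), 4))
# Rotation alone reduces apparent gravity by about 0.034 m/s^2 at the equator and
# not at all at the poles; the Earth's flattening adds a further difference.
```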
History of gravitational theory
The first mathematical formulation of gravity was published in 1687 by Sir Isaac Newton. His law of universal gravitation was the standard theory of gravity until work by Albert Einstein and others on general relativity. Since calculations in general relativity are complicated, and Newtonian gravity is sufficiently accurate for calculations involving weak gravitational fields (e.g., launching rockets, projectiles, pendulums, etc.), Newton's formulae are generally preferred.
Although the law of universal gravitation was first clearly and rigorously formulated by Isaac Newton, the phenomenon was observed and recorded by others. Even Ptolemy had a vague conception of a force tending toward the center of the Earth which not only kept bodies upon its surface, but in some way upheld the order of the universe. Johannes Kepler inferred that the planets move in their orbits under some influence or force exerted by the Sun; but the laws of motion were not then sufficiently developed, nor were Kepler's ideas of force sufficiently clear, to make a precise statement of the nature of the force. Christiaan Huygens and Robert Hooke, contemporaries of Newton, saw that Kepler's third law implied a force which varied inversely as the square of the distance. Newton's conceptual advance was to understand that the same force that causes a thrown rock to fall back to the Earth keeps the planets in orbit around the Sun, and the Moon in orbit around the Earth.
Newton was not alone in making significant contributions to the understanding of gravity. Before Newton, Galileo Galilei corrected a common misconception, started by Aristotle, that objects with different mass fall at different rates. To Aristotle, it simply made sense that objects of different mass would fall at different rates, and that was enough for him. Galileo, however, actually tried dropping objects of different mass at the same time. Aside from differences due to friction from the air, Galileo observed that all masses accelerate at the same rate. Using Newton's equation, F = ma, it is plain to us why:
m1 a1 = G m1 m2 / r²
The above equation says that mass m1 will accelerate at acceleration a1 under the force of gravity; dividing both sides by m1 gives
a1 = G m2 / r²
Nowhere in this equation does the mass of the falling body appear. When dealing with objects near the surface of a planet, the change in r divided by the initial r is so small that the acceleration due to gravity appears to be perfectly constant. The acceleration due to gravity on Earth is usually called g, and its value is about 9.8 m/s² (or 32 ft/s²). Galileo did not have Newton's equations, though, so his insight that the acceleration of a falling body is independent of its mass was invaluable, and possibly even influenced Newton's formulation of how gravity works.
However, across a large body, variations in r can create a significant tidal force.
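A short Python sketch of that tidal effect, comparing the Moon's pull on the near and far sides of the Earth (approximate masses and distances assumed for the estimate):

```python
G = 6.67e-11
m_moon = 7.35e22     # mass of the Moon, kg (approximate)
d = 3.84e8           # mean Earth-Moon distance, m (approximate)
R = 6.371e6          # Earth radius, m

def moon_pull(r):
    """Gravitational acceleration toward the Moon at distance r from its centre."""
    return G * m_moon / r**2

near = moon_pull(d - R)   # side of the Earth facing the Moon
far = moon_pull(d + R)    # opposite side of the Earth
print(f"tidal (differential) acceleration ~ {near - far:.2e} m/s^2")  # about 2e-6 m/s^2
```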
- Note 1: Proposition 75, Theorem 35: p.956 - I.Bernard Cohen and Anne Whitman, translators: Isaac Newton, The Principia: Mathematical Principles of Natural Philosophy. Preceded by A Guide to Newton's Principia, by I.Bernard Cohen. University of California Press 1999 ISBN 0-520-08816-6 ISBN 0-520-08817-4
- Note 2: See the works of Stillman Drake, for a comprehensive study of Galileo and his times, the Scientific Revolution.
- Note 3: Max Born (1924), Einstein's Theory of Relativity (The 1962 Dover edition, page 348 lists a table documenting the observed and calculated values for the precession of the perihelion of Mercury, Venus, and Earth.)
- Gravity wave
- Gravitational binding energy
- Gravity Research Foundation
- Standard gravitational parameter
- n-body problem
- Pioneer anomaly
- Table of velocities required for a spacecraft to escape a planet's gravitational field
- Application to gravity of the divergence theorem
- Gravity field
- Scalar Gravity
The bold plan for an Apollo mission based on LOR held the promise of landing on the moon by 1969, but it presented many daunting technical difficulties. Before NASA could dare attempt any type of lunar landing, it had to learn a great deal more about the destination. Although no one believed that the moon was made of green cheese, some lunar theories of the early 1960s seemed equally fantastic. One theory suggested that the moon was covered by a layer of dust perhaps 50 feet thick. If this were true, no spacecraft would be able to safely land on or take off from the lunar surface. Another theory claimed that the moon's dust was not nearly so thick but that it possessed an electrostatic charge that would cause it to stick to the windows of the lunar landing vehicle, thus making it impossible for the astronauts to see out as they landed. Cornell University astronomer Thomas Gold warned that the moon might even be composed of a spongy material that would crumble upon impact.1
At Langley, Dr. Leonard Roberts, a British mathematician in Clint Brown's Theoretical Mechanics Division, pondered the riddle of the lunar surface and drew an equally pessimistic conclusion. Roberts speculated that because the moon was millions of years old and had been constantly bombarded without the protection of an atmosphere, its surface was most likely so soft that any vehicle attempting to land on it would sink and be buried as if it had landed in quicksand. After the president's commitment to a manned lunar landing in 1961, Roberts began an extensive three year research program to show just what would happen if an exhaust rocket blasted into a surface of very thick powdered sand. His analysis indicated that an incoming rocket would throw up a mountain of sand, thus creating a big rim all the way around the outside of the landed spacecraft. Once the spacecraft settled, this huge bordering volume of sand would collapse, completely engulf the spacecraft, and kill its occupants.2
Telescopes revealed little about the nature of the lunar surface. Not even the latest, most powerful optical instruments could see through the earth's atmosphere well enough to resolve the moon's detailed surface features. Even an object the size of a football stadium would not show up on a telescopic photograph, and enlarging the photograph would only increase the blur. To separate fact from fiction and obtain the necessary information about the craters, crevices, and jagged rocks on the lunar surface, NASA would have to send out automated probes to take a closer look.
The first of these probes took off for the moon in January 1962 as part of a NASA project known as Ranger. A small 800-pound spacecraft was to make a "hard landing," crashing to its destruction on the moon. Before Ranger crashed, however, its on-board multiple television camera payload was to send back close views of the surface, views far more detailed than any captured by a telescope. Sadly, the first six Ranger probes were not successful. Malfunctions of the booster or failures of the launch-vehicle guidance system plagued the first three attempts; malfunctions of the spacecraft itself hampered the fourth and fifth probes; and the primary experiment could not take place during the sixth Ranger attempt because the television equipment would not transmit. Although these incomplete missions did provide some extremely valuable high-resolution photographs, as well as some significant data on the performance of Ranger's systems, in total the highly publicized record of failures embarrassed NASA and demoralized the Ranger project managers at JPL. Fortunately, the last three Ranger flights in 1964 and 1965 were successful. These flights showed that a lunar landing was possible, but the site would have to be carefully chosen to avoid craters and big boulders.3
JPL managed a follow-on project to Ranger known as Surveyor. Despite failures and serious schedule delays, between May 1966 and January 1968, six Surveyor spacecraft made successful soft landings at predetermined points on the lunar surface. From the touchdown dynamics, surface-bearing strength measurements, and eye-level television scanning of the local surface conditions, NASA learned that the moon could easily support the impact and the weight of a small lander. Originally, NASA also planned for (and Congress had authorized) a second type of Surveyor spacecraft, which instead of making a soft landing on the moon, was to be equipped for high-resolution stereoscopic film photography of the moon's surface from lunar orbit and for instrumented measurements of the lunar environment. However, this second Surveyor or "Surveyor Orbiter" did not materialize. The staff and facilities of JPL were already overburdened with the responsibilities for Ranger and "Surveyor Lander"; they simply could not take on another major spaceflight project.4
In 1963, NASA scrapped its plans for a Surveyor Orbiter and turned its attention to a lunar orbiter project that would not use the Surveyor spacecraft system or the Surveyor launch vehicle, Centaur. Lunar Orbiter would have a new spacecraft and use the Atlas-Agena D to launch it into space. Unlike the preceding unmanned lunar probes, which were originally designed for general scientific study, Lunar Orbiter was conceived after a manned lunar landing became a national commitment. The project goal from the start was to support the Apollo mission. Specifically, Lunar Orbiter was designed to provide information on the lunar surface conditions most relevant to a spacecraft landing. This meant, among other things, that its camera had to be sensitive enough to capture subtle slopes and minor protuberances and depressions over a broad area of the moon's front side. As an early working group on the requirements of the lunar photographic mission had determined, Lunar Orbiter had to allow the identification of 45-meter objects over the entire facing surface of the moon, 4.5-meter objects in the "Apollo zone of interest," and 1.2-meter objects in all the proposed landing areas.5
Five Lunar Orbiter missions took place. The first launch occurred in August 1966 within two months of the initial target date. The next four Lunar Orbiters were launched on schedule; the final mission was completed in August 1967, barely a year after the first launch. NASA had planned five flights because mission reliability studies had indicated that five might be necessary to achieve even one success. However, all five Lunar Orbiters were successful, and the prime objective of the project, which was to photograph in detail all the proposed landing sites, was met in three missions. This meant that the last two flights could be devoted to photographic exploration of the rest of the lunar surface for more general scientific purposes. The final cost of the program was not slight: it totaled $163 million, which was more than twice the original estimate of $77 million. That increase, however, compares favorably with the escalation in the price of similar projects, such as Surveyor, which had an estimated cost of $125 million and a final cost of $469 million.
In retrospect, Lunar Orbiter must be, and rightfully has been, regarded as an unqualified success. For the people and institutions responsible, the project proved to be an overwhelmingly positive learning experience on which greater capabilities and ambitions were built. For both the prime contractor, the Boeing Company, a world leader in the building of airplanes, and the project manager, Langley Research Center, a premier aeronautics laboratory, involvement in Lunar Orbiter was a turning point. The successful execution of a risky enterprise became proof positive that they were more than capable of moving into the new world of deep space. For many observers as well as for the people who worked on the project, Lunar Orbiter quickly became a model of how to handle a program of space exploration; its successful progress demonstrated how a clear and discrete objective, strong leadership, and positive person-to-person communication skills can keep a project on track from start to finish.6
Many people inside the American space science community believed that neither Boeing nor Langley was capable of managing a project like Lunar Orbiter or of supporting the integration of first-rate scientific experiments and space missions. After NASA headquarters announced in the summer of 1963 that Langley would manage Lunar Orbiter, more than one space scientist was upset. Dr. Harold C. Urey, a prominent scientist from the University of California at San Diego, wrote a letter to Administrator James Webb asking him, "How in the world could the Langley Research Center, which is nothing more than a bunch of plumbers, manage this scientific program to the moon?"7
Urey's questioning of Langley's competency was part of an unfolding debate over the proper place of general scientific objectives within NASA's spaceflight programs. The U.S. astrophysics community and Dr. Homer E. Newell's Office of Space Sciences at NASA headquarters wanted "quality science" experiments incorporated into every space mission, but this caused problems. Once the commitment had been made to a lunar landing mission, NASA had to decide which was more important: gathering broad scientific information or obtaining data required for accomplishing the lunar landing mission. Ideally, both goals could be incorporated in a project without one compromising the other, but when that seemed impossible, one of the two had to be given priority. The requirements of the manned mission usually won out. For Ranger and Surveyor, projects involving dozens of outside scientists and the large and sophisticated Space Science Division at JPL, that meant that some of the experiments would turn out to be less extensive than the space scientists wanted.8 For Lunar Orbiter, a project involving only a few astrogeologists at the U.S. Geological Survey and a very few space scientists at Langley, it meant, ironically, that the primary goal of serving Apollo would be achieved so quickly that general scientific objectives could be included in its last two missions.
Langley management had entered the fray between science and project engineering during the planning for Project Ranger. At the first Senior Council meeting of the Office of Space Sciences (soon to be renamed the Office of Space Sciences and Applications [OSSA]) held at NASA headquarters on 7 June 1962, Langley Associate Director Charles Donlan had questioned the priority of a scientific agenda for the agency's proposed unmanned lunar probes because a national commitment had since been made to a manned lunar landing. The initial requirements for the probes had been set long before Kennedy's announcement, and therefore, Donlan felt NASA needed to rethink them. Based on his experience at Langley and with Gilruth's STG, Donlan knew that the space science people could be "rather unbending" about adjusting experiments to obtain "scientific data which would assist the manned program." What needed to be done now, he felt, was to turn the attention of the scientists to exploration that would have more direct applications to the Apollo lunar landing program.9
Donlan was distressed specifically by the Office of Space Sciences' recent rejection of a lunar surface experiment proposed by a penetrometer feasibility study group at Langley. This small group, consisting of half a dozen people from the Dynamic Loads and Instrument Research divisions, had devised a spherical projectile, dubbed "Moonball," that was equipped with accelerometers capable of transmitting acceleration versus time signatures during impact with the lunar surface. With these data, researchers could determine the hardness, texture, and load-bearing strength of possible lunar landing sites. The group recommended that Moonball be flown as part of the follow-on to Ranger.10
A successful landing of an intact payload required that the landing loads not exceed the structural capabilities of the vehicle and that the vehicle make its landing in some tenable position so it could take off again. Both of these requirements demanded a knowledge of basic physical properties of the surface material, particularly data demonstrating its hardness or resistance to penetration. In the early 1960s, these properties were still unknown, and the Langley penetrometer feasibility study group wanted to identify them. Without the information, any design of Apollo's lunar lander would have to be based on assumed surface characteristics.11
In the opinion of the Langley penetrometer group, its lunar surface hardness experiment would be of "general scientific interest," but it would, more importantly, provide "timely engineering information important to the design of the Apollo manned lunar landing vehicle."12 Experts at JPL, however, questioned whether surface hardness was an important criterion for any experiment and argued that "the determination of the terrain was more important, particularly for a horizontal landing."13 In the end, the Office of Space Sciences rejected the Langley idea in favor of making further seismometer experiments, which might tell scientists something basic about the origins of the moon and its astrogeological history.*
For engineer Donlan, representing a research organization like Langley dominated by engineers and by their quest for practical solutions to applied problems, this rejection seemed a mistake. The issue came down to what NASA needed to know now. That might have been science before Kennedy's commitment, but it definitely was not science after it. In Donlan's view, Langley's rejected approach to lunar impact studies had been the correct one. The consensus at the first Senior Council meeting, however, was that "pure science experiments will be able to provide the engineering answers for Project Apollo." 14
Over the next few years, the engineering requirements for Apollo would win out almost totally. As historian R. Cargill Hall explains in his story of Project Ranger, a "melding" of interests occurred between the Office of Space Sciences and the Office of Manned Space Flight, followed by a virtually complete subordination of the scientific priorities originally built into the unmanned projects. Those priorities, as important as they were, "quite simply did not rate" with Apollo in importance.15
The sensitive camera eyes of the Lunar Orbiter spacecraft carried out a vital reconnaissance mission in support of the Apollo program. Although NASA designed the project to provide scientists with quantitative information about the moon's gravitational field and the dangers of micrometeorites and solar radiation in the vicinity of the lunar environment, the primary objective of Lunar Orbiter was to fly over and photograph the best landing sites for the Apollo spacecraft. NASA suspected that it might have enough information about the lunar terrain to land astronauts safely without the detailed photographic mosaics of the lunar surface compiled from the orbiter flights, but certainly landing sites could be pinpointed more accurately with the help of high-resolution photographic maps. Lunar Orbiter would even help to train the astronauts for visual recognition of the lunar topography and for last-second maneuvering above it before touchdown.
Langley had never managed a deep-space flight project before, and Director Floyd Thompson was not sure that he wanted to take on the burden of responsibility when Oran Nicks, the young director of lunar and planetary programs in Homer Newell's Office of Space Sciences, came to him with the idea early in 1963. Along with Newell's deputy, Edgar M. Cortright, Nicks was the driving force behind the orbiter mission at NASA headquarters. Cortright, however, first favored giving the project to JPL and using Surveyor Orbiter and the Hughes Aircraft Company, which was the prime contractor for Surveyor Lander. Nicks disagreed with this plan and worked to persuade Cortright and others that he was right. In Nicks' judgment, JPL had more than it could handle with Ranger and Surveyor Lander and should not have anything else "put on its plate," certainly not anything as large as the Lunar Orbiter project. NASA Langley, on the other hand, besides having a reputation for being able to handle a variety of aerospace tasks, had just lost the STG to Houston and so, Nicks thought, would be eager to take on the new challenge of a lunar orbiter project. Nicks worked to persuade Cortright that distributing responsibilities and operational programs among the NASA field centers would be "a prudent management decision." NASA needed balance among its research centers. To ensure NASA's future in space, headquarters must assign to all its centers challenging endeavors that would stimulate the development of "new and varied capabilities."16
Cortright was persuaded and gave Nicks permission to approach Floyd Thompson.** This Nicks did on 2 January 1963, during a Senior Council meeting of the Office of Space Sciences at Cape Canaveral. Nicks asked Thompson whether Langley "would be willing to study the feasibility of undertaking a lunar photography experiment," and Thompson answered cautiously that he would ask his staff to consider the idea.17
The historical record does not tell us much about Thompson's personal thoughts regarding taking on Lunar Orbiter. But one can infer from the evidence that Thompson had mixed feelings, not unlike those he experienced about supporting the STG. The Langley director would not only give Nicks a less than straightforward answer to his question but also would think about the offer long and hard before committing the center. Thompson invited several trusted staff members to share their feelings about assuming responsibility for the project. For instance, he went to Clint Brown, by then one of his three assistant directors for research, and asked him what he thought Langley should do. Brown told him emphatically that he did not think Langley should take on Lunar Orbiter. An automated deep-space project would be difficult to manage successfully. The Lunar Orbiter would be completely different from the Ranger and Surveyor spacecraft and, being a new design, would no doubt encounter many unforeseen problems. Even if it were done to everyone's satisfaction (and the proposed schedule for the first launches sounded extremely tight), Langley would probably handicap its functional research divisions to give the project all the support that it would need. Projects devoured resources. Langley staff had learned this firsthand from its experience with the STG. Most of the work for Lunar Orbiter would rest in the management of contracts at industrial plants and in the direction of launch and mission control operations at Cape Canaveral and Pasadena. Brown, for one, did not want to be involved.18
But Thompson decided, in what Brown now calls his director's "greater wisdom," that the center should accept the job of managing the project. Some researchers in Brown's own division had been proposing a Langley-directed photographic mission to the moon for some time, and Thompson, too, was excited by the prospect.19 Furthermore, the revamped Lunar Orbiter was not going to be a space mission seeking general scientific knowledge about the moon. It was going to be a mission directly in support of Apollo, and this meant that engineering requirements would be primary. Langley staff preferred that practical orientation; their past work often resembled projects on a smaller scale. Whether the "greater wisdom" stemmed from Thompson's own powers of judgment is still not certain. Some informed Langley veterans, notably Brown, feel that Thompson must have also received some strongly stated directive from NASA headquarters that said Langley had no choice but to take on the project.
Whatever was the case in the beginning, Langley management soon welcomed Lunar Orbiter. It was a chance to prove that they could manage a major undertaking. Floyd Thompson personally oversaw many aspects of the project and for more than four years did whatever he could to make sure that Langley's functional divisions supported it fully. Through most of this period, he would meet every Wednesday morning with the top people in the project office to hear about the progress of their work and offer his own ideas. As one staff member recalls, "I enjoyed these meetings thoroughly. [Thompson was] the most outstanding guy I've ever met, a tremendously smart man who knew what to do and when to do it."20
Throughout the early months of 1963, Langley worked with its counterparts at NASA headquarters to establish a solid and cooperative working relationship for Lunar Orbiter. The center began to draw up preliminary specifications for a lightweight orbiter spacecraft and for the vehicle that would launch it (already thought to be the Atlas-Agena D). While Langley personnel were busy with that, TRW's Space Technologies Laboratories (STL) of Redondo Beach, California, was conducting a parallel study of a lunar orbiter photographic spacecraft under contract to NASA headquarters. Representatives from STL reported on this work at meetings at Langley on 25 February and 5 March 1963. Langley researchers reviewed the contractor's assessment and found that STL's estimates of the chances for mission success closely matched their own. If five missions were attempted, the probability of achieving one success was 93 percent. The probability of achieving two was 81 percent. Both studies confirmed that a lunar orbiter system using existing hardware would be able to photograph a landed Surveyor and would thus be able to verify the conditions of that possible Apollo landing site. The independent findings concluded that the Lunar Orbiter project could be done successfully and should be done quickly because its contribution to the Apollo program would be great. 21
With the exception of its involvement in the X-series research airplane programs at Muroc, Langley had not managed a major project during the period of the NACA. As a NASA center, Langley would have to learn to manage projects that involved contractors, subcontractors, other NASA facilities, and headquarters, a tall order for an organization used to doing all its work in-house with little outside interference. Only three major projects were assigned to Langley in the early 1960s: Scout, in 1960; Fire, in 1961; and Lunar Orbiter, in 1963. Project Mercury and Little Joe, although heavily supported by Langley, had been managed by the independent STG, and Project Echo, although managed by Langley for a while, eventually was given to Goddard to oversee.
To prepare for Lunar Orbiter in early 1963, Langley management reviewed what the center had done to initiate the already operating Scout and Fire projects. It also tried to learn from JPL about inaugurating paperwork for, and subsequent management of, Projects Ranger and Surveyor. After these reviews, Langley felt ready to prepare the formal documents required by NASA for the start-up of the project.22
As Langley prepared for Lunar Orbiter, NASA's policies and procedures for project management were changing. In October 1962, spurred on by its new top man, James Webb, the agency had begun to implement a series of structural changes in its overall organization. These were designed to improve relations between headquarters and the field centers, an area of fundamental concern. Instead of managing the field centers through the Office of Programs, as had been the case, NASA was moving them under the command of the headquarters program directors. For Langley, this meant direct lines of communication with the OART and the OSSA. By the end of 1963, a new organizational framework was in place that allowed for more effective management of NASA projects.
In early March 1963, as part of Webb's reform, NASA headquarters issued an updated version of General Management Instruction 4-1-1. This revised document established formal guidelines for the planning and management of a project. Every project was supposed to pass through four preliminary stages: (1) Project Initiation, (2) Project Approval, (3) Project Implementation, and (4) Organization for Project Management.23 Each step required the submission of a formal document for headquarters' approval.
From the beginning, everyone involved with Lunar Orbiter realized that it had to be a fast-track project. In order to help Apollo, everything about it had to be initiated quickly and without too much concern about the letter of the law in the written procedures. Consequently, although no step was to be taken without first securing approval for the preceding step, Langley initiated the paperwork for all four project stages at the same time. This same no-time-to-lose attitude ruled the schedule for project development. All aspects had to be developed concurrently. Launch facilities had to be planned at the same time that the design of the spacecraft started. The photographic, micrometeoroid, and selenodetic experiments had to be prepared even before the mission operations plan was complete. Everything proceeded in parallel: the development of the spacecraft, the mission design, the operational plan and preparation of ground equipment, the creation of computer programs, as well as a testing plan. About this parallel development, Donald H. Ward, a key member of Langley's Lunar Orbiter project team, remarked, "Sometimes this causes undoing some mistakes, but it gets to the end product a lot faster than a serial operation where you design the spacecraft and then the facilities to support it."24 Using the all-at-once approach, Langley put Lunar Orbiter in orbit around the moon only 27 months after signing with the contractor.
On 11 September 1963, Director Floyd Thompson formally established the Lunar Orbiter Project Office (LOPO) at Langley, a lean organization of just a few people who had been at work on Lunar Orbiter since May. Thompson named Clifford H. Nelson as the project manager. An NACA veteran and head of the Measurements Research Branch of IRD, Nelson was an extremely bright engineer. He had served as project engineer on several flight research programs, and Thompson believed that he showed great promise as a technical manager. He worked well with others, and Thompson knew that skill in interpersonal relations would be essential in managing Lunar Orbiter because so much of the work would entail interacting with contractors.
To help Nelson, Thompson originally reassigned eight people to LOPO: engineers Israel Taback, Robert Girouard, William I. Watson, Gerald Brewer, John B. Graham, Edmund A. Brummer, financial accountant Robert Fairburn, and secretary Anna Plott. This group was far smaller than the staff of 100 originally estimated for this office. The most important technical minds brought in to participate came from either IRD or from the Applied Materials and Physics Division, which was the old PARD. Taback was the experienced and sage head of the Navigation and Guidance Branch of IRD; Brummer, an expert in telemetry, also came from IRD; and two new Langley men, Graham and Watson, were brought in to look over the integration of mission operations and spacecraft assembly for the project. A little later IRD's talented Bill Boyer also joined the group as flight operations manager, as did the outstanding mission analyst Norman L. Crabill, who had just finished working on Project Echo. All four of the NACA veterans were serving as branch heads at the time of their assignment to LOPO. This is significant given that individuals at that level of authority and experience are often too entrenched and concerned about further career development to take a temporary assignment on a high-risk project. The LOPO staff set up an office in a room in the large 16-Foot Transonic Tunnel building in the Langley West Area.
When writing the Request for Proposals, Nelson, Taback, and the others involved could only afford the time necessary to prepare a brief document, merely a few pages long, that sketched out some of the detailed requirements. As Israel Taback remembers, even before the project office was established, he and a few fellow members of what would become LOPO had already talked extensively with the potential contractors. Taback explains, "Our idea was that they would be coming back to us [with details]. So it wasn't like we were going out cold, with a brand new program."25
Langley did need to provide one critical detail in the request: the means for stabilizing the spacecraft in lunar orbit. Taback recalls that an "enormous difference" arose between Langley and NASA headquarters over this issue. The argument was about whether the Request for Proposals should require that the contractors produce a rotating satellite known as a "spinner." The staff of the OSSA preferred a spinner based on STL's previous study of Lunar Orbiter requirements. However, Langley's Lunar Orbiter staff doubted the wisdom of specifying the means of stabilization in the Request for Proposals. They wished to keep the door open to other, perhaps better, ways of stabilizing the vehicle for photography.
The goal of the project, after all, was to take the best possible high-resolution pictures of the moon's surface. To do that, NASA needed to create the best possible orbital platform for the spacecraft's sophisticated camera equipment, whatever that turned out to be. From their preliminary analysis and conversations about mission requirements, Taback, Nelson, and others in LOPO felt that taking these pictures from a three-axis (yaw, pitch, and roll), attitude-stabilized device would be easier than taking them from a spinner. A spinner would cause distortions of the image because of the rotation of the vehicle. Langley's John F. Newcomb of the Aero Space Mechanics Division (and eventual member of LOPO) had calculated that this distortion would destroy the resolution and thus seriously compromise the overall quality of the pictures. This was a compromise that the people at Langley quickly decided they could not live with. Thus, for sound technical reasons, Langley insisted that the design of the orbiter be kept an open matter and not be specified in the Request for Proposals. Even if Langley's engineers were wrong and a properly designed spinner would be most effective, the sensible approach was to entertain all the ideas the aerospace industry could come up with before choosing a design.26
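The scale of LOPO's concern about spinners can be suggested with a back-of-envelope smear estimate; the sketch below is purely illustrative, and the spin rate, exposure time, and altitude in it are assumed values rather than figures from the project record. For a nadir-pointing camera on a rotating spacecraft, the ground scene sweeps past at roughly the altitude times the angular rate, so even a slow spin smears the image by far more than the meter-class detail the project was after.

```python
import math

def ground_smear_m(altitude_m: float, spin_rpm: float, exposure_s: float) -> float:
    """Approximate ground smear for a nadir-pointing camera on a spinning
    spacecraft: smear ~ altitude * angular_rate * exposure_time."""
    omega_rad_s = spin_rpm * 2.0 * math.pi / 60.0
    return altitude_m * omega_rad_s * exposure_s

# Illustrative values only (not from the source): 46-km photo altitude,
# a modest 5-rpm spin, and a 1/100-second exposure.
print(f"{ground_smear_m(46_000, 5.0, 0.01):.0f} m of image smear")  # ~240 m
```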
For several weeks in the summer of 1963, headquarters tried to resist the Langley position. Preliminary studies by both STL for the OSSA and by Bell Communications (BellComm) for the Office of Manned Space Flight indicated that a rotating spacecraft using a spin-scan film camera similar to the one developed by the Rand Corporation in 1958 for an air force satellite reconnaissance system ("spy in the sky") would work well for Lunar Orbiter. Such a spinner would be less complicated and less costly than the three-axis-stabilized spacecraft preferred by Langley.27
But Langley staff would not cave in on an issue so fundamental to the project's success. Eventually Newell, Cortright, Nicks, and Scherer in the OSSA offered a compromise that Langley could accept: the Request for Proposals could state that "if bidders could offer approaches which differed from the established specifications but which would result in substantial gains in the probability of mission success, reliability, schedule, and economy," then NASA most certainly invited them to submit those alternatives. The request would also emphasize that NASA wanted a lunar orbiter that was built from as much off-the-shelf hardware as possible. The development of many new technological systems would require time that Langley did not have.28
Langley and headquarters had other differences of opinion about the request. For example, a serious problem arose over the nature of the contract. Langley's chief procurement officer, Sherwood Butler, took the conservative position that a traditional cost-plus-a-fixed-fee contract would be best in a project in which several unknown development problems were bound to arise. With this kind of contract, NASA would pay the contractor for all actual costs plus a sum of money fixed by the contract negotiations as a reasonable profit.
NASA headquarters, on the other hand, felt that some attractive financial incentives should be built into the contract. Although such contracts had been unusual up to this point in NASA history, headquarters believed that an incentives contract would be best for Lunar Orbiter. Such a contract would assure that the contractor would do everything possible to solve all the problems encountered and make sure that the project worked. The incentives could be written up in such a way that if, for instance, the contractor lost money on any one Lunar Orbiter mission, the loss could be recouped with a handsome profit on the other missions. The efficacy of a cost-plus-incentives contract rested on the solid premise that nothing motivated a contractor more than making money. NASA headquarters apparently understood this better than Langley's procurement officer who wanted to keep tight fiscal control over the project and did not want to do the hairsplitting that often came with evaluating whether the incentive clauses had been met.29
On the matter of incentives, Langley's LOPO engineers sided against their own man and with NASA headquarters. They, too, thought that incentives were the best way to do business with a contractor -as well as the best way to illustrate the urgency that NASA attached to Lunar Orbiter.30 The only thing that bothered them was the vagueness of the incentives being discussed. When Director Floyd Thompson understood that his engineers really wanted to take the side of headquarters on this issue, he quickly concurred. He insisted only on three things: the incentives had to be based on clear stipulations tied to cost, delivery, and performance, with penalties for deadline overruns; the contract had to be fully negotiated and signed before Langley started working with any contractor (in other words, work could not start under a letter of intent); and all bidding had to be competitive. Thompson worried that the OSSA might be biased in favor of STL as the prime contractor because of STL's prior study of the requirements of lunar orbiter systems.31
In mid-August 1963, with these problems worked out with headquarters, Langley finalized the Request for Proposals and associated Statement of Work, which outlined specifications, and delivered both to Captain Lee R. Scherer, Lunar Orbiter's program manager at NASA headquarters, for presentation to Ed Cortright and his deputy Oran Nicks. The documents stated explicitly that the main mission of Lunar Orbiter was "the acquisition of photographic data of high and medium resolution for selection of suitable Apollo and Surveyor landing sites." The request set out detailed criteria for such things as identifying "cones" (planar features at right angles to a flat surface), "slopes" (circular areas inclined with respect to the plane perpendicular to local gravity), and other subtle aspects of the lunar surface. Obtaining information about the size and shape of the moon and about the lunar gravitational field was deemed less important. By omitting a detailed description of the secondary objectives in the request, Langley made clear that "under no circumstances" could anything "be allowed to dilute the major photo reconnaissance mission."32 The urgency of the national commitment to a manned lunar landing mission was the force driving Lunar Orbiter. Langley wanted no confusion on that point.
Cliff Nelson and LOPO moved quickly in September 1963 to create a Source Evaluation Board that would possess the technical expertise and good judgment to help NASA choose wisely from among the industrial firms bidding for Lunar Orbiter. A large board of reviewers (comprising more than 80 evaluators and consultants from NASA centers and other aerospace organizations) was divided into groups to evaluate the technical feasibility, cost, contract management concepts, business operations, and other critical aspects of the proposals. One group, the so-called Scientists' Panel, judged the suitability of the proposed spacecraft for providing valuable information to the scientific community after the photographic mission had been completed. Langley's two representatives on the Scientists' Panel were Clint Brown and Dr. Samuel Katzoff, an extremely insightful engineering analyst, 27-year Langley veteran, and assistant chief of the Applied Materials and Physics Division.
Although the opinions of all the knowledgeable outsiders were taken seriously, Langley intended to make the decision.33 Chairing the Source Evaluation Board was Eugene Draley, one of Floyd Thompson's assistant directors. When the board finished interviewing all the bidders, hearing their oral presentations, and tallying the results of its scoring of the proposals (a possible 70 points for technical merit and 30 points for business management), it was to present a formal recommendation to Thompson. He in turn would pass on the findings with comments to Homer Newell's office in Washington.
Five major aerospace firms submitted proposals for the Lunar Orbiter contract. Three were California firms: STL in Redondo Beach, Lockheed Missiles and Space Company of Sunnyvale, and Hughes Aircraft Company of Los Angeles. The Martin Company of Baltimore and the Boeing Company of Seattle were the other two bidders.34
Three of the five proposals were excellent. Hughes had been developing an ingenious spin-stabilization system for geosynchronous communication satellites, which helped the company to submit an impressive proposal for a rotating vehicle. With Hughes's record in spacecraft design and fabrication, the Source Evaluation Board gave Hughes serious consideration. STL also submitted a fine proposal for a spin-stabilized rotator. This came as no surprise, of course, given STL's prior work for Surveyor as well as its prior contractor studies on lunar orbiter systems for NASA headquarters.
The third outstanding proposal -entitled "ACLOPS" (Agena-Class Lunar Orbiter Project) -was Boeing's. The well-known airplane manufacturer had not been among the companies originally invited to bid on Lunar Orbiter and was not recognized as the most logical of contenders. However, Boeing recently had successfully completed the Bomarc missile program and was anxious to become involved with the civilian space program, especially now that the DOD was canceling Dyna-Soar, an air force project for the development of an experimental X-20 aerospace plane. This cancellation released several highly qualified U.S. Air Force personnel, who were still working at Boeing, to support a new Boeing undertaking in space. Company representatives had visited Langley to discuss Lunar Orbiter, and Langley engineers had been so excited by what they had heard that they had pestered Thompson to persuade Seamans to extend an invitation to Boeing to join the bidding. The proposals from Martin, a newcomer in the business of automated space probes, and Lockheed, a company with years of experience handling the Agena space vehicle for the air force, were also quite satisfactory. In the opinion of the Source Evaluation Board, however, the proposals from Martin and Lockheed were not as strong as those from Boeing and Hughes.
The LOPO staff and the Langley representatives decided early in the evaluation that they wanted Boeing to be selected as the contractor; on behalf of the technical review team, Israel Taback had made this preference known both in private conversations with, and formal presentations to, the Source Evaluation Board. Boeing was Langley's choice because it proposed a three-axis-stabilized spacecraft rather than a spinner. For attitude reference in orbit, the spacecraft would use an optical sensor similar to the one that was being planned for use on the Mariner C spacecraft, which fixed on the star Canopus.
An attitude-stabilized orbiter eliminated the need for a spin-scan camera. This type of photographic system, first conceived by Merton E. Davies of the Rand Corporation in 1958, could compensate for the distortions caused by a rotating spacecraft but would require extensive development. In the Boeing proposal, Lunar Orbiter would carry a photo subsystem designed by Eastman Kodak and used on DOD spy satellites.35 This subsystem worked automatically and with the precision of a Swiss watch. It employed two lenses that took pictures simultaneously on a roll of 70-millimeter aerial film. If one lens failed, the other still worked. One lens had a focal length of 610 millimeters (24 inches) and could take pictures from an altitude of 46 kilometers (28.5 miles) with a high resolution of approximately 1 meter for limited-area coverage. The other, which had a focal length of about 80 millimeters (3 inches), could take pictures with a medium resolution of approximately 8 meters for wide coverage of the lunar surface. The film would be developed on board the spacecraft using the proven Eastman Kodak "Bimat" method. The film would be in contact with a web containing a single-solution dry processing chemical, which eliminated the need to use wet chemicals. Developed automatically and wound onto a storage spool, the processed film could then be "read out" and transmitted by the spacecraft's communications subsystem to receiving stations of JPL's worldwide Deep Space Network, which was developed for communication with spacefaring vehicles destined for the moon and beyond.36
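The quoted resolutions are consistent with simple scale arithmetic. In the hedged sketch below, the ratio of altitude to focal length gives the number of metres of lunar surface imaged onto each millimetre of film, and dividing by the film's resolving power yields the smallest resolvable ground feature; the resolving power of roughly 76 line pairs per millimetre is an assumed figure chosen to make the arithmetic come out near the published 1-meter and 8-meter values, not a specification taken from the text.

```python
def ground_resolution_m(altitude_m: float, focal_length_mm: float,
                        film_lp_per_mm: float) -> float:
    """Smallest resolvable ground feature for a simple frame camera:
    image scale = altitude / focal length, so each millimetre of film covers
    (scale / 1000) metres of ground; divide by the film's line pairs per mm."""
    scale = (altitude_m * 1000.0) / focal_length_mm   # both lengths in millimetres
    metres_per_film_mm = scale / 1000.0
    return metres_per_film_mm / film_lp_per_mm

# Assumed resolving power of ~76 lp/mm (not stated in the source).
print(ground_resolution_m(46_000, 610, 76))  # ~1.0 m, high-resolution lens
print(ground_resolution_m(46_000, 80, 76))   # ~7.6 m, medium-resolution lens
```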
How Boeing had the good sense to propose an attitude-stabilized platform based on the Eastman Kodak camera, rather than a rotator with a yet-to-be-developed camera, is not totally clear. Langley engineers had conversed with representatives of all the interested bidders, so Boeing's people might well have picked up on Langley's concerns about the quality of photographs from spinners. The other bidders, especially STL and Hughes, with their expertise in spin-stabilized spacecraft, might also have picked up on those concerns but were too confident in the type of rotationally stabilized system they had been working on to change course in midstream.
Furthermore, Boeing had been working closely with RCA, which for a time was also thinking about submitting a proposal for Lunar Orbiter. RCA's idea was a lightweight (200-kilogram), three-axis, attitude-stabilized, and camera-bearing payload that could be injected into lunar orbit as part of a Ranger-type probe. A lunar orbiter study group, chaired by Lee Scherer at NASA headquarters, had evaluated RCA's approach in October 1962, however, and found it lacking. It was too expensive ($20.4 million for flying only three spacecraft), and its proposed vidicon television unit could not cover the lunar surface either in the detail or the wide panoramas NASA wanted.37
Boeing knew all about this rejected RCA approach. After talking to Langley's engineers, the company shrewdly decided to stay with an attitude stabilized orbiter but to dump the use of the inadequate vidicon television. Boeing replaced the television system with an instrument with a proven track record in planetary reconnaissance photography: the Eastman Kodak spy camera.38
On 20 December 1963, two weeks after the Source Evaluation Board made its formal recommendation to Administrator James Webb in Washington, NASA announced that it would be negotiating with Boeing as prime contractor for the Lunar Orbiter project. Along with the excellence of its proposed spacecraft design and Kodak camera, NASA singled out the strength of Boeing's commitment to the project and its corporate capabilities to complete it on schedule without relying on many subcontractors. Still, the choice was a bit ironic. Only 14 months earlier, the Scherer study group had rejected RCA's approach in favor of a study of a spin-stabilized spacecraft proposed by STL. Now Boeing had outmaneuvered its competition by proposing a spacecraft that incorporated essential features of the rejected RCA concept and almost none from STL's previously accepted one.
Boeing won the contract even though it asked for considerably more money than any of the other bidders. The lowest bid, from Hughes, was $41,495,339, less than half of Boeing's $83,562,199, a figure that would quickly rise when the work started. Not surprisingly, NASA faced some congressional criticism and had to defend its choice. The agency justified its selection by referring confidently to what Boeing alone proposed to do to ensure protection of Lunar Orbiter's photographic film from the hazards of solar radiation.39
This was a technical detail that deeply concerned LOPO. Experiments conducted by Boeing and by Dr. Trutz Foelsche, a Langley scientist in the Space Mechanics (formerly Theoretical Mechanics) Division who specialized in the study of space radiation effects, suggested that even small doses of radiation from solar flares could fog ordinary high-speed photographic film. This would be true especially in the case of an instrumented probe like Lunar Orbiter, which had thin exterior vehicular shielding. Even if the thickness of the shielding around the film was increased tenfold (from 1 g/cm2 to 10 g/cm2), Foelsche judged that high-speed film would not make it through a significant solar-particle event without serious damage.40 Thus, something extraordinary had to be done to protect the high-speed film. A better solution was not to use high-speed film at all.
As NASA explained successfully to its critics, the other bidders for the Lunar Orbiter contract relied on high-speed film and faster shutter speeds for their on-board photographic subsystems. Only Boeing did not. When delegates from STL, Hughes, Martin, and Lockheed were asked at a bidders' briefing in November 1963 about what would happen to their film if a solar event occurred during an orbiter mission, they all had to admit that the film would be damaged seriously. Only Boeing could claim otherwise. Even with minimal shielding, the less sensitive, low-speed film used by the Kodak camera would not be fogged by high-energy radiation, not even if the spacecraft moved through the Van Allen radiation belts.41 This, indeed, proved to be the case. During the third mission of Lunar Orbiter in February 1967, a solar flare with a high amount of optical activity did occur, but the film passed through it unspoiled.42
Negotiations with Boeing did not take long. Formal negotiations began on 17 March 1964, and ended just four days later. On 7 May Administrator Webb signed the document that made Lunar Orbiter an official NASA commitment. Hopes were high. But in the cynical months of 1964, with Ranger's setbacks still making headlines and critics still faulting NASA for failing to match Soviet achievements in space, everyone doubted whether Lunar Orbiter would be ready for its first scheduled flight to the moon in just two years.
Large projects are run by only a handful of people. Four or five key individuals delegate jobs and responsibilities to others. This was certainly true for Lunar Orbiter. From start to finish, Langley's LOPO remained a small organization; its original nucleus of 9 staff members never grew any larger than 50 professionals. Langley management knew that keeping LOPO's staff small meant fewer people in need of positions when the project ended. If all the positions were built into a large project office, many careers would be out on a limb; a much safer organizational method was for a small project office to draw people from other research and technical divisions to assist the project as needed.43
In the case of Lunar Orbiter, four men ran the project: Cliff Nelson, the project manager; Israel Taback, who was in charge of all activities leading to the production and testing of the spacecraft; Bill Boyer, who was responsible for planning and integrating launch and flight operations; and James V. Martin, the assistant project manager. Nelson had accepted the assignment with Thompson's assurance that he would be given wide latitude in choosing the men and women he wanted to work with him in the project office. As a result, virtually all of his top people were hand-picked.
The one significant exception was his chief assistant, Jim Martin. In September 1964, the Langley assistant director responsible for the project office, Gene Draley, brought in Martin to help Nelson cope with some of the stickier details of Lunar Orbiter's management. A senior manager in charge of Republic Aviation's space systems requirements, Martin had a tremendous ability for anticipating business management problems and plenty of experience taking care of them. Furthermore, he was a well-organized and skillful executive who could make schedules, set due dates, and closely track the progress of the contractors and subcontractors. This "paper" management of a major project was troublesome for Cliff Nelson, a quiet people-oriented person. Draley knew about taskmaster Martin from Republic's involvement in Project Fire and was hopeful that Martin's acerbity and business-mindedness would complement Nelson's good-heartedness and greater technical depth, especially in dealings with contractors.
Because Cliff Nelson and Jim Martin were so entirely opposite in personality, they did occasionally clash, which caused a few internal problems in LOPO. On the whole, however, the alliance worked quite well, although it was forced by Langley management. Nelson generally oversaw the whole endeavor and made sure that everybody worked together as a team. For the monitoring of the day-to-day progress of the project's many operations, Nelson relied on the dynamic Martin. For example, when problems arose with the motion-compensation apparatus for the Kodak camera, Martin went to the contractor's plant to assess the situation and decided that its management was not placing enough emphasis on following a schedule. Martin acted tough, pounded on the table, and made the contractor put workable schedules together quickly. When gentler persuasion was called for or subtler interpersonal relationships were involved, Nelson was the person for the job. Martin, who was technically competent but not as technically talented as Nelson, also deferred to the project manager when a decision required particularly complex engineering analysis. Thus, the two men worked together for the overall betterment of Lunar Orbiter.44
Placing an excellent person with just the right specialization in just the right job was one of the most important elements behind the success of Lunar Orbiter, and for this eminently sensible approach to project management, Cliff Nelson and Floyd Thompson deserve the lion's share of credit. Both men cultivated a management style that emphasized direct dealings with people and often ignored formal organizational channels. Both stressed the importance of teamwork and would not tolerate any individual, however talented, willfully undermining the esprit de corps. Before filling any position in the project office, Nelson gave the selection much thought. He questioned whether the people under consideration were compatible with others already in his project organization. He wanted to know whether candidates were goal-oriented -willing to do whatever was necessary (working overtime or traveling) to complete the project.45 Because Langley possessed so many employees who had been working at the center for many years, the track record of most people was either well known or easy to ascertain. Given the outstanding performance of Lunar Orbiter and the testimonies about an exceptionally healthy work environment in the project office, Nelson did an excellent job predicting who would make a productive member of the project team.46
Considering Langley's historic emphasis on fundamental applied aeronautical research, it might seem surprising that Langley scientists and engineers did not try to hide inside the dark return passage of a wind tunnel rather than be diverted into a spaceflight project like Lunar Orbiter. As has been discussed, some researchers at Langley (and agency-wide) objected to and resisted involvement with project work. The Surveyor project at JPL had suffered from staff members' reluctance to leave their own specialties to work on a space project. However, by the early 1960s the enthusiasm for spaceflight ran so rampant that it was not hard to staff a space project office. All the individuals who joined LOPO at Langley came enthusiastically; otherwise Cliff Nelson would not have had them. Israel Taback, who had been running the Communications and Control Branch of IRD, remembers having become distressed with the thickening of what he calls "the paper forest": the preparation of five-year plans, ten-year plans, and other lengthy documents needed to justify NASA's budget requests. The work he had been doing with airplanes and aerospace vehicles was interesting (he had just finished providing much of the flight instrumentation for the X-15 program), but not so interesting that he wanted to turn down Cliff Nelson's offer to join Lunar Orbiter. "The project was brand new and sounded much more exciting than what I had been doing," Taback remembers. It appealed to him also because of its high visibility both inside and outside the center. Everyone had to recognize the importance of a project directly related to the national goal of landing a man on the moon. 47
Norman L. Crabill, the head of LOPO's mission design team, also decided to join the project. On a Friday afternoon, he had received the word that one person from his branch of the Applied Materials and Physics Division would have to be named by the following Monday as a transfer to LOPO; as branch head, Crabill himself would have to make the choice. That weekend he asked himself, "What's your own future, Crabill? This is space. If you don't step up to this, what's your next chance? You've already decided not to go with the guys to Houston." He immediately knew whom to transfer: "It was me." That was how he "got into the space business." And in his opinion, it was "the best thing" that he ever did.48
Cliff Nelson's office had the good sense to realize that monitoring the prime contractor did not entail doing Boeing's work for Boeing. Nelson approached the management of Lunar Orbiter more practically: the contractor was "to perform the work at hand while the field center retained responsibility for overseeing his progress and assuring that the job was done according to the terms of the contract." For Lunar Orbiter, this philosophy meant specifically that the project office would have to keep "a continuing watch on the progress of the various components, subsystems, and the whole spacecraft system during the different phases of designing, fabricating and testing them."49 Frequent meetings would take place between Nelson and his staff and their counterparts at Boeing to discuss all critical matters, but Langley would not assign all the jobs, solve all the problems, or micromanage every detail of the contractor's work.
This philosophy sat well with Robert J. Helberg, head of Boeing's Lunar Orbiter team. Helberg had recently finished directing the company's work on the Bomarc missile, making him a natural choice for manager of Boeing's next space venture. The Swedish-born Helberg was absolutely straightforward, and all his people respected him immensely -as would everyone in LOPO. He and fellow Swede Cliff Nelson got along famously. Their relaxed relationship set the tone for interaction between Langley and Boeing. Ideas and concerns passed freely back and forth between the project offices. Nelson and his people "never had to fear the contractor was just telling [them] a lie to make money," and Helberg and his tightly knit, 220-member Lunar Orbiter team never had to complain about uncaring, paper-shuffling bureaucrats who were mainly interested in dotting all the i's and crossing all the t's and making sure that nothing illegal was done that could bother government auditors and put their necks in a wringer.50
The Langley/NASA headquarters relationship was also harmonious and effective. This was in sharp contrast to the relationship between JPL and headquarters during the Surveyor project. Initially, JPL had tried to monitor the Surveyor contractor, Hughes, with only a small staff that provided little on-site technical direction; however, because of unclear objectives, the open-ended nature of the project (such basic things as which experiment packages would be included on the Surveyor spacecraft were uncertain), and a too highly diffused project organization within Hughes, JPL's "laissez-faire" approach to project management did not work. As the problems snowballed, Cortright found it necessary to intervene and compelled JPL to assign a regiment of on-site supervisors to watch over every detail of the work being done by Hughes. Thus, as one analyst of Surveyor's management has observed, "the responsibility for overall spacecraft development was gradually retrieved from Hughes by JPL, thereby altering significantly the respective roles of the field center and the spacecraft systems contractors."51
Nothing so unfortunate happened during Lunar Orbiter, partly because NASA had learned from the false steps and outright mistakes made in the management of Surveyor. For example, NASA now knew that before implementing a project, everyone involved must take part in extensive preliminary discussions. These conversations ensured that the project's goals were certain and each party's responsibilities clear. Each office should expect maximum cooperation and minimal unnecessary interference from the others. Before Lunar Orbiter was under way, this excellent groundwork had been laid.
As has been suggested by a 1972 study done by the National Academy of Public Administration, the Lunar Orbiter project can serve as a model of the ideal relationship between a prime contractor, a project office, a field center, a program office, and headquarters. From start to finish nearly everything important about the interrelationship worked out superbly in Lunar Orbiter. According to LOPO's Israel Taback, "Everyone worked together harmoniously as a team whether they were government, from headquarters or from Langley, or from Boeing." No one tried to take advantage of rank or to exert any undue authority because of an official title or organizational affiliation.52 That is not to say that problems never occurred in the management of Lunar Orbiter. In any large and complex technological project involving several parties, some conflicts are bound to arise. The key to project success lies in how differences are resolved.
The most fundamental issue in the premission planning for Lunar Orbiter was how the moon was to be photographed. Would the photography be "concentrated" on a predetermined single target, or would it be "distributed" over several selected targets across the moon's surface? On the answer to this basic question depended the successful integration of the entire mission plan for Lunar Orbiter.
For Lunar Orbiter, as with any other spaceflight program, mission planning involved the establishment of a complicated sequence of events: When should the spacecraft be launched? When does the launch window open and close? On what trajectory should the spacecraft arrive in lunar orbit? How long will it take the spacecraft to get to the moon? How and when should orbital "injection" take place? How and when should the spacecraft get to its target(s), and at what altitude above the lunar surface should it take the pictures? Where does the spacecraft need to be relative to the sun for taking optimal pictures of the lunar surface? Answering these questions also meant that NASA's mission planners had to define the lunar orbits, determine how accurately those orbits could be navigated, and know the fuel requirements. The complete mission profile had to be ready months before launch. And before the critical details of the profile could be made ready, NASA had to select the targeted areas on the lunar surface and decide how many of them were to be photographed during the flight of a single orbiter.53
Originally NASA's plan was to conduct a concentrated mission. The Lunar Orbiter would go up and target a single site of limited dimensions.
Top NASA officials listen to a LOPO briefing at Langley in December 1966. Sitting to the far right with his hand on his chin is Floyd Thompson. To the left sits Dr. George Mueller, NASA associate administrator for Manned Space Flight. On the wall is a diagram of the sites selected for the "concentrated mission." The chart below illustrates the primary area of photographic interest.
The country's leading astrogeologists would help in the site selection by identifying the smoothest, most attractive possibilities for a manned lunar landing. The U.S. Geological Survey had drawn huge, detailed maps of the lunar surface from the best available telescopic observations. With these maps, NASA would select one site as the prime target for each of the five Lunar Orbiter missions. During a mission, the spacecraft would travel into orbit and move over the target at the "perilune," or lowest point in the orbit (approximately 50 kilometers [31.1 miles] above the surface); then it would start taking pictures. Successive orbits would be close together longitudinally, and the Lunar Orbiter's camera would resume photographing the surface each time it passed over the site. The high-resolution lens would take a 1-meter-resolution picture of a small area (4 x 16 kilometers) while at exactly the same time, the medium-resolution lens would take an 8-meter-resolution picture of a wider area (32 x 37 kilometers). The high-resolution lens would photograph at such a rapid interval that the pictures would just barely overlap. The wide-angle pictures, taken by the medium-resolution lens, would have a conveniently wide overlap. All the camera exposures would take place in 24 hours, thus minimizing the threat to the film from a solar flare. The camera's capacity of roughly 200 photographic frames would be devoted to one location. The result would be one area shot in adjacent, overlapping strips. By putting the strips together, NASA would have a picture of a central 1-meter-resolution area that was surrounded by a broader 8-meter-resolution area -in other words, it would be one large, rich stereoscopic picture of a choice lunar landing site. NASA would learn much about that one ideal place, and the Apollo program would be well served.54
The plan sounded fine to everyone, at least in the beginning. Langley's Request for Proposals had specified the concentrated mission, and Boeing had submitted the winning proposal based on that mission plan. Moreover, intensive, short-term photography like that called for in a concentrated mission was exactly what Eastman Kodak's high-resolution camera system had been designed for. The camera was a derivative of a spy satellite photo system created specifically for earth reconnaissance missions specified by the DOD.***
As LOPO's mission planners gave the plan more thought, however, they realized that the concentrated mission approach was flawed. Norman Crabill, Langley's head of mission integration for Lunar Orbiter, remembers the question he began to ask himself, "What happens if only one of these missions is going to work? This was in the era of Ranger failures and Surveyor slippage. When you shoot something, you had only a twenty percent probability that it was going to work. It was that bad." On that premise, NASA planned to fly five Lunar Orbiters, hoping that one would operate as it should. "Suppose we go up there and shoot all we [have] on one site, and it turns out to be no good?" fretted Crabill, and others began to worry as well. What if that site was not as smooth as it appeared on the U.S. Geological Survey maps, or a gravitational anomaly or orbital perturbation was present, making that particular area of the moon unsafe for a lunar landing? And what if that Lunar Orbiter turned out to be the only one to work? What then?55
In late 1964, over the course of several weeks, LOPO became more convinced that it should not be putting all its eggs in one basket. "We developed the philosophy that we really didn't want to do the concentrated mission; what we really wanted to do was what we called the 'distributed mission,'" recalls Crabill. The advantage of the distributed mission was that it would enable NASA to inspect several choice targets in the Apollo landing zone with only one spacecraft.56
In early 1965, Norm Crabill and Tom Young of the LOPO mission integration team traveled to the office of the U.S. Geological Survey in Flagstaff, Arizona. There, the Langley engineers consulted with U.S. government astrogeologists John F. McCauley, Lawrence Rowan, and Harold Masursky. Jack McCauley was Flagstaff's top man at the time, but he assigned Larry Rowan, "a young and upcoming guy, very reasonable and very knowledgeable," the job of heading the Flagstaff review of the Lunar Orbiter site selection problem. "We sat down with Rowan at a table with these big lunar charts," and Rowan politely reminded the Langley duo that "the dark areas on the moon were the smoothest." Rowan then pointed to the darkest places across the entire face of the moon.57
Rowan identified 10 good targets. When Crabill and Young made orbital calculations, they became excited. In a few moments, they had realized that they wanted to do the distributed mission. Rowan and his colleagues in Flagstaff also became excited about the prospects. This was undoubtedly the way to catch as many landing sites as possible. The entire Apollo zone of interest was ±45° longitude and ±5° latitude, along the equatorial region of the facing, or near side of the moon. Within that zone, the area that could be photographed via a concentrated mission was small. A single Lunar Orbiter that could photograph 10 sites of that size all within that region would be much more effective. If the data showed that a site chosen by the astrogeologists was not suitable, NASA would have excellent photographic coverage of nine other prime sites. In summary, the distributed mode would give NASA the flexibility to ensure that Lunar Orbiter would provide the landing site information needed by Apollo even if only one Lunar Orbiter mission proved successful.
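A rough spherical-area estimate makes the attraction of the distributed mission concrete. Using the mean lunar radius, the ±45-degree by ±5-degree Apollo zone spans on the order of 800,000 square kilometers, hundreds of times the 32 x 37-kilometer footprint of a single medium-resolution frame; the sketch below is a back-of-envelope illustration, not a calculation drawn from the project record.

```python
import math

R_MOON_KM = 1737.4  # mean lunar radius

def zone_area_km2(half_lat_deg: float, half_lon_deg: float) -> float:
    """Area of a band spanning +/- half_lat_deg latitude and +/- half_lon_deg
    longitude on a sphere: A = R^2 * delta_lon * (sin(lat_max) - sin(lat_min))."""
    delta_lon = math.radians(2.0 * half_lon_deg)
    return R_MOON_KM ** 2 * delta_lon * 2.0 * math.sin(math.radians(half_lat_deg))

apollo_zone = zone_area_km2(5.0, 45.0)   # ~826,000 km^2
frame_km2 = 32 * 37                      # one medium-resolution frame (from the text)
print(f"Apollo zone: ~{apollo_zone:,.0f} km^2")
print(f"Frame footprints needed, ignoring overlap: ~{apollo_zone / frame_km2:,.0f}")
```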
But there was one big hitch: Eastman Kodak's photo system was not designed for the distributed mission. It was designed for the concentrated mission in which all the photography would involve just one site and be loaded, shot, and developed in 24 hours. If Lunar Orbiter must photograph 10 sites, a mission would last at least two weeks. The film system was designed to sustain operations for only a day or two; if the mission lasted longer than that, the Bimat film would stick together, the exposed parts of it would dry out, the film would get stuck in the loops, and the photographic mission would be completely ruined.
When Boeing first heard that NASA had changed its mind and now wanted to do the distributed mission, Helberg and his men balked. According to LOPO's Norman Crabill, Boeing's representatives said, "Look, we understand you want to do this. But, wait. The system was designed, tested, used, and proven in the concentrated mission mode. You can't change it now because it wasn't designed to have the Bimat film in contact for long periods of time. In two weeks' time, some of the Bimat is just going to go, pfft! It's just going to fail!" Boeing understood the good sense of the distributed mission, but as the prime contractor, the company faced a classic technological dilemma. The customer, NASA, wanted to use the system to do something it was not designed to do. This could possibly cause a disastrous failure. Boeing had no recourse but to advise the customer that what it wanted to do could endanger the entire mission.58
The Langley engineers wanted to know whether Boeing could solve the film problem. "We don't know for sure," the Boeing staff replied, "and we don't have the time to find out." NASA suggested that Boeing conduct tests to obtain quantitative data that would define the limits of the film system. Boeing's response was "That's not in the contract."59 The legal documents specified that the Lunar Orbiter should have the capacity to conduct the concentrated mission. If NASA now wanted to change the requirements for developing the Orbiter, then a new contract would have to be negotiated. A stalemate resulted on this issue and lasted until early 1965. The first launch was only a year away.
If LOPO hoped to persuade Boeing to accept the idea of changing a basic mission requirement, it had to know the difference in reliability between the distributed and concentrated missions. If analysis showed that the distributed mission would be far less reliable, then even LOPO might want to reconsider and proceed with the concentrated mission. Crabill gave the job of obtaining this information to Tom Young, a young researcher from the Applied Materials and Physics Division. Crabill had specifically requested that Young be reassigned to LOPO mission integration because, in his opinion, Young was "the brightest guy [he] knew." On the day Young had reported to work with LOPO, Crabill had given him "a big pile of stuff to read," thinking he would be busy and, as Crabill puts it, "out of my hair for quite a while." But two days later, Young returned, having already made his way through all the material. When given the job of the comparative mission reliability analysis, Young went to Boeing in Seattle. In less than two weeks, he found what he needed to know and figured out the percentages: the reliability for the concentrated mission was an unspectacular 60 percent, but for the distributed mission it was only slightly worse, 58 percent. "It was an insignificant difference," Crabill thought when he heard Young's numbers, especially because nobody then really knew how to do that type of analysis. "We didn't gag on the fact that it was pretty low anyway, but we really wanted to do this distributed mission." The Langley researchers decided that the distributed mission was a sensible choice, if the Kodak system could be made to last for the extra time and if Boeing could be persuaded to go along with the mission change.60
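The source does not record how Young structured his analysis, but the flavor of such a comparison can be conveyed with a toy serial-reliability model in which the mission succeeds only if every subsystem works, so subsystem reliabilities multiply. The numbers below are invented solely to show how a small additional derating for two weeks of film-system operation nudges roughly 60 percent down to roughly 58 percent.

```python
from math import prod

# Toy serial-reliability model; every figure here is hypothetical.
concentrated = {
    "launch and lunar-orbit injection": 0.90,
    "spacecraft bus": 0.82,
    "photo subsystem, 1-2 days of use": 0.81,
}
p_concentrated = prod(concentrated.values())
print(round(p_concentrated, 2))                      # ~0.60

# Distributed mission: same chain plus an extra derating factor for running
# the film system over roughly two weeks instead of one or two days.
extended_film_ops = 0.97
print(round(p_concentrated * extended_film_ops, 2))  # ~0.58
```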
LOPO hoped that Young's analysis would prove to Boeing that no essential difference in reliability existed between the two types of missions, but Boeing continued to insist that the concentrated mission was the legal requirement, not the distributed mission. The dispute was a classic case of implementing a project before even the customer was completely sure of what that project should accomplish. In such a situation, the only sensible thing to do was to be flexible.
The problem for Boeing, of course, was that such flexibility might cost the company its financial incentives. If a Lunar Orbiter mission failed, the company worried that it would not be paid the bonus money promised in the contract. Helberg and Nelson discussed this issue in private conversations. Floyd Thompson participated in many of these talks and even visited Seattle to try to facilitate an agreement. In the end, Langley convinced Helberg that the change from a concentrated to a distributed mission would not impact Boeing's incentives. If a mission failed because of the change, LOPO promised that it would assume the responsibility. Boeing would have done its best according to the government request and instructions -and for that they would not be penalized. 61
The missions, however, would not fail. NASA and Boeing would handle the technical problems involving the camera by testing the system to ascertain the definite limits of its reliable operation. From Kodak, the government and the prime contractor obtained hard data regarding the length of time the film could remain set in one place before the curls or bends in the film around the loops became permanent and the torque required to advance the film exceeded the capability of the motor. From these tests, Boeing and LOPO established a set of mission "rules" that had to be followed precisely. For example, to keep the system working, Lunar Orbiter mission controllers at JPL had to advance the film one frame every eight hours. The rules even required that film sometimes be advanced without opening the door of the camera lens. Mission controllers called these nonexposure shots their "film-set frames" and the schedule of photographs their "film budget."62
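The bookkeeping behind the "film budget" can be illustrated with a small scheduling sketch; the exposure times and mission length used below are hypothetical, and only the eight-hour rule and the roughly 200-frame capacity come from the text. Given a list of planned exposures, the routine counts how many film-set frames must be spent so that the film never sits still for more than eight hours.

```python
from typing import List

def film_set_frames_needed(photo_times_h: List[float], mission_end_h: float,
                           max_gap_h: float = 8.0) -> int:
    """Count the extra 'film-set' (unexposed) frame advances needed so the film
    never sits still longer than max_gap_h hours between real exposures."""
    extra = 0
    last = 0.0
    for t in sorted(photo_times_h) + [mission_end_h]:
        gap = t - last
        while gap > max_gap_h:       # advance the film once every max_gap_h hours
            extra += 1
            gap -= max_gap_h
        last = t
    return extra

# Hypothetical schedule: ten target passes spread over a two-week (336-hour) mission.
targets_h = [12, 36, 60, 84, 108, 132, 156, 180, 204, 228]
print(film_set_frames_needed(targets_h, mission_end_h=336.0))  # film-set frames charged against the budget
```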
As a result of the film rules, the distributed mission turned out to be a much busier operation than a concentrated mission would have been. Each time a photograph was taken, including film-set frames, the spacecraft had to be maneuvered. Each maneuver required a command from mission control. LOPO staff worried about the ability of the spacecraft to execute so many maneuvers over such a prolonged period. They feared something would go wrong during a maneuver that would cause them to lose control of the spacecraft. Lunar Orbiter 1, however, flawlessly executed an astounding number of commands, and LOPO staff were able to control spacecraft attitude during all 374 maneuvers.63
Ultimately, the trust between Langley and Boeing allowed each to take the risk of changing to a distributed mission. Boeing trusted Langley to assume responsibility if the mission failed, and Langley trusted Boeing to put its best effort into making the revised plan a success. Had either not fulfilled its promise to the other, Lunar Orbiter would not have achieved its outstanding record.
Simple as this diagram of Lunar Orbiter (left) may look, no spacecraft in NASA history operated more successfully than Lunar Orbiter. Below, Lunar Orbiter I goes through a final inspection in the NASA Hangar S clean room at Kennedy Space Center prior to launch on 10 August 1966. The spacecraft was mounted on a three-axis test stand with its solar panels deployed and high-gain dish antenna extended from the side.
The switch to the distributed mission was not the only instance during the Lunar Orbiter mission when contract specifications were jettisoned to pursue a promising idea. Boeing engineers realized that the Lunar Orbiter project presented a unique opportunity for photographing the earth. When the LOPO staff heard this idea, they were all for it, but Helberg and Boeing management rejected the plan. Turning the spacecraft around so that its camera could catch a quick view of the earth tangential to the moon's surface entailed technical difficulties, including the danger that, once the spacecraft's orientation was changed, mission controllers could lose command of the spacecraft. Despite the risk, NASA urged Boeing to incorporate the maneuver in the mission plan for Lunar Orbiter 1. Helberg refused.64
In some projects, that might have been the end of the matter. People would have been forced to forget the idea and to live within the circumscribed world of what had been legally agreed upon. Langley, however, was not about to give up on this exciting opportunity. Cliff Nelson, Floyd Thompson, and Lee Scherer went to mission control at JPL to talk to Helberg and at last convinced him that he was being too cautious -that "the picture was worth the risk." If any mishap occurred with the spacecraft during the maneuver, NASA again promised that Boeing would still receive compensation and part of its incentive for taking the risk. The enthusiasm of his own staff for the undertaking also influenced Helberg in his final decision to take the picture.65
On 23 August 1966, just as Lunar Orbiter I was about to pass behind the moon, mission controllers executed the necessary maneuvers to point the camera away from the lunar surface and toward the earth. The result was the world's first view of the earth from the vicinity of the moon. It was called "the picture of the century" and "the greatest shot taken since the invention of photography."****
Not even the color photos of the earth taken during the Apollo missions surpassed the impact of this first image of our planet as a little island of life floating in the black and infinite sea of space.66
Lunar Orbiter defied all the probability studies. All five missions worked extraordinarily well, and with the minor exception of a short delay in the launch of Lunar Orbiter I -the Eastman Kodak camera was not ready - all the missions were on schedule. The launches were three months apart with the first taking place in August 1966 and the last in August 1967. This virtually perfect flight record was a remarkable achievement, especially considering that Langley had never before managed any sort of flight program into deep space.
Lunar Orbiter accomplished what it was designed to do, and more. Its camera took 1654 photographs. More than half of these (840) were of the proposed Apollo landing sites. Lunar Orbiters I, II, and III took these site pictures from low-flight altitudes, thereby providing detailed coverage of 22 select areas along the equatorial region of the near side of the moon. One of the eight sites scrutinized by Lunar Orbiters II and III was a very smooth area in the Sea of Tranquility. A few years later, in July 1969, Apollo 11 commander Neil Armstrong would navigate the lunar module Eagle to a landing on this site.67
By the end of the third Lunar Orbiter mission, all the photographs needed to cover the Apollo landing sites had been taken. NASA was then free to redesign the last two missions, move away from the pressing engineering objective imposed by Apollo, and go on to explore other regions of the moon for the benefit of science. Eight hundred and eight of the remaining 814 pictures returned by Lunar Orbiters IV and V focused on the rest of the near side, the polar regions, and the mysterious far side of the moon. These were not the first photographs of the "dark side"; a Soviet space probe, Zond III, had taken pictures of it during a fly-by into a solar orbit a year earlier, in July 1965. But the Lunar Orbiter photos were higher quality than the Russian pictures and illuminated some lunarscapes that had never before been seen by the human eye. The six remaining photos were of the spectacular look back at the distant earth. By the time all the photos were taken, about 99 percent of the moon's surface had been covered.
When each Lunar Orbiter completed its photographic mission, the spacecraft continued its flight to gather clues to the nature of the lunar gravitational environment. NASA found these clues valuable in the planning of the Apollo flights. Telemetry data clearly indicated that the moon's gravitational pull was not uniform. The slight dips in the path of the Lunar Orbiters as they passed over certain areas of the moon's surface were caused by gravitational perturbations, which in turn were caused by the mascons, concentrations of unusually dense mass lying beneath the lunar surface.
The extended missions of the Lunar Orbiters also helped to confirm that radiation levels near the moon were quite low and posed no danger to astronauts unless a major solar flare occurred while they were exposed on the lunar surface. A few months after each Lunar Orbiter mission, NASA deliberately crashed the spacecraft into the lunar surface to study lunar impacts and their seismic consequences. Destroying the spacecraft before it deteriorated and mission controllers had lost command of it ensured that it would not wander into the path of some future mission.68
Whether the Apollo landings could have been made successfully without the photographs from Lunar Orbiter is a difficult question to answer. Without the photos, the manned landings could certainly still have been attempted. In addition to the photographic maps drawn from telescopic observation, engineers could use some good pictures taken from Ranger and Surveyor to guide them. However, the detailed photographic coverage of 22 possible landing sites definitely made NASA's final selection of ideal sites much easier and the pinpointing of landing spots possible.
Furthermore, Lunar Orbiter also contributed important photometric information that proved vital to the Apollo program. Photometry involves the science of measuring the intensity of light. Lunar Orbiter planners had to decide where to position the camera to have the best light for taking the high-resolution photographs. When we take pictures on earth, we normally want to have the sun behind us so it is shining directly on the target. But a photo taken of the lunar surface in these same circumstances produces a peculiar photometric function: the moon looks flat. Even minor topographical features are indistinguishable because of the intensity of the sunlight reflecting from the micrometeorite-filled lunar surface. The engineers in LOPO had to determine the best position for photographing the moon. After studying the problem (Taback, Crabill, and Young led the attack on this problem), LOPO's answer was that the sun should indeed be behind the spacecraft, but photographs should be taken when the sun was only 15 degrees above the horizon.69
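The reasoning behind the low sun angle is essentially shadow geometry: the lower the sun, the longer the shadows and the easier it is to see relief. A minimal sketch (feature height chosen only for illustration) computes shadow length as height divided by the tangent of the sun's elevation, showing why a 15-degree sun throws usable shadows while a high sun washes the relief out.

```python
import math

def shadow_length_m(feature_height_m: float, sun_elevation_deg: float) -> float:
    """Length of the shadow cast by a surface feature:
    shadow = height / tan(sun elevation)."""
    return feature_height_m / math.tan(math.radians(sun_elevation_deg))

# A 1-metre boulder (illustrative size) under different sun elevations.
for elev_deg in (5, 15, 45, 80):
    print(elev_deg, round(shadow_length_m(1.0, elev_deg), 2))
# 5 deg -> ~11.4 m, 15 deg -> ~3.7 m, 45 deg -> 1.0 m, 80 deg -> ~0.18 m
```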
Long before it was time for the first Apollo launch, LOPO's handling of the lunar photometric function was common knowledge throughout NASA and the aerospace industry. The BellComm scientists and engineers who reviewed Apollo planning quickly realized that astronauts approaching the moon to make a landing needed, like Lunar Orbiter, to be in the best position for viewing the moon's topography. Although a computer program would pinpoint the Apollo landing site, the computer's choice might not be suitable. If that was the case, astronauts would have to rely on their own eyes to choose a spot. If the sun was in the wrong position, they would not be able to make out craters and boulders, the surface would appear deceptively flat, and the choice might be disastrous. Apollo 11 commander Neil Armstrong did not like the spot picked by the computer for the Eagle landing. Because NASA had planned for him to be in the best viewing position relative to the sun, Armstrong could see that the place was "littered with boulders the size of Volkswagens." So he flew on. He had to go another 1500 meters before he saw a spot where he could set the lunar module down safely.70
NASA might have considered the special photometric functions involved in viewing the moon during Apollo missions without Lunar Orbiter, but the experience of the Lunar Orbiter missions took the guesswork out of the calculations. NASA knew that its astronauts would be able to see what they needed to see to avoid surface hazards. This is a little-known but important contribution from Lunar Orbiter.
In the early 1970s Erasmus H. Kloman, a senior research associate with the National Academy of Public Administration, completed an extensive comparative investigation of NASA's handling of its Surveyor and Lunar Orbiter projects. After a lengthy review, NASA published a shortened and distilled version of Kloman's larger study as Unmanned Space Project Management: Surveyor and Lunar Orbiter. The result -even in the expurgated version, with all names of responsible individuals left out -was a penetrating study in "sharp contrasts" that should be required reading for every project manager in business, industry, or government.
Based on his analysis of Surveyor and Lunar Orbiter, Kloman concluded that project management has no secrets of success. The key elements are enthusiasm for the project, a clear understanding of the project's objective, and supportive and flexible interpersonal and interoffice relationships. The history of Surveyor and Lunar Orbiter, Kloman wrote, "serves primarily as a confirmation of old truths about the so-called basic principles of management rather than a revelation of new ones." Kloman writes that Langley achieved Lunar Orbiter's objectives by "playing it by the book." By this, Kloman meant that Langley applied those simple precepts of good management; he did not mean that success was achieved through a thoughtless and strict formula for success. Kloman understood that Langley's project engineers broke many rules and often improvised as they went along. Enthusiasm, understanding, support, and flexibility allowed project staff to adapt the mission to new information, ideas, or circumstances. "Whereas the Surveyor lessons include many illustrations of how 'not to' set out on a project or how to correct for early misdirections," Kloman argued, "Lunar Orbiter shows how good sound precepts and directions from the beginning can keep a project on track."71
Lunar Orbiter, however, owes much of its success to Surveyor. LOPO staff were able to learn from the mistakes made in the Surveyor project. NASA headquarters was responsible for some of these mistakes. The complexity of Surveyor was underestimated, unrealistic manpower and financial ceilings were imposed, an "unreasonably open-ended combination of scientific experiments for the payload" was insisted upon for too long, too many changes in the scope and objectives of the project were made, and the project was tied to the unreliable Centaur launch vehicle.72 NASA headquarters corrected these mistakes. In addition, Langley representatives learned from JPL's mistakes and problems. They talked at great length to JPL staff in Pasadena about Surveyor both before and after accepting the responsibility for Lunar Orbiter. From these conversations, Langley acquired a great deal of knowledge about the design and management of an unmanned space mission. JPL scientists and engineers even conducted an informal "space school" that helped to educate several members of LOPO and Boeing's team about key details of space mission design and operations.
The interpersonal skills of the individuals responsible for Lunar Orbiter, however, appear to have been the essential key to success. These skills centered more on the ability to work with other people than they did on what one might presume to be the more critical and esoteric managerial, conceptual, and technical abilities. In Kloman's words, "individual personal qualities and management capabilities can at times be a determining influence in overall project performance."73 Compatibility among the individual managers, Nelson and Helberg, and the ability of those managers to stimulate good working relationships between people proved a winning combination for Lunar Orbiter.
Norman Crabill made these comments about Lunar Orbiter's management: "We had some people who weren't afraid to use their own judgment instead of relying on rules. These people could think and find the essence of a problem, either by discovering the solution themselves or energizing the troops to come up with an alternative which would work. They were absolute naturals at that job."74
Lunar Orbiter was a pathfinder for Apollo, and it was an outstanding contribution by Langley Research Center to the early space program. The old NACA aeronautics laboratory proved not only that it could handle a major deep space mission, but also that it could achieve an extraordinary record of success that matched or surpassed anything yet tried by NASA. When the project ended and LOPO members went back into functional research divisions, Langley possessed a pool of experienced individuals who were ready, if the time came, to plan and manage yet another major project. That opportunity came quickly in the late 1960s with the inception of Viking, a much more complicated and challenging project designed to send unmanned reconnaissance orbiters and landing probes to Mars. When Viking was approved, NASA headquarters assigned the project to "those plumbers" at Langley. The old LOPO team formed the nucleus of Langley's much larger Viking Project Office. With this team, Langley would once again manage a project that would be virtually an unqualified success.
* Later in Apollo planning, engineers at the Manned Spacecraft Center in Houston thought that deployment of a penetrometer from the LEM during its final approach to landing would prove useful. The penetrometer would "sound" the anticipated target and thereby determine whether surface conditions were conducive to landing. Should surface conditions prove unsatisfactory, the LEM could be flown to another spot or the landing could be aborted. In the end, NASA deemed the experiment unnecessary. What the Surveyor missions found out about the nature of the lunar soil (that it resembled basalt and had the consistency of damp sand) made NASA so confident about the hardness of the surface that it decided this penetrometer experiment could be deleted. For more information, see Ivan D. Ertel and Roland W. Newkirk, The Apollo Spacecraft: A Chronology, vol. 4, NASA SP-4009 (Washington, 1978), p. 24
** Edgar Cortright and Oran Nicks would come to have more than a passing familiarity with the capabilities of Langley Research Center. In 1968, NASA would name Cortright to succeed Thompson as the center's director. Shortly thereafter, Cortright named Nicks as his deputy director. Both men then stayed at the center into the mid-1970s.
*** In the top-secret DOD system, the camera with the film inside apparently would reenter the atmosphere inside a heat-shielded package that parachuted down, was hooked, and was physically retrieved in midair (if all went as planned) by a specially equipped U.S. Air Force C-119 cargo airplane. It was obviously a very unsatisfactory system, but in the days before advanced electronic systems, it was the best high-resolution satellite reconnaissance system that modern technology could provide. Few NASA people were ever privy to many of the details of how the "black box" actually worked, because they did not have "the need to know." However, they figured that it had been designed, as one LOPO engineer has described in much oversimplified layman's terms, "so when a commander said, 'we've got the target', bop, take your snapshots, zap, zap, zap, get it down from orbit, retrieve it and bring it home, rush it off to Kodak, and get your pictures." (Norman Crabill interview with author, Hampton, Va., 28 August 1991.)
**** The unprecedented photo also provided the first oblique perspectives of the lunar surface. All other photographs taken during the first mission were shot from a position perpendicular to the surface and thus did not depict the moon in three dimensions. In subsequent missions, NASA made sure to include this sort of oblique photography. Following the first mission, Boeing prepared a booklet entitled Lunar Orbiter I - Photography (NASA Langley, 1965), which gave a detailed technical description of the earth-moon photographs; see especially pp. 64-71. | http://history.nasa.gov/SP-4308/ch10.htm | 13
11 | This shaded relief image of Mexico's Yucatan Peninsula shows a subtle, but unmistakable, indication of the Chicxulub impact crater. Most scientists now agree that this impact was the cause of the Cretaceous-Tertiary Extinction, the event 65 million years ago that marked the sudden extinction of the dinosaurs as well as the majority of life then on Earth.
Most of the peninsula is visible here, along with the island of Cozumel off the east coast. The Yucatan is a plateau composed mostly of limestone and is an area of very low relief, with elevations varying by less than a few hundred meters (about 500 feet). In this computer-enhanced image the topography has been greatly exaggerated to highlight a semicircular trough, the darker green arcing line at the upper left corner of the peninsula. This trough is only about 3 to 5 meters (10 to 15 feet) deep and about 5 kilometers (3 miles) wide, so subtle that if you walked across it you probably would not notice it; it is a surface expression of the crater's outer boundary. Scientists believe the impact, which was centered just off the coast in the Caribbean, altered the subsurface rocks such that the overlying limestone sediments, which formed later and erode very easily, would preferentially erode in the vicinity of the crater rim. This formed the trough as well as numerous sinkholes (called cenotes), which are visible as small circular depressions.
Two visualization methods were combined to produce the image: shading and color coding of topographic height. The shade image was derived by computing topographic slope in the northwest-southeast direction, so that northwestern slopes appear bright and southeastern slopes appear dark. Color coding is directly related to topographic height, with green at the lower elevations, rising through yellow and tan, to white at the highest elevations.
Elevation data used in this image were acquired by the Shuttle Radar Topography Mission (SRTM) aboard the Space Shuttle Endeavour, launched on Feb. 11, 2000. SRTM used the same radar instrument that comprised the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) that flew twice on the Space Shuttle Endeavour in 1994. SRTM was designed to collect 3-D measurements of the Earth's surface. To collect the 3-D data, engineers added a 60-meter (approximately 200-foot) mast, installed additional C-band and X-band antennas, and improved tracking and navigation devices. The mission is a cooperative project between NASA, the National Imagery and Mapping Agency (NIMA) of the U.S. Department of Defense and the German and Italian space agencies. It is managed by NASA's Jet Propulsion Laboratory, Pasadena, Calif., for NASA's Earth Science Enterprise, Washington, D.C.
Size: 261 by 162 kilometers (162 by 100 miles)
Location: 20.8 degrees North latitude, 89.3 degrees West longitude
Orientation: North toward the top, Mercator projection
Image Data: shaded and colored SRTM elevation model
Original Data Resolution: SRTM 1 arcsecond (about 30 meters or 98 feet)
Date Acquired: February 2000 | http://photojournal.jpl.nasa.gov/catalog/PIA03379 | 13 |
20 | The following table is not meant to be a complete list of ideas about the concept of multiplication. It is not meant to be definitive, but it does include the basic concepts about multiplication for middle school learners. The inclusion of the last two columns, about the definition of a prime number and whether or not 1 is considered a prime, shows that there are definitions adapted to teach school mathematics that teachers in the higher year levels need to revise. Note that branching and grouping, which make 1 not a prime number, can only model multiplication of whole numbers, unlike the rest of the models. Multiplication as repeated addition has launched a math war. Formal mathematics, of course, has a definitive answer on whether 1 is prime or not. According to the Fundamental Theorem of Arithmetic, 1 must not be prime so that each number greater than 1 has a unique prime factorisation.
|If multiplication is …||… then a product is:||… a factor is:||… a prime is:||Is 1 prime?|
|REPEATED ADDITION||a sum (e.g., 2×3=2+2+2 = 3+3)||either an addend or the count of addends||a product that is either a sum of 1’s or itself.||NO: 1 cannot be produced by repeatedly adding any whole number to itself.|
|GROUPING||a set of sets (e.g., 2×3 means either 2 sets of three items or 3 sets of 2)||either the number of items in a set, or the number of sets||a product that can only be made when one of the factors is 1||YES: 1 is one set of one.|
|BRANCHING||the number of end tips on a ‘tree’ produced by a sequence of branchings(think of fractals)||a branching (i.e., to multiply by n, each tip is branched n times)||a tree that can only be produced directly (i.e., not as a combination of branchings)||NO: 1 is a starting place/point … a pre-product as it were.|
|FOLDING||number of discrete regions produced by a series of folds (e.g., 2×3 means do a 2-fold, then a 3-fold, giving 6 regions)||a fold (i.e., to multiply by n, the object is folded in n equal-sized regions using n-1 creases)||a number of regions that can only be folded directly||NO: no folds are involved in generating 1 region|
|ARRAY-MAKING||cells in an m by n array||a dimension||a product that can only be constructed with a unit dimension.||YES: an array with one cell must have a unit dimension|
The table is from the study of Brent Davis and Moshe Renert in their article Mathematics-for-Teaching as Shared Dynamic Participation, published in For the Learning of Mathematics, Vol. 29, No. 3. The table was constructed by a group of teachers who were doing a concept analysis about multiplication. Concept analysis involves tracing the origins and applications of a concept, looking at the different ways in which it appears both within and outside mathematics, and examining the various representations and definitions used to describe it and their consequences (Usiskin et al., 2003, p. 1).
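To make the repeated-addition row of the table concrete, here is a small sketch, written for illustration only, that computes 2 × 3 as 2 + 2 + 2 and tests primality by trial division; under that model 1 is not prime because no whole number repeatedly added to itself produces it.

public class RepeatedAddition {
    // 2 x 3 read as "three addends of 2": 2 + 2 + 2.
    static int multiply(int addend, int count) {
        int sum = 0;
        for (int i = 0; i < count; i++) {
            sum += addend;
        }
        return sum;
    }

    // A prime here is a product that is only a sum of 1s or itself.
    static boolean isPrime(int n) {
        if (n < 2) return false;
        for (int d = 2; d < n; d++) {
            if (n % d == 0) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(multiply(2, 3)); // 6
        System.out.println(isPrime(1));     // false
        System.out.println(isPrime(7));     // true
    }
}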
The Multiplication Models (Natural Math: Multiplication) also provides good visual for explaining multiplication.
You may also want to read How should students understand the subtraction operation? | http://math4teaching.com/2012/10/17/the-many-faces-of-multiplication/ | 13 |
26 | How to Measure Angles
Measuring angles is pretty simple: the size of an angle is based on how wide the angle is open. Here are some points and mental pictures that will help you to understand how angle measurement works.
Degree: The basic unit of measure for angles is the degree.
A good way to start thinking about the size and degree-measure of angles is by picturing an entire pizza — that’s 360° of pizza. Cut the pizza into 360 slices, and the angle each slice makes is 1°. For other angle measures, see the following list and figure:
If you cut a pizza into four big slices, each slice makes a 90° angle
If you cut a pizza into four big slices and then cut each of those slices in half, you get eight pieces, each of which makes a 45° angle
If you cut the original pizza into 12 slices, each slice makes a 30° angle
So 1/12 of a pizza is 30°, 1/8 is 45°, 1/4 is 90°, and so on.
The bigger the fraction of the pizza, the bigger the angle.
The fraction of the pizza or circle is the only thing that matters when it comes to angle size. The length along the crust and the area of the pizza slice tell you nothing about the size of an angle. In other words, 1/6 of a 10-inch pizza represents the same angle as 1/6 of a 16-inch pizza, and 1/8 of a small pizza has a larger angle (45°) than 1/12 of a big pizza (30°) — even if the 30° slice is the one you’d want if you were hungry. You can see this in the above figure.
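In code form the same point looks like this; the sketch below is illustrative only and simply multiplies the fraction of a full circle by 360.

public class PizzaAngles {
    // The angle of a slice depends only on what fraction of the whole circle it is.
    static double sliceAngleDegrees(double fractionOfCircle) {
        return 360.0 * fractionOfCircle;
    }

    public static void main(String[] args) {
        System.out.println(sliceAngleDegrees(1.0 / 4));   // 90.0
        System.out.println(sliceAngleDegrees(1.0 / 8));   // 45.0
        System.out.println(sliceAngleDegrees(1.0 / 12));  // 30.0
        // The pizza's diameter never appears: 1/6 of a 10-inch pizza and
        // 1/6 of a 16-inch pizza both give 60 degrees.
        System.out.println(sliceAngleDegrees(1.0 / 6));   // 60.0
    }
}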
Another way of looking at angle size is to think about opening a door or a pair of scissors or, say, an alligator’s mouth. The wider the mouth is open, the bigger the angle. As the following figure shows, a baby alligator with its mouth opened wide makes a bigger angle than an adult alligator with its mouth opened less wide, even if there’s a bigger gap at the front of the adult alligator’s mouth.
An angle’s sides are both rays, and all rays are infinitely long, regardless of how long they look in a figure. The lengths of an angle’s sides in a diagram aren’t really lengths at all, and they tell you nothing about the angle’s size. Even when a diagram shows an angle with two segments for sides, the sides are still technically infinitely long rays.
Congruent angles are angles with the same degree measure. In other words, congruent angles have the same amount of opening at their vertices. If you were to stack two congruent angles on top of each other with their vertices together, the two sides of one angle would align perfectly with the two sides of the other angle.
You know that two angles are congruent when you know that they both have the same numerical measure (say, they both have a measure of 70°) or when you don’t know their measures but you figure out (or are simply told) that they’re congruent. In figures, angles with the same number of tick marks are congruent to each other, as shown here. | http://www.dummies.com/how-to/content/how-to-measure-angles.navId-407420.html | 13 |
12 | One of the tenets of the Big Bang theory is that the universe began as a smooth and homogeneous fireball. So how did the universe get to be so lumpy? Visible matter is clumped into galaxies, clusters of galaxies, and clusters of galaxy clusters, or superclusters. The superclusters are arranged in great sheets or filaments spanning 500 million light years. Scattered amongst these superclusters are great voids containing very little visible matter, as large as 400 million light years in diameter.
The extreme uniformity of the cosmic background radiation, first detected in 1964, puzzled cosmologists. This radiation, a relic from the Big Bang, reflected the state of the universe roughly 300,000 years after the Big Bang when radiation separated from matter. Cosmologists reasoned that at that time there must have been at least some irregularities, however slight, to have sown the seeds of the astounding structures in our present-day universe.
In 1992, exhaustive analysis of data from NASA's Cosmic Background Explorer (COBE) satellite revealed minute irregularities or variations in the matter-energy density (only 17 parts in 1,000,000) of the universe. The variations are actually detected as temperature variations.
COBE Sky Map
The regions which were slightly more dense gravitationally attracted photons and, as the universe expanded, caused them to lose some energy, or heat. The less dense regions are slightly warmer.
The map below shows "hot" (magenta) and "cold" (blue) regions in the radiation detected in the portion of the sky observed.
One theory for the origin of these irregularities is that spontaneous fluctuations in the pre-inflationary epoch were greatly magnified by inflation. In the post-inflationary cosmos, these fluctuations produced regions just slightly denser than their surroundings. The differences in density are in turn amplified by gravity, which pulls matter into the denser regions. This process of amplification, cosmologists believe, sowed the "seeds" on which our present-day structures--including the enormous sheets of galaxies--could have formed.
Cosmologists have finally found tangible evidence for theories seeking to explain how an almost perfectly smooth cosmos could have become so "lumpy."
Now the challenge is to link these minute fluctuations to the formation of structures we see today. COBE was able to measure density fluctuations over an angular scale of seven degrees--about the size of 14 moons lined up, side by side, in the sky. But that patch of sky, zoomed back to the epoch of recombination, corresponds to a size larger than the superclusters we see today. In order to correlate density fluctuations with smaller structures like galaxies or clusters of galaxies, cosmologists must detect much finer density fluctuations--within one part in 1,000,000--over scales as small as one degree or less.
In fact, scientific balloon-borne instruments have measured fluctuations--between one and three parts in 100,000--over angular scales between 0.5 and three degrees. So far, however, these experiments have focused on only a few small slices of the sky.
These new measurements of density fluctuations are the hard data cosmologists need to construct more accurate models of the evolution of our universe. For many years, cosmologists who simulate the universe's birth and history using intensive computer models had to guess at the starting conditions for their simulations. They would, of course, prefer measurements of density fluctuations on all scales--those corresponding to the largest superclusters and voids down to individual galaxies. But, for the time being anyway, they're combining COBE's measurements of large-scale fluctuations with physical theory to figure out the starting conditions for their simulations.
| http://archive.ncsa.illinois.edu/Cyberia/Cosmos/SeedsStructure.html | 13
13 | Catalog of Earth Satellite Orbits
Just as different seats in a theater provide different perspectives on a performance, different Earth orbits give satellites varying perspectives, each valuable for different reasons. Some seem to hover over a single spot, providing a constant view of one face of the Earth, while others circle the planet, zipping over many different places in a day.
There are essentially three types of Earth orbits: high Earth orbit, medium Earth orbit, and low Earth orbit. Many weather and some communications satellites tend to have a high Earth orbit, farthest away from the surface. Satellites that orbit in a medium (mid) Earth orbit include navigation and specialty satellites, designed to monitor a particular region. Most scientific satellites, including NASA’s Earth Observing System fleet, have a low Earth orbit.
The height of the orbit, or distance between the satellite and Earth’s surface, determines how quickly the satellite moves around the Earth. An Earth-orbiting satellite’s motion is mostly controlled by Earth’s gravity. As satellites get closer to Earth, the pull of gravity gets stronger, and the satellite moves more quickly. NASA’s Aqua satellite, for example, requires about 99 minutes to orbit the Earth at about 705 kilometers up, while a weather satellite about 36,000 kilometers from Earth’s surface takes 23 hours, 56 minutes, and 4 seconds to complete an orbit. At 384,403 kilometers from the center of the Earth, the Moon completes a single orbit in 28 days.
Changing a satellite’s height will also change its orbital speed. This introduces a strange paradox. If a satellite operator wants to increase the satellite’s orbital speed, he can’t simply fire the thrusters to accelerate the satellite. Doing so would boost the orbit (increase the altitude), which would slow the orbital speed. Instead, he must fire the thrusters in a direction opposite to the satellite’s forward motion, an action that on the ground would slow a moving vehicle. This change will push the satellite into a lower orbit, which will increase its forward velocity.
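Those altitudes and periods can be checked with the standard formula for a circular orbit, T = 2π√(r³/μ); the short sketch below uses approximate values for Earth's radius and gravitational parameter, so the results are rounded estimates, not mission data.

public class OrbitalPeriod {
    static final double MU_EARTH = 3.986004418e14;    // m^3/s^2, Earth's gravitational parameter
    static final double EARTH_RADIUS_M = 6_371_000.0; // mean radius in meters

    // Period of a circular orbit at the given altitude above the surface.
    static double periodSeconds(double altitudeMeters) {
        double r = EARTH_RADIUS_M + altitudeMeters;   // orbital radius from Earth's center
        return 2.0 * Math.PI * Math.sqrt(r * r * r / MU_EARTH);
    }

    public static void main(String[] args) {
        System.out.printf("705 km orbit: %.0f minutes%n", periodSeconds(705_000) / 60.0);        // about 99
        System.out.printf("35,786 km orbit: %.2f hours%n", periodSeconds(35_786_000) / 3600.0);  // about 23.93
    }
}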
In addition to height, eccentricity and inclination also shape a satellite’s orbit. Eccentricity refers to the shape of the orbit. A satellite with a low eccentricity orbit moves in a near circle around the Earth. An eccentric orbit is elliptical, with the satellite’s distance from Earth changing depending on where it is in its orbit.
Inclination is the angle of the orbit in relation to Earth’s equator. A satellite that orbits directly above the equator has zero inclination. If a satellite orbits from the north pole (geographic, not magnetic) to the south pole, its inclination is 90 degrees.
Together, the satellite’s height, eccentricity, and inclination determine the satellite’s path and what view it will have of Earth. | http://www.visibleearth.nasa.gov/Features/OrbitsCatalog/ | 13 |
12 | Error graphs display not only a Y-value for each X-value, but a range of Y-values for a given X-value. Error graphs are typically used to display the variation of each data point around its central value. For each X value in an Error graph, there is an associated Y value and an "error range" value that represents the deviation from the Y value.
Just as with line and bar charts, Error graphs can show multiple sets of data. You can use the Data Sets dialog box to change the horizontal tick marks into boxes, circles, etc. You can also use the Skew Data button in the General graph dialog to offset the data sets from each other. This helps in discriminating the data points of one data set from another. The points in each data set can also be connected with line segments or splines. You have the same flexibility in determining the look of the Error graph as you do with Line and Scatter graphs.
Figure 12.16: Error Graph | http://wwwslap.cern.ch/comp/doc/NExS/html/node269.html | 13 |
15 | Data and Variables
A variable is a named piece of memory that you use to store information in your Java program - a piece of data of some description. Each named piece of memory that you define in your program will only be able to store data of one particular type. If you define a variable to store integers, for example, you cannot use it to store a value that is a decimal fraction, such as 0.75. If you have defined a variable that you will use to refer to a Hat object, you can only use it to reference an object of type Hat (or any of its subclasses, as we saw in Chapter 1). Since the type of data that each variable can store is fixed, whenever you use a variable in your program the compiler is able to check that it is not being used in a manner or a context that is inappropriate to its type. If a method in your program is supposed to process integers, the compiler will be able to detect when you inadvertently try to use the method with some other kind of data, for example, a string or a numerical value that is not integral.
Explicit data values that appear in your program are called literals. Each literal will also be of a particular type: 25, for instance, is an integer value of type int. We will go into the characteristics of the various types of literals that you can use as we discuss each variable type.
Before you can use a variable you must specify its name and type in a declaration statement. Before we look at how you write a declaration for a variable, we should consider what flexibility you have in choosing a name.
The name that you choose for a variable, or indeed the name that you choose for anything in Java, is called an identifier. An identifier can be any length, but it must start with a letter, an underscore (_), or a dollar sign ($). The rest of an identifier can include any characters except those used as operators in Java (such as +, -, or *), but you will be generally better off if you stick to letters, digits, and the underscore character.
Java is case sensitive, so the names republican and Republican are not the same. You must not include blanks or tabs in the middle of a name, so Betty May is out, but you could have BettyMay or even Betty_May. Note that you can't have 10Up as a name since you cannot start a name with a numeric digit. Of course, you could use tenUp as an alternative.
Subject to the restrictions we have mentioned, you can name a variable almost anything you like, except for two additional restraints - you can't use keywords in Java as a name for something, and a name can't be anything that is a constant value. Keywords are words that are an essential part of the Java language. We saw some keywords in the previous chapter and we will learn a few more in this chapter. If you want to know what they all are, a complete list appears in Appendix A. The restriction on constant values is there because, although it is obvious why a name can't be 1234 or 37.5, constants can also be alphabetic, such as true and false for example. We will see how we specify constant values later in this chapter. Of course, the basic reason for these rules is that the compiler has to be able to distinguish between your variables and other things that can appear in a program. If you try to use a name for a variable that makes this impossible, then it's not a legal name.
Clearly, it makes sense to choose names for your variables that give a good indication of the sort of data they hold. If you want to record the size of a hat, for example, hatSize is not a bad choice for a variable name whereas qqq would be a bad choice. It is a common convention in Java to start variable names with a lower case letter and, where you have a name that combines several words, to capitalize the first letter of each word, as in hatSize or moneyWellSpent. You are in no way obliged to follow this convention but since almost all the Java world does, it helps to do so.
Note If you feel you need more guidance in naming conventions (and coding conventions in general) take a look at http://www.javasoft.com/docs/codeconv/.
Variable Names and Unicode
Even though you are likely to be entering your Java programs in an environment that stores ASCII, all Java source code is in Unicode (subject to the reservations we noted in Chapter 1). Although the original source that you create is ASCII, it is converted to Unicode characters internally, before it is compiled. While you only ever need ASCII to write any Java language statement, the fact that Java supports Unicode provides you with immense flexibility. It means that the identifiers that you use in your source program can use any national language character set that is defined within the Unicode character set, so your programs can use French, Greek, or Cyrillic variable names, for example, or even names in several different languages, as long as you have the means to enter them in the first place. The same applies to character data that your program defines.
Variables and Types
As we mentioned earlier, each variable that you declare can store values of a type determined by the data type of that variable. You specify the type of a particular variable by using a type name in the variable declaration. For instance, here's a statement that declares a variable that can store integers:
int numberOfCats;
The data type in this case is int, the variable name is numberOfCats, and the semicolon marks the end of the statement. The variable, numberOfCats, can only store values of type int.
Many of your variables will be used to reference objects, but let's leave those on one side for the moment as they have some special properties. The only things in Java that are not objects are variables that correspond to one of eight basic data types, defined within the language. These fundamental types, also called primitive types, allow you to define variables for storing data that fall into one of three categories (illustrated in the snippet after this list):
Numeric values, which can be either integer or floating point
Variables which store a single Unicode character
Logical variables that can assume the values true or false
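As an illustrative snippet (not from the text itself), here is one declaration from each category, using names like those discussed above:

public class VariableExamples {
    public static void main(String[] args) {
        int numberOfCats = 2;          // a numeric (integer) value
        double hatSize = 7.25;         // a numeric (floating-point) value
        char initial = 'J';            // a single Unicode character
        boolean moneyWellSpent = true; // a logical value

        System.out.println(numberOfCats + " " + hatSize + " " + initial + " " + moneyWellSpent);
    }
}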
All of the type names for the basic variable types are keywords in Java so you must not use them for other purposes. Let's take a closer look at each of the basic data types and get a feel for how we can use them. | http://www.undergroundnews.com/forum/ubbthreads.php/posts/18628.html | 13 |
22 | The definite integral from a to b is the (signed) area contained between f(x) and the x-axis on that interval.
The area between two curves is found by 1) determining where the two functions intersect, 2) determining which function is the greater function over that interval, and 3) evaluating the definite integral, over that interval, of the greater function minus the lesser function.
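Those three steps translate directly into a small numerical sketch; the example below is illustrative only and uses the trapezoid rule instead of an antiderivative, with f(x) = x and g(x) = x², which intersect at x = 0 and x = 1.

import java.util.function.DoubleUnaryOperator;

public class AreaBetweenCurves {
    // Trapezoid-rule approximation of the integral of h over [a, b].
    static double integrate(DoubleUnaryOperator h, double a, double b, int n) {
        double dx = (b - a) / n;
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            double x0 = a + i * dx;
            double x1 = x0 + dx;
            sum += 0.5 * (h.applyAsDouble(x0) + h.applyAsDouble(x1)) * dx;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Step 3: integrate (greater function minus lesser function) over [0, 1].
        DoubleUnaryOperator difference = x -> x - x * x;   // x >= x^2 on [0, 1]
        System.out.println(integrate(difference, 0.0, 1.0, 10_000)); // about 1/6
    }
}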
Example: find the area enclosed by two given functions.
5.2 Volumes of Solids: Slabs, Disks, Washers
Solids of Revolution: Disk Method
A solid may be formed by revolving a curve about an axis.
The volume of this solid may be found by considering the solid sliced into many many round disks.
The area of each disk is the area of a circle. Volume is found by integrating the area. The radius of each circle is f(x) for each x value in the interval.
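Since each slice is (approximately) a disk of area π[f(x)]², a compact way to write the disk method, assuming the region under y = f(x) on [a, b] is revolved about the x-axis, is:

V = \int_a^b \pi\,[f(x)]^2 \, dx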
Washer Method
If the area between two curves is revolved around an axis a solid is created that is hollow in the center.
When slicing this solid the sections created are washers not solid circles.
The area of the smaller circle must be subtracted from the area of the larger one.
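Writing R(x) for the outer radius and r(x) for the inner radius (names chosen here just for illustration), the washer method becomes:

V = \int_a^b \pi \left( [R(x)]^2 - [r(x)]^2 \right) dx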
5.3 Volumes of Solids of Revolution: Shells
When an area between two curves is revolved about an axis a solid is created.
This solid could be considered as the sum of many many concentric cylinders.
Volume is the integral of the area; in this case it is the surface area of the cylinder, 2πrh, with r = x and h = f(x).
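Assuming the region under y = f(x) on [a, b], with a ≥ 0, is revolved about the y-axis, the shell method gives:

V = \int_a^b 2\pi x\, f(x)\, dx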
Does it matter which method to use?
Either method may work. Sketch a picture of the function to determine which method may be easier.
If a specific method is requested that method should be implemented.
5.4 Length of a Plane Curve
A plane curve is smooth if it is determined by a pair of parametric equations x = f(t) and y = g(t), a ≤ t ≤ b, where f′ and g′ exist and are continuous on [a, b] and f′(t) and g′(t) are not simultaneously zero on (a, b).
If the curve is smooth we can find its length.
Approximate the curve length by the sum of many, many line segments.
To get the actual length you would need infinitely many line segments, each of whose length is found using the Pythagorean theorem.
The length of a smooth curve defined by x = f(t) and y = g(t) is L = ∫ √([f′(t)]² + [g′(t)]²) dt, integrated from t = a to t = b.
What if the function is not parametric but defined as y = f(x)?
Infinitely many line segments still provide the length. Again use the Pythagorean formula, with horizontal component dx and vertical component (dy/dx) dx for each line segment, giving L = ∫ √(1 + (dy/dx)²) dx, integrated from x = a to x = b.
5.5 Work, Fluid Force
Work = Force × Distance
In many cases the force is not constant throughout the entire distance.
To determine the total work done, add all the amounts of work done throughout the interval: integrate!
If the force is defined as F(x), then the work done in moving an object from x = a to x = b is W = ∫ F(x) dx, integrated from a to b.
Fluid Force
If a tank is filled to a depth h with a fluid of density σ (sigma), then the force exerted by the fluid on a horizontal rectangle of area A on the bottom is equal to the weight of the column of fluid that stands directly over that rectangle.
Let σ = density, h(x) = depth, and w(x) = width; then the force on a submerged vertical surface is F = ∫ σ · h(x) · w(x) dx, integrated over the interval spanned by the surface.
5.6 Moments and Center of Mass
The product of the mass m of a particle and its directed distance from a point (its lever arm) is called the moment of the particle with respect to that point. It measures the tendency of the mass to produce a rotation about the point.
Two masses along a line balance at a point if the sum of their moments with respect to that point is zero.
The center of mass is the balance point.
Finding the center of mass: let M = moment, m = mass, and σ (sigma) = density; the center of mass is the total moment divided by the total mass, M/m.
Centroid: For a planar region, the center of mass of a homogeneous lamina is the centroid.
Pappus's Theorem: If a region R, lying on one side of a line in its plane, is revolved about that line, then the volume of the resulting solid is equal to the area of R multiplied by the distance traveled by its centroid.
5.7 Probability and Random Variables
Expectation of a random variable: If X is a random variable with a given probability distribution p(X = x), then the expectation of X, denoted E(X), also called the mean of X and denoted μ (mu), is E(X) = Σ x · p(X = x), summed over all values x that X can take.
Probability Density Function (PDF)
If the outcomes are not finite (discrete) but could be any real number in an interval, the random variable is continuous.
Continuous random variables are studied similarly to distribution of mass.
The expected value (mean) of a continuous random variable X with PDF f(x) is E(X) = ∫ x · f(x) dx, integrated over the interval of values X can take.
Theorem A
Let X be a continuous random variable taking on values in the interval [A, B] and having PDF f(x) and CDF (cumulative distribution function) F(x). Then:
1. F′(x) = f(x)
2. F(A) = 0 and F(B) = 1
3. P(a < X < b) = F(b) − F(a)
| http://www.powershow.com/view/e2280-YmIwO/Applications_of_the_Integral_powerpoint_ppt_presentation | 13
22 | Pick the Network Protocol that's Right for your Device
This article originally appeared in Nuts & Volts.
If you have a project that involves putting a device on a local network or the Internet, one decision you'll need to make is how the device will exchange information on the network. Even for small devices, there are more options than you might think.
A device can host web pages, exchange e-mail and files, and run custom applications that use lower-level Ethernet and Internet protocols.
This article will help you decide which protocol or protocols best suit your application. The focus is on options that are practical for small systems, but the information also applies to PCs that perform monitoring and control functions in networks.
The Basics of Networking Protocols
Computers can use a variety of protocols to exchange information on a network. Each protocol defines a set of rules to perform a portion of the job of getting a message from one computer to the program code that will use the message on the destination computer. For example, the Ethernet protocol defines (among other things) how a computer decides when it's OK to transmit on the network and how to decide whether to accept or ignore a received Ethernet frame.
Other protocols can work along with Ethernet to make transmissions more efficient
and reliable, to enable communications to travel beyond local networks, and
to provide information that a specific application requires. For example,
every communication on the Internet uses the Internet Protocol (IP) to specify
a destination address on the Internet. Many small systems support these protocols:
|Internet Protocol (IP)||Communicating on the Internet|
|User Datagram Protocol (UDP)||Specifying a destination port for a message, (optional) error-checking|
|Transmission Control Protocol (TCP)||Specifying a destination port for a message, flow control, error checking|
|Hypertext Transfer Protocol (HTTP)||Requesting and sending web pages|
|Post Office Protocol 3 (POP3)||Requesting e-mail messages|
|Simple Mail Transfer Protocol (SMTP)||Sending e-mail messages|
|File Transfer Protocol (FTP)||Exchanging files|
Multiple networking protocols work together by communicating in a layered structure called a stack. The lowest layer is the Ethernet controller or other hardware that connects to the network. The top layer is the end application, such as a web server that responds to requests for web pages or a program that sends and requests e-mail messages.
These are typical layers in a networking stack: the application at the top, then TCP or UDP, then IP, and at the bottom the Ethernet driver and controller that connect to the network hardware.
Not every computer needs to support every protocol. Small devices can conserve resources by supporting only what they need.
The program code (or hardware) that makes up each layer has a defined responsibility. Each layer also knows how to exchange information with the layers directly above and below. But a layer doesn't have to know anything else about how the other layers do their jobs.
In transmitting, a message travels down the stack from the application layer that creates the message to the network interface that places the message on the network. In receiving, the message travels up the stack from the network interface to the application layer that uses the data in the received message.
The number of layers a message passes through can vary. Within a local network, an application layer may communicate directly with the Ethernet driver. Messages that travel on the Internet must use the Internet Protocol (IP). Messages that use IP can also use the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP) to add features such as error checking and flow control.
Vendors of development boards with networking abilities often provide libraries or classes to support popular protocols. This support greatly simplifies how much programming you need to do to get something up and running.
To communicate over a local Ethernet network, the minimum requirement is program code that knows how to talk with the Ethernet controller. In most cases, the controller is a dedicated chip that interfaces to the network hardware and to the devices CPU. The controller chip handles much of the work of sending and receiving Ethernet communications.
To send a message, the device's program code (often called firmware in small devices) typically passes the data to send and a destination address to the controller. The controller places the information in the expected format, adds an error-checking value, sends the message on the network, and makes a status code available to let the CPU know if the transmission succeeded.
In receiving a message, the controller checks the destination address and performs error checking. If the address is a match and no errors are detected, the controller stores the message and uses an interrupt or flag to announce that a message has arrived.
For applications that don't need much more than Ethernet support, a good resource is the interface boards and program code from EDTP Electronics. EDTP's Packet Whacker contains an Ethernet controller, an RJ-45 connector for an Ethernet cable, and a parallel interface for connecting to a microcontroller.
Example code for Microchip's PICmicros and Atmel's AVR microcontrollers is available.
See Easy Ethernet Controller in the January 2004 Nuts & Volts for more about using the Packet Whacker. Fred Eady's new book, Networking and Internetworking with Microcontrollers (Newnes), has the most detailed explanation around of how to access Ethernet controllers in small systems.
Using Low-level Internet Protocols
A device that communicates on the Internet must support Internet protocols. Devices in local networks often use Internet protocols as well because they add useful capabilities and have wide support.
The essential protocol for Internet communications is IP, which defines the addressing system that identifies computers on the Internet. Each IP datagram includes addressing information, information for use in routing the datagram, and a data portion that contains the message the source wants to transmit to the destination. In a local network, an IP datagram can travel in the data field of an Ethernet frame.
Many Internet communications also use TCP. An important feature of TCP is support for handshaking that enables the sender to verify that the destination has received a message. TCP also enables the sending computer to provide an error-checking value for the message and to name a port to receive the message on the destination computer. Applications that don't require TCP's handshaking may use UDP, a simpler protocol that can be useful for systems with limited resources.
A TCP segment or UDP datagram travels in the data portion of an IP datagram. The data area of the TCP segment or UDP datagram contains the message the source wants to pass to the destination.
To use a PC to communicate with a device using TCP or UDP, you can use just about any programming language. In Visual Basic .NET, you can use the System.Net.Sockets namespace or the UdpClient or TcpClient classes. This example uses the TcpClient class:
' Read data from a remote computer over a TCP connection.
Dim networkStream As NetworkStream = myTcpClient.GetStream()
If networkStream.CanRead Then
    Dim dataReceived(myTcpClient.ReceiveBufferSize) As Byte
    ' Read the networkStream object into a byte buffer.
    ' Read can return anything from 0 to numBytesToRead.
    ' This method blocks until at least one byte is read
    ' or a receive timeout.
    Dim numberOfBytesRead As Integer = networkStream.Read _
        (dataReceived, 0, myTcpClient.ReceiveBufferSize)
Else
    MessageBox.Show("You can't read data from this stream.")
End If
' Write data to a remote computer over a TCP connection.
Dim networkStream As NetworkStream = myTcpClient.GetStream()
Dim dataToSend(7) As Byte
' (Place data to send in the byte array.)
If networkStream.CanWrite Then
    networkStream.Write(dataToSend, 0, dataToSend.Length)
Else
    MessageBox.Show("You can't write data to this stream.")
End If
Visual Basic 6 supports TCP and UDP communications via the Winsock control.
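The same kind of exchange can be sketched in Java with the standard java.net classes; the host address, port number, and command bytes below are placeholders chosen for illustration, not values from the article.

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

public class TcpDeviceClient {
    public static void main(String[] args) throws Exception {
        // Connect to the device; replace the address and port with your own.
        try (Socket socket = new Socket("192.168.1.50", 5000)) {
            OutputStream out = socket.getOutputStream();
            out.write(new byte[] { 0x01, 0x02 });  // send a two-byte command
            out.flush();

            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[256];
            int count = in.read(buffer);           // blocks until data arrives or the peer closes
            System.out.println("Received " + count + " bytes");
        }
    }
}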
Serving Interactive Web Pages
One of the most popular ways for computers to share information in networks is via web pages. Many web pages are static, unchanging displays of information, but small devices usually want to serve pages that display real-time information or receive and act on user input.
My article Control Your Devices from a Web Page in the March 2004 Nuts & Volts showed one example. I used a Dallas Semiconductor TINI module to serve a page that enables users to monitor and control the device.
Requests for web pages use the Hypertext Transfer Protocol (HTTP). The requests and the responses containing the web pages travel in TCP segments. To serve web pages, a device must support TCP and IP and must know how to respond to received requests.
For creating web pages that display real-time data and respond to user input,
there are several options.
Devices programmed in C often use the Server Side Include (SSI) and Common Gateway Interface (CGI) protocols. Rabbit Semiconductor's Dynamic C for its RabbitCore modules supports both.
Devices programmed in Java can use a servlet engine that enables running Java servlets, which extend a server's abilities. Two servlet engines for TINIs and other small systems are the Tynamo from Shawn Silverman and TiniHttpServer from Smart SC Consulting.
A third option is to use a product-specific protocol that defines how a device can insert real-time data in web pages and receive user input. Netmedia's SitePlayer is an example of this approach.
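For the servlet option, a minimal sketch might look like the following; the class name and the readSensor() helper are invented for illustration, and a real device would substitute its own I/O code.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Builds the page on every request, so the value shown is always current.
public class StatusServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<p>Current reading: " + readSensor() + "</p>");
        out.println("</body></html>");
    }

    // Placeholder for device-specific code that samples an input.
    private int readSensor() {
        return 42;
    }
}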
Exchanging Messages via E-mail
E-mail is another option that small devices can use to communicate in networks. E-mail's original purpose, of course, was to enable humans to exchange messages, but devices can also be programmed to send and receive messages without human intervention.
Just like a person, a device can have its own e-mail account, user name, and password. The device firmware can compose messages to send and process received messages to extract the information inside.
For example, a security system can send a message when an alarm condition occurs. Or a device can receive configuration commands in an e-mail message.
With e-mail, the sender can send a message whenever it wants and recipients can retrieve and read their messages whenever they want. The down side is that recipients may not get information as quickly as needed if they don't check their e-mail or if a server backs up and delays delivery.
To send and receive e-mails on the Internet, a device must have an Internet connection, an e-mail account that provides access to incoming and outgoing mail servers, and support for TCP/IP and the protocols used by the mail servers to send and retrieve e-mail. Two protocols suitable for small systems are the Simple Mail Transfer Protocol (SMTP) for sending e-mail and the Post Office Protocol Version 3 (POP3) for retrieving e-mail.
Exchanging Files with FTP
Devices that store information in files can use the File Transfer Protocol (FTP) to exchange files with remote computers. Every FTP communication is between a server, which stores files and responds to commands from remote computers, and a client, which sends commands that request to send or receive files. A device may function as either a server or client.
To use FTP, a device must support a file system, where blocks of information are stored in named entities called files. In a small device, a file system can be as basic as a structure whose members each store a file name, a starting address in memory, and the length of the file stored at that address.
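A bare-bones version of such a structure, sketched here in Java purely for illustration (the field names are not from the article), could be as simple as:

// One directory entry in a minimal file system for a small device.
public class FileEntry {
    String name;        // file name, for example "config.txt"
    int startAddress;   // where the file's bytes begin in memory
    int length;         // how many bytes the file occupies

    FileEntry(String name, int startAddress, int length) {
        this.name = name;
        this.startAddress = startAddress;
        this.length = length;
    }
}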
FTP communications travel in TCP segments. A device that supports FTP must also support TCP and IP.
The Internet Protocol Gets an Upgrade
For a couple of decades, version 4 of Internet Protocol (IPv4) has been the workhorse that has helped get messages to their destinations on the Internet. But version 6 (IPv6) is now making its way into networking components and will eventually replace IPv4. Probably the biggest motivation for change was the need for more IP addresses. But IPv6 has other useful enhancements as well, including support for auto-configuring, the ability to request real-time data transfers, and improved security options.
Where to Find IPv6
In the world of desktop computers, recent versions of Windows, OS X, and
Linux all support IPv6. For microcontrollers, Dallas Semiconductor's
runtime environment for TINI modules supports IPv6 addressing.
If you don't need IPv6's benefits, upgrading isn't likely to be required any time soon. For the near future, routers that support IPv6 will continue to support IPv4, converting between protocols as needed.
Increasing the Address Space
IPv6 vastly increases the number of IP addresses available to computers on the Internet.
An IPv4 address is 32 bits. IPv6 addresses are 128 bits, allowing about 3.4 × 10^38 values. Using this many bits may seem like overkill, but IPv6's creators wanted to be very, very sure that the Internet wouldn't run out of addresses for a very long time. Having plenty of bits to work with also makes it easier to create routing domains, which enable a router to store a value that indicates where to send traffic destined for addresses in a defined group. Routing domains allow simpler routing tables and more efficient routing of traffic.
An IPv4 address is usually expressed as four decimal numbers separated by periods:
Each decimal number represents one of the four bytes in the address.
IPv6 addresses are written as 16-bit hexadecimal values separated by colons. The IPv4 address above translates to this:
A double colon can replace a series of 16-bit zero values:
(An address can have no more than one double colon.)
It's also acceptable to express an IPv4 address converted to IPv6 using decimal values instead of hexadecimal:
Even if you don't need IPv6's addressing, other additions to the
protocol can make a switch worthwhile.
Stateless Autoconfiguration frees users and administrators from having to enter IP addresses manually. A computer can generate its own IP address and discover the address of a router without requiring a human to enter the information or requiring the computer to request the information from a server.
Autoconfiguring is especially handy for mobile devices that move around, possibly connecting to a different network each time the device powers up.
IPv6 also adds security features. Two new headers are the Authentication header and the Encapsulating Security Payload (ESP) header. The Authentication header enables a computer to verify who sent a packet, to find out if data was modified in transit, and to protect against replay attacks, where a hacker gains access to a system by capturing and resending packets. The ESP header and trailer provide security for the data payload, including support for encryption.
Every IPv6 header also includes a Flow Label that can help real-time data get to its destination on time. A value in the Flow Label can indicate that a packet is one in a sequence of packets traveling between a source and destination. A source can request priority or other special handling for packets in a flow as they pass through intermediate routers.
To find out more about IPv6, some good sources are:
IP Version 6 (IPv6) introduction and links | http://www.lvr.com/pick_the_network_protocol.htm | 13 |
10 | To verify the laws of reflection of sound.
What is reflection?
Reflection is the change in direction of a wavefront at an interface between two different media so that the wavefront returns into the medium from which it originated. Common examples include the reflection of light, sound and water waves.
Do you know how sound propagates?
Sound propagates through air as a longitudinal wave. The speed of sound is determined by the properties of the air, and not by the frequency or amplitude of the sound. If a sound is not absorbed or transmitted when it strikes a surface, it is reflected. The law for reflection is the same as that of light, i.e., the angle of incidence of a sound wave equals the angle of reflection, just as if it were produced by a 'mirror image' of the stimulus on the opposite side of the surface.
How do we describe the reflection of sound?
When sound travels in a given medium and strikes the surface of another medium, it bounces back in some other direction; this phenomenon is called the reflection of sound. The waves are called the incident and reflected sound waves.
What are incident and reflected sound waves?
The sound waves that travel towards the reflecting surface are called the incident sound waves. The sound waves bouncing back from the reflecting surface are called reflected sound waves. For all practical purposes, the point of incidence and the point of reflection are the same point on the reflecting surface.
A perpendicular drawn at the point of incidence is called the normal. The angle which the incident sound waves make with the normal is called the angle of incidence, "i". The angle which the reflected sound waves make with the normal is called the angle of reflection, "r".
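The equality of the two angles can also be checked numerically; the sketch below is illustrative only, working in two dimensions with the reflection formula r = d - 2(d·n)n for a unit normal n.

public class ReflectionCheck {
    public static void main(String[] args) {
        // Incident direction and a unit normal to the reflecting surface.
        double dx = 1.0, dy = -1.0;   // travelling down toward the surface
        double nx = 0.0, ny = 1.0;    // normal points straight up

        // Reflected direction: r = d - 2(d.n)n
        double dot = dx * nx + dy * ny;
        double rx = dx - 2 * dot * nx;
        double ry = dy - 2 * dot * ny;

        // Angles measured from the normal, in degrees.
        double angleIn = Math.toDegrees(Math.acos(-dot / Math.hypot(dx, dy)));
        double angleOut = Math.toDegrees(Math.acos((rx * nx + ry * ny) / Math.hypot(rx, ry)));

        System.out.println("Angle of incidence: " + angleIn);   // about 45.0
        System.out.println("Angle of reflection: " + angleOut); // about 45.0
    }
}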
Let’s look at the two laws of reflection:
The following two laws of reflection of light are applicable to sound waves as well:
- The incident wave, the normal to the reflecting surface and the reflected wave at the point of incidence lie in the same plane.
- The angle of incidence ∠i is equal to the angle of reflection ∠r.
Students will understand the First and Second Laws of Reflection.
First Law of Reflection: The incident wave, the reflected wave, and the normal at the point of incidence lie on the same plane.
Law of Reflection: The angle of incidence is equal to the angle of reflection. | http://amrita.olabs.co.in/?sub=1&brch=1&sim=1&cnt=1&id=0 | 13 |
41 | The CPU is usually the most complicated part of a modern microcomputer. It consists of several important and intricate sections, most of which are not well understood, even by seasoned computer engineers. Yet a CPU can be actually quite simple if you break it down into its fundamental component parts. The main parts of a CPU are essentially:
Registers A CPU has several registers inside it, which are really very tiny memory locations that are referred to by a name, rather than by a number, as normal memory locations are. Of these registers, the most important is the instruction pointer (IP), a register which contains the address of the next memory location which the CPU should get an instruction from. The instruction pointer is sometimes also known as the program counter (PC). Another important register which almost every CPU has is the accumulator, a general-purpose holding space which the CPU uses to temporarily store values it is getting ready to do something with.
If you don't want to build your own registers, there are many chips in the famous 7400 series of chips which are already pre-constructed data registers. In his monumental Magic-1 project, Bill Buzbee used 74273 and 74374 chips for registers.
Instruction decoder and control matrix The most fundamental job of a CPU is to obey instructions. The CPU receives instructions from memory, and then acts on them. In order to use the instructions it receives, the CPU needs a circuit to actuate the instruction (which is really just a string of 1s and 0s) into action. That circuit consists of two fundamental parts: The instruction decoder, a system which triggers one of several possible action circuits based on which instruction it is currently processing; and the control matrix, which takes the output from the instruction decoder and activates a series of control signals based on which instruction is being executed. Note that not all CPUs use instruction decoders and control matrices; an alternative to this system is to simply use a ROM chip which outputs the control signals needed for each opcode. In this case, each instruction is actually a memory location within the instruction ROM. However, in the construction of very simple CPUs, an instruction decoder and control matrix system is typically used because it makes it easier to see the cause-and-effect of the various parts of the CPU working together.
Timing circuitry: Every CPU has a clock input. Every time this clock input goes through a cycle, the CPU goes through another cycle as well. The faster this clock input signal is, the more cycles per second the CPU runs at. You can theoretically make a CPU run faster by increasing the speed of the clock input. In the real world, however, there is a limit to how fast even electrons can flow, and CPUs, being physical objects, are limited to certain speed ranges, mainly because of heat. The faster a CPU runs, the more heat it generates, and a CPU has a speed limit beyond which it is liable to overheat and break down.
Arithmetic Logic Unit (ALU): Many people think that a CPU is just a math machine, a calculator that performs arithmetic operations on numbers at high speed. While this is only part of the CPU's job, the CPU is indeed where numbers are added, subtracted, multiplied, and divided. This is specifically the job of the ALU, an important but distinct part of a CPU.
If you don't want to build your own ALU, there are a great many chips on the market which serve as pre-constructed ALUs so you don't need to learn how to do arithmetic at the circuit level. In his monumental Magic-1 project, Bill Buzbee used 74381 and 74382 chips for ALUs.
Memory input/output: CPUs read from and write to the computer's memory. A CPU has several physical wires that connect it to the RAM and ROM in the computer (mainly constituted by the address bus and the data bus), and through these wires, information is stored and retrieved by the CPU.
Now that we're familiar with some of the sub-systems of a CPU, let's go back and review the list again, this time taking a more detailed look at the structure of each sub-system and the technicalities of how it works.
Each of the registers in a CPU is essentially just a series of D flip-flops, with one flip-flop for each bit. The data inputs and outputs of these flip-flops are typically connected directly to the data bus. To make it possible to separate the inputs from the outputs, a tri-state buffer is usually put on each flip-flop's input and output line, and the "Enable" pins on the input buffers are then wired together to a single control input, while the Enable pins on the output buffers go to another control input. In this way, you can enable the inputs and outputs on all bits of a register with a single control line.
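As a rough sketch of this arrangement, here is how one such register might be described in the Verilog hardware description language (which comes up again later on this page). The module and signal names are illustrative only:

// Hypothetical 8-bit register hanging off a shared data bus.
// "load" plays the role of the input-enable line and "oe" the output-enable line.
module bus_register (
    input  wire       clk,
    input  wire       load,   // capture the bus value on the next clock edge
    input  wire       oe,     // drive the stored value back onto the bus
    inout  wire [7:0] dbus    // shared data bus
);
    reg [7:0] value;

    // Input side: one D flip-flop per bit, written only when load is high.
    always @(posedge clk)
        if (load)
            value <= dbus;

    // Output side: tri-state buffers, high-impedance when oe is low.
    assign dbus = oe ? value : 8'bz;
endmodule

The conditional assignment at the end stands in for the tri-state buffers: when oe is low, the register releases the bus so some other device can drive it.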
There are four fundamental registers that usually exist in even the simplest CPUs. Two have already been mentioned: The instruction pointer/program counter and the accumulator. Another two important registers are the MAR (Memory Address Register), which stores memory addresses which are to be later placed on the address bus, and the instruction register, which stores an instruction that has been fetched from memory. These registers might seem superfluous at first glance, but they are needed because a CPU does things step by step; you might at first think that instead of needing a register to store memory addresses, wouldn't it be simpler and faster to just pipe the memory address directly onto the address bus? The answer is yes, that would be simpler, but it would not work because when a memory address is read from the memory, the address bus is currently being used to indicate the address that the address is being read from. To illustrate this by example, suppose that you are reading in the instruction LDA $1FF (an instruction to load the accumulator with the contents of memory address $1FF) from memory location $3FF. In order for this to happen, the address bus must have $3FF on it, because that is what causes the memory to produce the bytes that constitute the LDA $1FF instruction. Since the CPU is currently looking at that location in memory, you can't simply dump the address $1FF on the address bus; you must wait until the address bus is free. Therefore, a MAR is used to temporarily store memory addresses until they are ready to actually be used when the address bus is not being used for any other purpose. Similarly, when you load an instruction from memory, the CPU cannot actually start executing that instruction instantly; it must first stop whatever else it is doing, and since the very act of pulling the instruction out of memory constitutes activity, a separate instruction register is used to hold the instruction until the CPU is ready to deal with it.
Since all of these registers get their data over the same bus (the data bus), most of these registers will need two "Enable" signals: One for their input, and one for their output. Through these Enable wires, it's possible to achieve the necessary state of allowing only one device to place data on the data bus at a time. Since there are several registers and some will have multiple control lines, it makes sense to name these control lines and give abbreviations to these names, so that you can easily and specifically refer to any one control line. These names can be anything you want (after all, when you design something, you get to apply the names you choose to the various parts of your creation), but for the purposes of this page, some kind of standardization is needed so that you'll know what I'm talking about, so let's try to establish some simple code to refer to all these control lines. I'll list the names that you might apply to the control lines in your own CPU. The actual names that you use are up to you, but these are the names I'm going to be using within this document. Note that some of these names are actually semi-standard and you might see them (or something very similar to them) on other documentation describing CPU design.
The accumulator's input enable signal might be called LA (Load Accumulator), while the accumulator's output enable signal might be called EA (Enable Accumulator).
The program counter's output enable signal might be called EP (Enable Program counter).
The MAR's input enable signal might be called LM (Load MAR).
The instruction register's input enable signal might be called LI (Load Instruction register), while the instruction register's output enable signal might be called EI (Enable Instruction register).
To summarize, let's list the control lines we've created to turn the registers' inputs and outputs on or off:
EA: Accumulator Output
EI: Instruction register Output
EP: Program counter Output
LA: Accumulator Input
LI: Instruction register Input
LM: MAR Input
A "decoder" is actually a generic name for a relatively simple digital logic device. A decoder has a certain number of binary inputs; if we call the number of inputs n, then the decoder has 2^n outputs. The idea is that for every possible combination of inputs, a single output line is activated. For example, in a 2-to-4 decoder (a decoder with 2 inputs and 4 outputs), you have 4 possible input combinations: 00, 01, 10, or 11. Each of these input combinations will trigger a single output wire on the decoder.
An instruction decoder is simply this concept applied to CPU opcodes. The opcodes of a CPU are simply unique binary numbers: A specific number is used for the LDA instruction, while a different number is used for the STA instruction. The instruction decoder takes the electric binary opcode as input, and then triggers a single output circuit. Therefore, each opcode that the CPU supports actually takes the form of a separate physical circuit within the CPU.
If you're good at thinking ahead with logic, you may have already anticipated an interesting problem that arises using the aforementioned instruction decoder: CPU instructions take more than one step to perform. Whether you're loading the accumulator with some value, storing something in memory, or sending data over an I/O line, almost every CPU opcode takes several steps. How can you perform these steps sequentially through the triggering of a single output? The answer lies partially with a key component of CPU control: The ring counter. This device often goes by other names (partly because "ring counter" is actually a generic name for the device that can be used in contexts that are not specific to CPUs), but I'll continue to call it by this name within this context.
A ring counter is somewhat different from a regular digital counter. To review: A regular counter is simply a series of flip-flops that are lined up in such a way that every time the counter's input triggers, the output of the counter goes up by 1. A 4-bit counter has 4 binary outputs, and will count from 0000 to 1111; after that it will loop around to 0000 and start over again. In contrast, a ring counter only has one single output active at any time. Every time the ring counter's input triggers, the active output moves along. The active output simply cycles through a rotating circle of possible locations, hence the name "ring" counter.
As mentioned, all CPUs have clock inputs. It turns out that all these clock inputs do is drive this ring counter. The ring counter, together with the instruction decoder, take care of all the activity within the CPU.
Among different CPUs, ring counters tend to have varying numbers of outputs; the number of outputs needed really depends on how many steps the CPU needs to perform each instruction. Since you can't create new outputs out of thin air (they are actual physical wires), the ring counter needs to have as many outputs as there are steps in the lengthiest CPU opcode. Any instructions which don't need that many steps can "waste" cycles by allowing the ring counter to circle around, but this is obviously inefficient, so a separate "reset" input usually exists on the ring counter so that it can start counting from the beginning if the current instruction has finished.
If you don't want to make your own ring counter circuit, there are two chips in the famous 4000 series of chips which act as ring counters: The 4017 is a 10-stage ring counter, while the 4022 is an 8-stage ring counter.
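If you would rather model a ring counter in an HDL, a sketch along these lines captures the idea; the six-stage width and the signal names are just assumptions for the example:

// Ring counter: a lone '1' circulates through the outputs, advancing one
// position on every clock edge; reset returns it to the first stage (T0).
module ring_counter #(parameter STAGES = 6) (
    input  wire              clk,
    input  wire              reset,
    output reg [STAGES-1:0]  t
);
    always @(posedge clk or posedge reset)
        if (reset)
            t <= {{(STAGES-1){1'b0}}, 1'b1};    // only T0 active
        else
            t <= {t[STAGES-2:0], t[STAGES-1]};  // rotate the active bit
endmodule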
Having made it this far, we now have an instruction decoder that will give us one active wire representing which opcode the CPU is running, and a ring counter that allows us to proceed step-by-step through this instruction process. Obviously, we need some way of merging these control signals so that the actual work can get done. Where does this take place? In the most hairy and complex realm of the CPU: The control matrix.
The control matrix is where things really get crazy inside a CPU. This is where each output from the instruction decoder and the ring counter meet in a large array of digital logic. The control matrix is typically made of many logic gates which are wired in a highly architecture-specific way to activate the correct control lines that are needed to fulfill the opcode being processed.
For example, suppose the instruction decoder has activated the LDA FROM MEMORY output, meaning the CPU has been given an LDA instruction to load the accumulator with the value from a specific memory address. The first step in executing this instruction might be to increment the instruction pointer so it can retrieve the desired memory address from memory. Thus, the LDA FROM MEMORY output from the instruction decoder and the first output from the ring counter might go to an AND gate, which of course will then trigger only during the very first step of an LDA FROM MEMORY instruction. The output of this AND gate might go directly to the clock input on the instruction pointer. The second step in executing this instruction might be to load the MAR with the desired memory address, which the instruction pointer is now helpfully pointing at, so perhaps the LDA FROM MEMORY output and the second ring counter output meet at another AND gate which goes directly to LM. And so on. This is how instructions are performed inside a CPU.
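Expressed in an HDL instead of physical gates, one slice of such a control matrix might look like the sketch below. The signal names (LDA_MEM from the instruction decoder, T1 through T3 from the ring counter, and the register control lines named earlier) and the exact ordering of steps are assumptions made for this example:

// One slice of a hard-wired control matrix: each control line is simply
// the AND of an instruction-decoder output with a ring-counter output.
module control_matrix_slice (
    input  wire LDA_MEM,      // from the instruction decoder
    input  wire T1, T2, T3,   // successive outputs of the ring counter
    output wire CP_IP,        // clock (increment) the instruction pointer
    output wire LM,           // load the MAR
    output wire LA            // load the accumulator
);
    assign CP_IP = LDA_MEM & T1;  // step 1: advance IP past the opcode
    assign LM    = LDA_MEM & T2;  // step 2: latch the operand address into the MAR
    assign LA    = LDA_MEM & T3;  // step 3: latch the fetched value into the accumulator
endmodule

In a real design, each of these control lines would be the OR of many such AND terms, one for every instruction and step that needs it.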
In smaller hand-made CPUs, the control matrix is often made of an array of diodes and wires, but because of the complexity of the control matrix in a larger CPU with several possible instructions, this quickly becomes impractical for a CPU with more than a dozen or so instructions. Therefore, the control matrix is often actually implemented as a ROM/PROM/EPROM/EEPROM chip. In this arrangement, each "address" of the ROM chip actually becomes a combination of inputs from the instruction decoder and ring counter. The data stored at each "address" of the ROM is the correct combination of control outputs. You can then quickly create and reconfigure your control logic by simply reprogramming the ROM chip. This kind of programming is the lowest-level programming possible in a general computer, and is frequently called "microprogramming," while the code that goes into the ROM is usually called "microcode."
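A minimal sketch of that ROM-based arrangement might look like the following Verilog; the widths, the signal names, and the microcode file name are all assumptions made for this example:

// Microcode ROM: the opcode and the ring-counter step together form the
// ROM address, and the word stored there is the set of control-line outputs.
module microcode_rom (
    input  wire [3:0] opcode,    // from the instruction register
    input  wire [2:0] step,      // current step number within the instruction
    output wire [7:0] controls   // e.g. {EA, EI, EP, LA, LI, LM, CP_IP, RW}
);
    reg [7:0] rom [0:127];                    // 16 possible opcodes x 8 steps
    initial $readmemb("microcode.txt", rom);  // microcode written by the designer
    assign controls = rom[{opcode, step}];    // look up the control word
endmodule

Rewriting the control logic then becomes a matter of regenerating the contents of microcode.txt rather than rewiring gates.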
It's fun to create a simple CPU out of a series of electronic components on a breadboard or something similar. However, today people who want to design their own CPU typically do so through software, using a hardware description language like Verilog. Since Verilog is a very powerful language that makes it surprisingly easy to design and model the complex workings of a computer's CPU, it makes sense to examine how we can write a piece of Verilog code that will act like a CPU.
Since this is a section on CPU architecture specifically, we won't be going into the details of Verilog syntax here. If you want to learn a bit more about coding with Verilog, check out my own Verilog section, or another learning resource; several excellent books and websites about Verilog exist, from which you can learn a lot.
Since all the CPU really wants to do is perform instructions, and it needs to check the instruction pointer to know where to look for the next instruction, let's start with the instruction pointer. Assuming we have a 16-bit address bus (as most early microcomputers did) and we can run an instruction from any location in memory, we'll need a 16-bit instruction pointer. We can declare this item thusly:
reg [15:0] IP;
Because the IP gets its instructions from memory, we'll also need some way of accessing memory. In modern computer design, the memory is often integrated into the same programmable chip as the CPU, so that a single chip becomes the entire computer (a system known as a SOC, or System On a Chip). However, for this discussion, we're only making a true CPU, not an entire SOC, so the device we're making has no memory of its own and thus needs external memory buses. The buses are actual, physical wires, so we can define them as such:
output [15:0] abus;   //address bus
inout [7:0] dbus;     //data bus
Now we have a way of specifying a memory address: The abus output.
Many CPUs use idiosyncratic specifications to deal with the problem of deciding where to start getting instructions in memory. Some CPUs have a specific point that they start executing instructions from, others use a reset vector, a location in memory that stores another memory location from which to begin executing instructions. Some people may argue the virtue of one system over another, but for our current discussion, let's keep things simple and just assume that our CPU begins executing instructions from memory address 0, and just proceeds from there. To do this, we'll first need to set our address bus to equal 0, so our memory chips will output whatever instruction is at address 0.
IP = 0;
This line initializes our instruction pointer to its first memory location. From here on, we will need to get our instructions from memory by setting the address bus to reflect whatever is stored in the instruction pointer.
abus = IP;
Assuming the interface between the CPU and the memory now works properly, the data bus should currently be reflecting whatever is at the specified memory address. If the memory has been programmed correctly, this is an opcode, a binary number which corresponds to a specific instruction that the CPU can perform. The CPU needs to be able to act on this opcode and select a course of action based on exactly what that opcode is. This is the job for the instruction decoder, which in Verilog can be programmed quite easily using the case statement.
case(dbus)
    opcode1: //code for opcode 1
    opcode2: //code for opcode 2
endcase
Here's where we need to get creative. We need to invent our own opcodes. Every CPU has a set of opcodes, and each opcode has a specific number associated with it. Since we're not making a world-class CPU right now, we can start with just a very basic set of instructions for the time being. Let's say that the two most fundamental opcodes a CPU can have are LOAD and STORE: An opcode to load the CPU's accumulator with a value from a specific memory address, and an opcode to store the value presently in the accumulator into a specific memory address. (BURY and DISINTER, as Cryptonomicon's Lawrence Pritchard Waterhouse called them.) Since we want to keep things simple, let's furthermore assign the numbers 0 and 1 to opcodes LOAD and STORE, respectively. So, we'd modify the case statement above to look something like this:
case(dbus)
    0: //LOAD instruction
    begin
        //code for the LOAD instruction goes here!
    end
    1: //STORE instruction
    begin
        //code for the STORE instruction goes here!
    end
endcase
Obviously, we need to fill in the actual code to perform the opcodes, but we can get to writing the actual code for the CPU instructions later.
Tying it all together, then, all you really need to make a CPU in Verilog is the initial register and pin declarations, a reset vector (or at least some set location in memory where instruction execution will begin), and then just the code for all of the CPU's individual opcodes. That's really it. You can make a working CPU that simply. You can connect it to a ROM and put some small program in the ROM to test your CPU.
Below is a general idea of what your CPU code might look like overall. This code is obviously just a rough draft and you'll want to improve on it if you plan to use your CPU for anything. This is just to give you an idea of what can be done and how to do it. Be creative!
reg [15:0] IP;          //instruction pointer
reg [7:0] A;            //accumulator
input clk;              //CPU clock input
output [15:0] abus;     //address bus
inout [7:0] dbus;       //data bus
output RW;              //read/write output so the memory knows whether to read or write

IP = 0;                 //initial location of instruction execution

always @(posedge clk)   //main CPU loop starts here
begin
    RW = 0;             //set read/write low so it reads from memory
    abus = IP;          //put IP on the address bus, so that the memory produces
                        //the next instruction on the data bus.
    case(dbus)
        0: //LOAD instruction
        begin
            IP = IP + 1;    //increment IP so it points to the memory vector
            RW = 0;         //set read/write low so it reads from memory
            abus = IP;      //examine the memory location it points to
            A = dbus;       //load the accumulator with the value at the address
            IP = IP + 1;    //increment IP so it points to the next instruction
        end
        1: //STORE instruction
        begin
            IP = IP + 1;    //increment IP so it points to the memory vector
            RW = 0;         //set read/write low so it reads from memory
            abus = IP;      //examine the memory location IP points to
            RW = 1;         //set read/write high so it writes to memory
            dbus = A;       //send the accumulator onto the data bus
            IP = IP + 1;    //increment IP so it points to the next instruction
        end
    endcase
end
Copper, Iron, Manganese and Zinc
Four essential micronutrients, copper, iron, manganese and zinc, were measured at the Alberta Environmentally Sustainable Agriculture (AESA) Soil Quality Benchmark Sites. Copper is very important for a plant's reproductive growth stage and affects chlorophyll production. Iron is critical for chlorophyll formation and photosynthesis, and important in plant enzyme systems and respiration. Manganese is important in carbohydrate and nitrogen metabolism. Zinc is essential for sugar regulation and enzymes that control plant growth, especially root growth.
Results from 43 sites across Alberta show some important differences in levels of the micronutrients copper, iron, manganese and zinc based on soil properties, slope position and agricultural ecoregion. Although micronutrient deficiencies were not widespread, 19 per cent of the topsoil samples were deficient in copper, and 11 per cent were deficient in zinc.
At some sites, micronutrient levels ranged from deficient on the upper slope to more than adequate at the lower slope. None of the samples had potentially toxic levels of copper, iron or zinc. The few samples with potentially toxic manganese levels were associated with low pH (acidic) soils.
Soil organic matter, pH and clay content had the greatest influence on micronutrient levels. The strong influence of soil organic matter was evident as both organic matter and micronutrient levels increased from the upper to lower slopes of many sites. This finding highlights the importance of agricultural practices that minimize soil erosion and conserve soil organic matter. It also indicates that micronutrient deficiencies tend to occur in patches rather than throughout a field.
Soil organic matter, pH and clay content also influenced the relationship between micronutrient levels and ecoregions (areas of similar soils, landforms, climate and vegetation). For example, low zinc values occurred most frequently in the Mixed Grasslands Ecoregion in Southern Alberta where the soils generally have low soil organic matter and high pH.
What are Micronutrients?
Nutrients essential for plant growth are categorized as macronutrients (such as nitrogen, phosphorus and potassium) and micronutrients. Micronutrients are just as essential as macronutrients but are required in smaller amounts by plants. There are eight essential micronutrients: copper, zinc, iron, manganese, boron, chloride, molybdenum and nickel.
Why are Micronutrients Important?
Crop growth, quality and/or yield may be affected if any one of the eight essential micronutrients is lacking in the soil or is not adequately balanced with other nutrients.
Micronutrient Availability to Plants
The availability of a micronutrient to plants is determined by both the total amount of the nutrient in the soil and the soil's properties. Other factors, such as crop species and variety, can also influence the degree to which micronutrient levels affect crop production.
The main soil properties affecting the availability of copper (Cu), iron (Fe), manganese (Mn) and zinc (Zn) are:
- pH - these micronutrients become less available as the soil becomes more alkaline, that is, as soil pH increases.
- soil organic matter content - soil organic matter holds micronutrients in both plant-available and unavailable forms. Low organic matter soils usually have less available copper, iron, manganese and zinc than soils with moderate amounts of organic matter. However high organic matter soils can also have low plant-available micronutrient levels because organic matter can tie up the micronutrients in unavailable forms. In particular, Cu becomes less available as soil organic matter content increases.
- clay content - clay soils are likely to have higher levels of micronutrients, and sandy soils are likely to have lower levels.

Free lime (CaCO3), soil temperature and soil moisture also influence micronutrient availability. Free lime precipitates and adsorbs the micronutrients, making them less available to plants. Cool, wet soils can reduce the rate and amount of micronutrients taken up by crops.
Crop type, variety and growing conditions can affect whether or not a micronutrient deficiency will occur. For example, wheat, barley and oat are prone to copper deficiency, and beans and corn are prone to zinc deficiency. As well, some oat varieties are much more prone to manganese deficiency than others. Good growing conditions for crops generally favor nutrient uptake, but high yields also increase the nutrient requirements of crops.
Past research and observations have shown that micronutrient deficiencies are less common in Alberta than in many other parts of the world. Toxic levels are also uncommon in Alberta soils.
AESA Soil Quality Benchmark Sites
The AESA Program, in conjunction with the Province's agri-food industry, initiated the Soil Quality Benchmark Sites in 1998. The benchmarks' objectives are to identify and monitor agricultural impacts on soil resources and to collect soil information to help develop environmentally sustainable agricultural practices.
Forty-three benchmark sites were established and located on typical farm fields throughout the province's agricultural areas, in seven agricultural ecoregions (Figure 1). (The Mixed Boreal Uplands Ecoregion had only one monitoring site, so its results are not included in this summary.)
The AESA Soil Quality Benchmark Sites are located to be representative of different ecoregions and ecodistricts. An ecoregion is an area of similar soils, landforms, climate and vegetation; an ecodistrict is a subdivision of an ecoregion. Using this ecoregion approach, researchers are better able to compare data and evaluate broad trends.
Figure 1. Locations of AESA Soil Quality benchmark sites, ecoregions and ecodistricts
Information is collected for each benchmark site concerning landforms, soil profile, soil and crop management practices as well as soil properties. Soil properties are measured at three sampling locations - upper slope, mid slope and lower slope - for each of the 43 sites. This detailed sampling approach allows variations within a field to be assessed along with broad regional trends.
The micronutrient status of the benchmark sites was assessed in 2001. Researchers conducted a one-year project to measure levels of copper, iron, manganese and zinc and to assess the influence of ecoregion, slope position and soil characteristics on the levels of these four micronutrients in the topsoils and subsoils at the 43 sites.
Copper and zinc were selected for analysis because previous research and observations showed that they are the micronutrients most likely to be deficient in Alberta soils. Iron and manganese were included because they can be extracted using the same laboratory procedure as copper and zinc.
Soil Sampling and Analysis
Samples of the topsoil (A horizon) and subsoil (B horizon) were taken at the upper, mid and lower slope positions at each site. The samples were analyzed for copper, iron, manganese, and zinc along with a wide range of other chemical and physical characteristics, including soil organic matter content, soil texture and pH.
A commonly used procedure (called diethylenetriaminepentaacetic acid or DTPA extraction) was used to extract Cu, Fe, Mn and Zn from the samples. The extractable amounts of the micronutrients are an estimate of the plant-available levels. However, they are not identical to plant-available levels because soil properties and other factors affect availability, as noted earlier.
The micronutrient status of each benchmark site and slope position was categorized as deficient, marginal or adequate, based on the extractable concentrations (Table 1). Micronutrient levels are represented in milligrams per kilogram (mg/kg) which is the same as parts per million (ppm).
Table 1. Ranges for extractable micronutrient levels in soils
As expected from previous research, extractable levels of the four micronutrients were most strongly affected by soil organic matter content, pH and clay content (Table 2). The effects of these soil properties can also be seen when the results are considered by ecoregion and slope position.
Table 2. Summary of strong relationships between extractable micronutrient levels and soil properties
A. Soil-related trends
- 19 per cent of the topsoil samples and 17 per cent of the subsoil samples were deficient in copper.
- Clay and organic matter content had the greatest influence on extractable copper.
- Extractable copper generally decreased as the clay content decreased. Thus, as expected, sandy soils were much more likely to be copper-deficient than clay soils.
- Extractable copper generally increased as the organic matter content increased. However, for soils with high organic matter levels, some of this extractable copper may be held in forms not available to plants. Thus, both low and high levels of soil organic matter can result in low plant-available copper.
- None of the samples had extractable copper values in the toxic range.
- None of the samples had extractable iron in the deficient or marginal ranges.
- Soil organic matter and pH had the strongest influence on iron levels. Iron levels decreased as pH increased and as organic matter decreased.
- None of the samples had extractable iron values in the toxic range.
- None of the topsoil samples and only one subsoil sample was in the deficient range.
- Soil pH had the strongest influence on extractable manganese levels, with manganese decreasing as pH increased.
- Relatively high extractable manganese (>35 mg/kg) occurred at five sites with low pH soils.
- 11 per cent of the topsoil samples and 28 per cent of the subsoil samples were deficient in zinc.
- Soil organic matter had the strongest influence on extractable zinc, with zinc decreasing as organic matter decreased.
- None of the samples had extractable zinc values in the toxic range.

B. Ecoregion trends
Micronutrient levels and soil properties are summarized for each ecoregion in Table 3. The relationships of micronutrient levels to the ecoregions were not as strong as the relationships of micronutrient levels to soil properties. This difference is because of the relatively large variation in soil properties within each ecoregion.
Table 3. Extractable micronutrient levels and soil properties for each ecoregion
The highest frequency of deficient and marginal copper values occurred in the ecoregions in Central Alberta (Boreal Transition, Aspen Parkland and Moist Mixed Grasslands). These results are consistent with research and observation of a relatively high frequency of copper deficiency on sandy loam and light loam soils in Central Alberta. A common characteristic of copper-deficient soils in the Aspen Parkland and Boreal Transition ecoregions is low clay and/or high organic matter content.
The highest manganese values occurred in the four ecoregions with the lowest pH (Boreal Transition, Aspen Parkland, Moist Mixed Grasslands and Fescue Grasslands).
Figure 2. Copper level by ecoregion
The lowest extractable iron values occurred in the Mixed Grasslands Ecoregion on soils with low organic matter (<2 per cent), high pH (>8.0) and high free lime. In this ecoregion, iron deficiency symptoms are common on some species of trees, shrubs and ornamentals (but iron deficiencies have not been found in field crops in this or any other ecoregion in Alberta).
Figure 3. Iron level by ecoregion
Figure 4. Manganese level by ecoregion
9 of the 14 samples deficient in zinc were from the Mixed Grasslands Ecoregion and had less than 2 per cent soil organic matter. Previous studies have identified zinc deficiency in beans and corn in this ecoregion.
Figure 5. Zinc level by ecoregion
C. Slope position trends
Extractable levels of all four micronutrients tended to increase from the upper to the lower slope position (Table 4). At some sites, this downslope increase was quite large, ranging from deficient at the upper slope to more than adequate at the lower slope.
Table 4. Average micronutrient levels in topsoil samples by slope position

Soil organic matter content also tended to increase downslope, indicating the strong influence of organic matter on micronutrient levels. It was not possible to determine if the downslope trends in organic matter and micronutrient levels occurred naturally or were caused by soil and crop management practices that accelerated erosion of the upper slopes.
Implications for field management
- On soils with low to moderate organic matter levels, reduce the likelihood of iron, manganese and zinc deficiencies by using practices that increase soil organic matter, such as reducing tillage and applying manure to eroded knolls. Copper is an exception; applying manure on eroded knolls can increase copper deficiency.
- Practices to decrease soil erosion can reduce both variability within fields and development of micronutrient-deficient areas.
- Copper and zinc are the micronutrients most likely to be deficient.
- Micronutrient deficiencies are not widespread in Alberta, but significant reductions in crop yield and quality can occur on some soils. Because symptoms of these deficiencies are easy to confuse with other problems such as salinity, herbicide injury and disease, it is best to test the soil before considering a micronutrient fertilizer application.
- Large differences in micronutrient levels occurred between upper and lower slope positions, so a composite soil sample from a field may not identify a deficiency. If you suspect a deficiency, collect samples on the field's upper slopes and other areas where the crop shows signs of a possible deficiency. Compare the micronutrient levels in these samples with those from areas where the crop looks healthy.
- If you do have a micronutrient deficiency, select crops and crop varieties less susceptible to the deficiency, or consider applying the appropriate micronutrient fertilizer in a test strip to assess its cost-effectiveness before trying a broader application.
- Toxic levels of the four micronutrients are uncommon in Alberta. However, they may occur under some circumstances: where soils are very acidic or where high rates of amendments with high micronutrient levels (such as municipal and industrial sewage sludges) have been applied.
- On acidic soils, watch for manganese toxicity.
D. Topsoil versus subsoil trends
The topsoil samples and subsoil samples had generally similar trends for the relationships of micronutrient levels to soil properties, ecoregion and slope position.
Concentrations of iron, manganese and zinc were generally somewhat higher in the topsoil than in the subsoil. The pattern for copper was more varied, with the subsoil concentrations lower in some cases and higher in others.
For more information on micronutrients and crop growth, contact a professional agronomist or see the following factsheets from Alberta Agriculture, Food and Rural Development: Micronutrient Requirements of Crops in Alberta (Agdex FS531-1), Copper Deficiency: Diagnosis and Correction (Agdex FS532-3) and Minerals for Plants, Animals and Man (Agdex FS531-3).
For copies of this and other fact sheets in the AESA Soil Quality Benchmark Sites factsheet series or for information on the AESA Soil Quality Benchmark Study, call Jason Cathcart at 780-427-3432.
Prepared by Douglas Penney, P.Ag.
Understanding Basic Elementary Fractions
Understanding fractions is an important milestone in elementary math. Just like learning to add and subtract, learning how to work with fractions is key to success in mathematics subjects like geometry and algebra. Keep reading to learn how to write, add, subtract and reduce fractions!
What Is a Fraction?
Most things in the world can be divided into parts. Pizzas can be divided into slices, and oranges can be divided into segments. Days are divided into hours, minutes and seconds. Fractions are what we use to show that something is a part of a whole.
Fractions always have two numbers. They are written one on top of the other with a line in between, or side by side, like this: 1/2. The denominator is the number on the bottom of the fraction. In the fraction 1/2, the denominator is two. The fraction's numerator is the number on top. The numerator for 1/2 is one.
Remember that fractions are used to talk about parts of a whole. Here's how they work: the denominator tells you how many total parts the whole has, and the numerator tells you how many of those parts you have. For example, if you have a pizza that has eight slices total, and you take three of them, you have taken 3/8 of the pizza. Since there were eight total slices, and you took three, that means that there are five slices left in the box, or 5/8 of the pizza.
Adding and Subtracting Simple Fractions
Notice that, in the example above, you took 3/8 of the pizza slices and left 5/8 behind. Three plus five equals eight, so if you add those two fractions together, you get 8/8. Eight minus three equals five, which is the number of slices left after you took your pizza. When you add and subtract fractions that have the same denominator, you just change the numerator, while keeping the denominator the same.
Imagine you're in a class of 13 students. Seven of the students, or 7/13, have brown hair. Four of the students, or 4/13, have blond hair, and two students, or 2/13, have red hair. If you want to know what fraction of the students have either blond or red hair, you would add together 4/13 and 2/13. You would leave the denominator (13) the same, and add together four and two to get six. This tells you that 6/13 of the students have either red or blond hair. Here are some other examples:
- 1/6 + 4/6 = 5/6
- 7/11 - 2/11 = 5/11
- 1/31 + 9/31 = 10/31
Finding Equivalent Fractions
Imagine that you have a pie that's divided into four large slices. You take two of the four slices, or 2/4 of the pie, and leave behind two slices, or 2/4, for your friend. This means that you've taken half of the pie and left half of it in the pan, right? Whether you cut the pie into six slices and take three of them (3/6), or cut the pie into eight slices and take four (4/8), you've still taken half of the pie. Here's how a mathematician would write this:
1/2 = 2/4 = 3/6 = 4/8
All of these fractions are equal, because they all represent the same total amount of the pie. One slice is half of two slices, two slices is half of four slices and so on. Here are some other examples of equivalent fractions:
- 1/3 = 2/6 = 3/9 = 4/12
- 1/4 = 2/8 = 3/12 = 4/16
- 2/5 = 4/10 = 6/15 = 8/20
Finding the Simplest Fraction
In math, we like to keep things simple by using the smallest numbers possible to represent fractions. For instance, if we have half of a pie, we'd rather say that we have 1/2 of a pie than 4/8 of a pie. In order to write fractions with the smallest possible numbers, we reduce or simplify them. This means that we find a fraction that is equivalent to the one we're simplifying, but use the smallest possible numbers.
For example, if we wanted to simplify 4/8, we would reduce it to 1/2. We know that 4/8 is equal to 3/6 and 2/4 as well, but 1/2 uses lower numbers to represent the fraction. The fraction 3/12 is equal to 2/8 and 1/4, but we would reduce it to 1/4, since one and four are smaller than two and eight.
How to Reduce Fractions
To find the simplest version of a fraction, think of a number greater than one that both the numerator and denominator are divisible by, and divide them both by it. For example, if you're simplifying 5/15, you could divide the numerator and denominator both by five to get 1/3. There is no number other than one that both one and three are divisible by, so 1/3 is the simplest version of 5/15.

Let's try to simplify a slightly more complex fraction, like 18/24. Both 18 and 24 are divisible by two, so we can reduce this fraction to 9/12. Both nine and 12 are divisible by three, so we can reduce this fraction further to 3/4. There aren't any numbers other than one that divide evenly into both three and four, so 3/4 is the simplest version of 18/24.

Some larger fractions, like 11/15 and 13/20, can't be reduced at all, since there is no number other than one that divides evenly into both the numerator and the denominator. This is also true of smaller fractions like 2/3 and 4/5.
How much is the Earth’s melting land ice contributing to global sea-level rise? A team of scientists based at the University of Colorado, in Boulder, has plugged in to NASA data to try to answer this question as accurately as possible, in an attempt to gauge the future threat from rising sea levels, as well as the impact of climate change on cold-climate parts of the globe.
In what the US space agency has described as the first comprehensive satellite study of its kind, the research team has used data from the NASA/German Aerospace Center Gravity Recovery and Climate Experiment (GRACE) to measure ice loss from all of Earth’s land ice between 2003 and 2010, with a particular focus on glaciers and ice caps outside of Greenland and Antarctica.
According to the NASA website, “the twin GRACE satellites track changes in Earth’s gravity field by noting minute changes in gravitational pull caused by regional variations in Earth’s mass, which for periods of months to years is typically because of movements of water on Earth’s surface. It does this by measuring changes in the distance between its two identical spacecraft to one-hundredth the width of a human hair.”
So what did the researchers find? They found that the total global ice mass lost from Greenland, Antarctica and the Earth’s glaciers and ice caps during the study period was about 4.3 trillion tonnes (1,000 cubic miles – apparently that’s enough ice to cover the US with a layer 1.5 feet deep), which has added about 0.5 inches (12 millimeters) to global sea level.
“Earth is losing a huge amount of ice to the ocean annually, and these new results will help us answer important questions in terms of both sea rise and how the planet’s cold regions are responding to global change,” said University of Colorado Boulder physics professor John Wahr, who helped lead the study. “The strength of GRACE is it sees all the mass in the system, even though its resolution is not high enough to allow us to determine separate contributions from each individual glacier.”
The study – the full results of which were published in the journal Nature early this month (subscription) – found that about a quarter of the average annual ice loss came from glaciers and ice caps outside of Greenland and Antarctica (roughly 148 billion tonnes, or 39 cubic miles); while ice loss from Greenland and Antarctica and their peripheral ice caps and glaciers averaged 385 billion tonnes (100 cubic miles) a year.
One unexpected result from the GRACE study was that the estimated ice loss from high altitude Asian mountain ranges like the Himalaya, the Pamir and the Tien Shan, was about 4 billion tonnes annually – a relatively small amount compared to previous ground-based estimates that have ranged up to 50 billion tonnes annually.
“This study finds that the world’s small glaciers and ice caps in places like Alaska, South America and the Himalayas contribute about 0.02 inches per year to sea level rise,” said Tom Wagner, cryosphere program scientist at NASA Headquarters in Washington. “While this is lower than previous estimates, it confirms that ice is being lost from around the globe, with just a few areas in precarious balance. The results sharpen our view of land-ice melting, which poses the biggest, most threatening factor in future sea level rise.”
Scientific evidence is evidence which serves to either support or counter a scientific theory or hypothesis. Such evidence is expected to be empirical evidence and in accordance with scientific method. Standards for scientific evidence vary according to the field of inquiry, but the strength of scientific evidence is generally based on the results of statistical analysis and the strength of scientific controls.
Principles of inference
Whether observations count as scientific evidence depends on principles of inference: the relevance of observations to a hypothesis is determined by examining the assumptions that connect them.
A person’s assumptions or beliefs about the relationship between observations and a hypothesis will affect whether that person takes the observations as evidence. These assumptions or beliefs will also affect how a person utilizes the observations as evidence. For example, the Earth's apparent lack of motion may be taken as evidence for a geocentric cosmology. However, after sufficient evidence is presented for heliocentric cosmology and the apparent lack of motion is explained, the initial observation is strongly discounted as evidence.
When rational observers have different background beliefs, they may draw different conclusions from the same scientific evidence. For example, Priestley, working with phlogiston theory, explained his observations about the decomposition of mercuric oxide using phlogiston. In contrast, Lavoisier, developing the theory of elements, explained the same observations with reference to oxygen. Note that a causal relationship between the observations and hypothesis does not exist to cause the observation to be taken as evidence, but rather the causal relationship is provided by the person seeking to establish observations as evidence.
A more formal method to characterize the effect of background beliefs is Bayesian inference. In Bayesian inference, beliefs are expressed as percentages indicating one's confidence in them. One starts from an initial probability (a prior), and then updates that probability using Bayes' theorem after observing evidence. As a result, two independent observers of the same event will rationally arrive at different conclusions if their priors (previous observations that are also relevant to the conclusion) differ. However, if they are allowed to communicate with each other, they will end in agreement (per Aumann's agreement theorem).
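As a schematic illustration (the numbers are arbitrary), Bayes' theorem gives the updated probability of a hypothesis H after observing evidence E as

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}$$

An observer whose prior is P(H) = 0.5 and who judges P(E|H) = 0.8 and P(E|¬H) = 0.2 arrives at P(H|E) = 0.8, whereas an observer who makes the same likelihood judgments but starts from a prior of 0.2 arrives at P(H|E) = 0.5. This is the sense in which rational observers with different background beliefs can draw different conclusions from the same evidence.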
The importance of background beliefs in the determination of what observations are evidence can be illustrated using deductive reasoning, such as syllogisms. If either of the propositions is not accepted as true, the conclusion will not be accepted either.
Utility of scientific evidence
Philosophers, such as Karl R. Popper, have provided influential theories of the scientific method within which scientific evidence plays a central role. In summary, Popper provides that a scientist creatively develops a theory which may be falsified by testing the theory against evidence or known facts. Popper’s theory presents an asymmetry in that evidence can prove a theory wrong, by establishing facts that are inconsistent with the theory. In contrast, evidence cannot prove a theory correct because other evidence, yet to be discovered, may exist that is inconsistent with the theory.
Philosophic versus scientific views of scientific evidence
The philosophical community has investigated the logical requirements for scientific evidence by examination of the relationship between evidence and hypotheses, in contrast to scientific approaches which focus on the candidate facts and their context. Bechtel, as an example of a scientific approach, provides factors (clarity of the data, replication by others, consistency with results arrived at by alternative methods and consistency with plausible theories) useful for determination of whether observations may be considered scientific evidence.
There are a variety of philosophical approaches to decide whether an observation may be considered evidence; many of these focus on the relationship between the evidence and the hypothesis. Carnap recommends distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis). Achinstein provides a concise presentation by prominent philosophers on evidence, including Carl Hempel (Confirmation), Nelson Goodman (of grue fame), R. B. Braithwaite, Norwood Russell Hanson, Wesley C. Salmon, Clark Glymour and Rudolf Carnap
Based on the philosophical assumption of the Strong Church-Turing Universe Thesis, a mathematical criterion for evaluation of evidence has been proven, with the criterion having a resemblance to the idea of Occam's Razor that the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized."
References
- Longino, Helen (March 1979). Philosophy of Science, Vol. 46. pp. 37–42.
- Thomas S. Kuhn, The Structure of Scientific Revolution, 2nd Ed. (1970).
- William Talbott "Bayesian Epistemology" Accessed May 13, 2007.
- Thomas Kelly "Evidence". Accessed May 13, 2007.
- George Kenneth Stone, "Evidence in Science"(1966)
- Karl R. Popper,"The Logic of Scientific Discovery" (1959).
- Reference Manual on Scientific Evidence, 2nd Ed. (2000), p. 71. Accessed May 13, 2007.
- Deborah G. Mayo, Philosophy of Science, Vol. 67, Supplement. Proceedings of the 1998 Biennial Meetings of the Philosophy of Science Association. Part II: Symposia Papers. (Sep., 2000), pp. S194.
- William Bechtel, Scientific Evidence: Creating and Evaluating Experimental Instruments and Research Techniques, PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association, Vol. 1 (1990) p. 561.
- Rudolf Carnap, Logical Foundations of Probability (1962) p. 462.
- Peter Achinstein (Ed.) "The Concept of Evidence" (1983).
- Paul M. B. Vitányi and Ming Li; "Minimum Description Length Induction, Bayesianism and Kolmogorov Complexity".
In today's technology, you hear a great deal about microprocessors. A microprocessor is an integrated circuit designed for two purposes: data processing and control.
Computers and microprocessors both operate on a series of electrical pulses called words. A word can be represented by a binary number such as 101100112. The word length is described by the number of digits or BITS in the series. A series of four digits would be called a 4-bit word and so forth. The most common are 4-, 8-, and 16-bit words. Quite often, these words must use binary-coded decimal inputs.
Binary-coded decimal, or BCD, is a method of using binary digits to represent the decimal digits 0 through 9. A decimal digit is represented by four binary digits, as shown below:

    Decimal    BCD
       0       0000
       1       0001
       2       0010
       3       0011
       4       0100
       5       0101
       6       0110
       7       0111
       8       1000
       9       1001
You should note in the table above that the BCD coding is the binary equivalent of the decimal digit.
Since many devices use BCD, knowing how to handle this system is important. You must realize that BCD and binary are not the same. For example, 49₁₀ in binary is 110001₂, but 49₁₀ in BCD is 0100 1001 (BCD).
You can see from the table above that conversion of decimal to BCD or BCD to decimal is similar to the conversion of hexadecimal to binary and vice versa.
For example, let's go through the conversion of 264₁₀ to BCD. We'll use the block format that you used in earlier conversions. First, write out the decimal number to be converted; then, below each digit write the BCD equivalent of that digit:

       2      6      4
    0010   0110   0100

The BCD equivalent of 264₁₀ is 001001100100 (BCD). To convert from BCD to decimal, simply reverse the process as shown:

    0010   0110   0100
       2      6      4
The procedures followed in adding BCD are the same as those used in binary. There is, however, the possibility that addition of BCD values will result in invalid totals. The following example shows this:
Add 9 and 6 in BCD:

      1001   (9 in BCD)
    + 0110   (6 in BCD)
    ------
      1111

The sum 1111₂ is the binary equivalent of 15₁₀; however, 1111 is not a valid BCD number. You cannot exceed 1001 in BCD, so a correction factor must be made. To do this, you add 6₁₀ (0110 BCD) to the sum of the two numbers. The "add 6" correction factor is added to any BCD group larger than 1001₂. Remember, there is no 1010₂, 1011₂, 1100₂, 1101₂, 1110₂, or 1111₂ in BCD:

      1111   (invalid BCD sum)
    + 0110   (add 6 correction)
    ------
    1 0101

The sum plus the add 6 correction factor can then be converted back to decimal to check the answer. Put any carries that were developed in the add 6 process into a new 4-bit word:

    0001 0101 (BCD) = 15₁₀
Now observe the addition of 60₁₀ and 55₁₀ in BCD:

      0110 0000   (60 in BCD)
    + 0101 0101   (55 in BCD)
    -----------
      1011 0101

In this case, the higher order group is invalid, but the lower order group is valid. Therefore, the correction factor is added only to the higher order group as shown:

         1011 0101
       + 0110 0000   (add 6 to the higher order group)
       -----------
    0001 0001 0101

Convert this total to decimal to check your answer:

    0001 0001 0101 (BCD) = 115₁₀
Remember that the correction factor is added only to groups that exceed 9₁₀ (1001 BCD).
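The whole add-6 procedure can also be expressed compactly in a hardware description language. The sketch below, in Verilog, is illustrative only (the module and signal names are assumptions); it adds two BCD digits and applies the correction whenever the raw binary sum exceeds 1001:

// One-digit BCD adder: adds two BCD digits plus a carry-in and applies
// the add-6 correction whenever the raw binary sum is greater than 9.
module bcd_digit_adder (
    input  wire [3:0] a,     // BCD digit 0-9
    input  wire [3:0] b,     // BCD digit 0-9
    input  wire       cin,   // carry in from the next lower digit
    output reg  [3:0] sum,   // corrected BCD digit
    output reg        cout   // carry into the next higher digit
);
    reg [4:0] raw;
    always @(*) begin
        raw = a + b + cin;        // ordinary binary addition
        if (raw > 5'd9) begin
            raw  = raw + 5'd6;    // add-6 correction for an invalid BCD group
            cout = 1'b1;          // the carry moves into the next 4-bit word
        end else begin
            cout = 1'b0;
        end
        sum = raw[3:0];
    end
endmodule

Feeding in 9 and 6, for example, produces a corrected digit of 0101 with a carry of 1, matching the 15₁₀ worked out above.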
As far back as 1896, the Swedish scientist Svante Arrhenius hypothesized that changes in the concentration of carbon dioxide in Earth’s atmosphere could alter surface temperatures. He also suggested that changes would be especially large at high latitudes.
Arrhenius didn’t get every detail right, but his argument has proven to be pretty sound. Since the mid-20th Century, average global temperatures have warmed about 0.6°C (1.1°F), but the warming has not occurred equally everywhere. Temperatures have increased about twice as fast in the Arctic as in the mid-latitudes, a phenomenon known as “Arctic amplification.”
The map above shows global temperature anomalies for 2000 to 2009. It does not depict absolute temperature, but rather how much warmer or colder a region is compared to the norm for that region from 1951 to 1980. Global temperatures from 2000–2009 were on average about 0.6°C higher than they were from 1951–1980. The Arctic, however, was about 2°C warmer.
Why are temperatures warming faster in the Arctic than the rest of the world? The loss of sea ice is one of the most cited reasons. When bright and reflective ice melts, it gives way to a darker ocean; this amplifies the warming trend because the ocean surface absorbs more heat from the Sun than the surface of snow and ice. In more technical terms, losing sea ice reduces Earth’s albedo: the lower the albedo, the more a surface absorbs heat from sunlight rather than reflecting it back to space.
However, other factors contribute as well, explained Anthony Del Genio, a climatologist from NASA’s Goddard Institute for Space Studies. Thunderstorms, for instance, are much more likely to occur in the tropics than the higher latitudes. The storms transport heat from the surface to higher levels of the atmosphere, where global wind patterns sweep it toward higher latitudes. The abundance of thunderstorms creates a near-constant flow of heat away from the tropics, a process that dampens warming near the equator and contributes to Arctic amplification.
To read more about how climate change and Arctic amplification may be affecting storms, read the feature In a Warming World, Storms May Be Fewer but Stronger.
- Arrhenius, S. (1897, February) On the Influence of Carbonic Acid in the Air Upon Temperature of the Earth. Astronomical Society of the Pacific, 9 (54), 14.
- Lee, S. et al. (2011, August) On the Possible Link between Tropical Convection and the Northern Hemisphere Arctic Surface Air Temperature Change between 1958 and 2001. Journal of Climate, 22 (16), 4350–4367.
- National Snow & Ice Data Center Thermodynamics: Albedo. Accessed May 23, 2013.
- Serreze, M. & Barry, R. (2011, July 19) Processes and impacts of Arctic Amplification. Global and Planetary Change, 77 (1-2), 85-96.
- Sherwood, S. et al. (2011, August) Robust Tropospheric Warming Revealed by Iteratively Homogenized Radiosonde Data. Journal of Climate, 22 (20), 5336-5352.
- The Discovery of Global Warming (2013, February) The Carbon Dioxide Greenhouse Effect. Accessed May 24, 2013.
Credit: NASA image by Robert Simmon, based on GISS surface temperature analysis data including ship and buoy data from the Hadley Centre. Caption by Adam Voiland. | http://earthobservatory.nasa.gov/IOTD/view.php | 13 |
11 | The original Super Sun, prior to its nova, was accumulating electrons from the Galaxy consistent with the demands of the environment through which it was passing. As we have explained earlier, the Super Sun became too electro-negative and expelled material violently into its surrounding space. This material could not escape; its expulsion was opposed both by the post-nova Sun and by the Galaxy. It thus formed and filled a sac surrounding the newly created Solaria Binaria.
In the sac was the whole system of Solaria Binaria; the Sun, Super Uranus, the primitive planets, and the plenum (of gases and solids) of solar origin that nurtured the planets.
As the binary widens, the sac becomes conical in shape, narrowing from the size of the Sun at one end to about the size of Super Uranus at the other. A system of similar appearance has been postulated for the binary AM Herculis (Liller, p352). Wickramasinghe and Bessell describe gas flow patterns in X-ray-emitting binary systems. There, one may note a similarity in the shape of their pattern of maximum obscuration to the cone of gases proposed in this work.
Viewed from the outside the ancient plenum would have been opaque to light. Not so with the gas of the Earth's atmosphere today, which is eight kilometers thick if the atmosphere is considered as a column of gas of constant density.[32] This atmospheric layer is of trivial thickness compared to the radius of the Earth, yet its importance to the environment is unquestionable. Even this negligible atmospheric layer removes 18.4 per cent of the incoming sunlight, mostly by diverting it from its original direction of travel.
Some of this scattered light returns to space, but most of it is redirected several times to produce the blue sky so familiar to us. Atmospheric scatter is enhanced near sunset when the incoming light traverses an atmospheric column tens of times longer than near noon. The setting Sun is notably fainter and its color redder because of the increased scatter. If the atmospheric column were as little as 1280 kilometers thick (at the present surface air density) all of the sunlight would be deflected from its incoming direction. Light would still be seen but only after scattering several times; no discernible source could be identified with the light. So it was in the days of Solaria Binaria. To be precise, if, in the last days of Super Uranus, this body were about thirty gigameters from Earth and if Super Uranus was then as bright per square centimeter of surface as today's Sun, it would not have been directly visible unless the gas density in the plenum was close to that deduced today for the Earth's atmosphere at an altitude of eighty kilometers. To see the more distant Sun this density would have to be decreased another fourfold.[33]
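As an aside, the claim about a 1280-kilometer column can be illustrated with a simple exponential (Beer-Lambert) attenuation model, scaling the optical depth from the 18.4 per cent figure quoted above; this sketch is only meant to show how fast a direct beam disappears in a long column.

import math

tau_per_8km = -math.log(1 - 0.184)      # optical depth of one 8-km equivalent column
for columns in (1, 10, 160):            # 8 km, 80 km and 1280 km equivalent paths
    transmitted = math.exp(-columns * tau_per_8km)
    print(columns * 8, "km:", transmitted * 100, "% of the direct beam survives")
# At 1280 km the surviving direct fraction is around 1e-12 per cent -- effectively none.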
In the Age of Urania, Super Uranus was located about as far from the Sun as the orbit of the planet Venus today. This would provide the plenum with a volume of about 10²⁰ cubic kilometers. If the plenum contained as much as one per cent of the atoms in the present Sun, the gas density would be several times that found at the base of the Earth's atmosphere today. Neither star would be seen directly, and only a dim diffused light could reach the planetary surfaces.
As the binary evolved, the plenum came to contain an increased electrical charge; it expanded, leaving less and less gas in the space between the principals. Thus it became gradually more transparent.
Astronomers see diluting plenum gases elsewhere in evolving binary systems. Batten (1973a, p10), discussing matter flow within binary systems, favors gas densities of the order of 10¹³ particles per cubic centimeter. Warner and Nather propose a much higher density for one system (U Geminorum, a dwarf nova system) where they postulate a gas disc with 6 × 10¹⁷ electrons per cubic centimeter. Unless all the gas is ionized, the neutral gas density would be higher than the calculated electron density. The gas densities that they mention are comparable to those necessary to allow the early humans to discern the first celestial orbits.
In the earlier stages of Solaria Binaria the plenum was impenetrable to an outside observer; all detected radiation came from the surface layers of the cone-shaped sac, an area up to fifty-five times the surface of the Sun. The luminosity of the sac would arise from the transaction between inflowing galactic electrons and the gases on the perimeter of the sac.
The plenum, at formation, was electron-rich relative to the stars and the planetary nuclei centered within it. These latter electron-deficient bodies promptly initiated a transaction to obtain more electrons by expelling electron-deficient atoms into the volume of the plenum. The charge differences within the sac were modulated with time. In other words, the plenum was losing electrons from its perimeter to its center. In response, the size of the sac collapsed under cosmic pressure. In time this charge-redistribution might have diminished the volume of the sac by as much as tenfold, compressing the cone of gases into a cylinder or column of smaller diameter.
Running along the axis between the Sun and Super Uranus was an electrical discharge joining the two principals. Moving with this electrical flow was matter from the Sun that was bound for Super Uranus. Some of this matter would be intercepted by and incorporated into the primitive planets.
Induced by the electrical flow a magnetic field was generated which encircled the axis and radially pinched the gases. The pinch effect is self-limiting in that the more the current, the more the pinch. An infinite current in theory pinches the current carriers into an infinitesimal volume, extinguishing it (Blevin, 1964a, p214). Material would be extruded at both ends of the pinched flow by the pressure induced in the pinch.
This circular magnetic field, a magnetic tube, would induce randomly moving ions of the plenum to circulate along the field direction. The circulating motion of the ions eventually would be transferred by collision to the neutral gases. The result would be that in the outer regions flow would be dominated by revolution around the circumference of the tube. Everything here would eventually revolve uniformly. The innermost regions of the column were dominated by flow along the axis. Considerable transaction occurred at the junction of these two separately moving regions of the column, the central and the peripheral.
Some luminosity would arise from the transaction of electrons and ions deep within the magnetic tube. The ions electrically accelerated towards Super Uranus were neutralized at some point along their trajectory. At neutralization X-rays were produced. Some of the ions would be neutralized upon collision within the magnetic tube, most upon reaching Super Uranus; but, because of the pinch phenomenon noted above, some ions would be extruded and neutralized near the perimeter of the sac behind Super Uranus. Despite the high gas density in the original plenum, X-ray emission would be observable from the outside. That such is the case elsewhere is indicated by Brennan.
As the plenum diluted with time (in a manner to be discussed in Chapter Eleven) the outside observer would see deeper and deeper into the system, and eventually all of the X-ray emission would come from the interface between the magnetic tube and the surface of Super Uranus. As in other binary systems, a partial eclipse of the main X-ray source would then be seen as the dumb-bell revolved (see Tananbaum and Hutchings for data on other binaries).
Matsuoka notes a positive correlation between X-ray and optical emission in binaries. Radio-emitting regions surround many binary systems (Wickramasinghe and Bessell). Spangler and his colleagues claim that radio emission from binary stars is noted for stars that are over-luminous. The radio emission is generated by electrons transacting with the magnetic field associated with the inter-star axis. That this emission is enhanced when a stronger transaction occurs between the stars causing the over-luminosity is understandable, using our model.
At the perimeter of the plenum, optical effects would show to an outside observer an apparent absorption shell associated with the hidden binary within. Like many of the close-binary systems, the stars of Solaria Binaria would not be resolvable in a distant telescope, but the binary nature of the system could be known because observable differences would be produced as the dumb-bell revolved.
Gas-containing binary systems as described here, and elsewhere (Batten, 1973b, pp157ff, pp176ff), represent the state of Solaria Binaria at various epochs, and especially in its last days. As the binary system collapsed, the plenum thinned, allowing direct observation of light produced by sources inside the sac. The gas disc, theoretically implied to surround the stars of other binaries, is waning in the late translucent plenum. The gas streams detected flowing between certain binary components are present in Solaria Binaria along what we call the electrical arc. The gas clouds, whose absorption spectrum leads us to believe that they envelop entire binary systems, correspond to the perimeter of the early opaque plenum. As Solaria Binaria evolved, each of the classes of circumstellar matter noted by astronomers became observable in their turn.
Inferable from the above is the degree of visibility from the Earth's surface, or from any point of the planetary belt within the plenum. Overall there is a translucence. Objects near at hand might be distinguished, certainly after the half-way mark in the million-year history of Solaria Binaria was past. Sky bodies were indistinguishable from Earth.
With passing time, the level of light would increase. In the beginning, the light is scattered and the sky is a dim white. As the plenum thinned electrically, the sky bodies would emerge as diffuse reddish patches. During this process, the sky would brighten and become more blue. Thus, as they emerge, Super Uranus and the Sun brighten and whiten while the sky becomes darker and bluer.
At a time related to the changes soon to be discussed, around fourteen thousand years ago, the Earth is suddenly peopled by humans, and one may investigate whether any memories remain of the plenum. There seem to be several legendary themes that correlate with our deductions about visibility.
Seemingly, aboriginal legends describe the heavens as hard, heavy, marble-like and luminous. Earliest humans were seeing a vault, a dome.[34] Probably in retrospect, to the heaven was ascribed the human qualities of a robe or covering, and, by extension, part of an anthropomorphic god. Thus, the Romans saw Coelus, the Chinese T'ien, the Hindus Varuna, and the Greeks Ouranos. Vail (1905/1972) presents ample evidence that day and night were uncertain and that the heavens were continuously translucent. When Hindu myth says that "the World was dark and asleep until the Great Demiurge appeared", we construe the word "dark" as non-bright relative to the sunlit sky that came later. Heaven and Earth were close together, were spouses, according to Greek and other legends. The global climate of the Earth in the plenum was wet; all is born from the insemination of the fecund Earth by the Sky, said some legends. There was so much moisture in the plenum that, although the ocean basins were not yet structured, the first proto-humans might confuse the waters of the firmament above with the earth-waters. In some legendary beginnings, a supreme deity had dispatched a diver to bring out Earth from the great primordial waters of chaos (Long, 1963).
The earliest condition was referred to as a chaos, not in the present sense of turbulent clouds, disorder, and disaster, but in the sense of lacking precise indicators of order, such as a cycle that would let time be measured. T'ien is the Chinese Heaven, universally present chaos without form. The gods who later give men time, such as Kronos, are specifically celebrated therefore (Plato).
Sky bodies were invisible. Legends of creation do not begin with a bright sky filled with beings, but speak of a time before this. When the first sky-body observations are reported, they are of falling bodies. The earliest fixed heavenly body in legend is not the Sun, the Moon, the planets, nor the stars, but Super Uranus, as will be described later on.
Nor was the radiant perimeter of the sac visible. It lay far beyond discernment as such, and was in any case practically indistinguishable from its luminescence. The electrical arc would have been visible directly only in its decaying days, being likewise sheathed from sight by the dense atmosphere of the tube. That the arc or axis appeared along with the sky bodies before its radiance expired is to be determined in the next chapter, where its composition and operation are discussed.
Notes on Chapter 5
32. The actual atmosphere does not have a constant density throughout its volume. If condensed to constant density it would become an 8-km column of gas at the atmospheric density found presently at the bottom of the atmosphere.
33. The retention of a more dense, thin atmospheric skin surrounding the Earth (and the other planets) would not affect the visibility of the binary components more adversely than does the Earth's atmosphere today.
34. Vail (1905) collected ancient expressions from diverse cultures testifying to perceptions of the heavens as "the Shining Whole", "the Brilliant All", the "firmament", "the vault", "Heaven the Concealer". Heaven was the Deity who came down crushingly on Earth, and the heavens are said to "roll away" and to open to discharge the Heavenly Hosts; great rivers are said to flow out of Heaven. In other places we read of the gods chopping and piercing holes in the celestial ceiling, of a Boreal Hole that is an "Island of Stars", a "star opening", "Mimer's Well". Heaven was perceived to become ever more impalpable and tenuous with time, so that not only the memory of it but also its names, adjectives and metaphors lost their strength of meaning. | http://www.grazian-archive.com/quantavolution/QuantaHTML/vol_05/solaria-binaria_05.htm | 13 |
All 20 major impacts occurred at approximately the same position on Jupiter relative to the center of the planet, but because the planet is rotating the impacts occurred at different points in the atmosphere. The figure at the top of the page shows the viewing geometry from Earth at the time of impact. The impacts took place at approximately 45 degrees south latitude and 6.5 degrees of longitude from the limb, just out of view from Earth (approximately 15 degrees from the dawn terminator). Jupiter has a rotation period of 9.84 hours, or a rotation rate of about 0.01 degrees/sec, so the impacts occurred on the far side of the planet, but the point of impact in the atmosphere rotated across the limb within about 11 minutes after the impact, and crossed the dawn terminator within about 25 minutes from the impact.
The comet particles were moving almost exactly from (Jovian) south to north at the time of the impact (actually at an angle of 83 degrees to Jupiter's equatorial plane), so they struck the planet at an angle of about 45 degrees to the surface. (The surface is defined for convenience as the Jovian cloud tops.) The impact velocity was Jovian escape velocity, 60 km/sec.
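The timing quoted above follows directly from the rotation rate; a quick sketch (in Python, using the figures from the text):

rotation_period_s = 9.84 * 3600
deg_per_s = 360.0 / rotation_period_s       # ~0.0102 deg/s

print(6.5 / deg_per_s / 60)     # ~10.7 min for the impact site to rotate past the limb
print(15.0 / deg_per_s / 60)    # ~24.6 min for it to cross the dawn terminator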
Hubble press release on SL-9 collision results (29 September 1994)
Jupiter was approximately 5.7 AU (860 million km) from Earth, so the time for light to travel to the Earth was about 48 minutes. Below is a list of the collision times of the fragments as seen from Earth, as calculated by Chodas and Yeomans. Their methods of estimating the times are given following the list.
Fragment   Date (July)   Prediction (HH:MM:SS)   Accepted Impact Time & 1-sigma error
A          16            20:00:40                20:11:00 (3 min)
B          17            02:54:13                02:50:00 (6 min)
C          17            07:02:14                07:12:00 (4 min)
D          17            11:47:00                11:54:00 (3 min)
E          17            15:05:31                15:11:00 (3 min)
F          18            00:29:21                00:33:00 (5 min)
G          18            07:28:32                07:32:00 (2 min)
H          18            19:25:53                19:31:59 (1 min)
J          19            02:40                   Missing since 12/93
K          19            10:18:32                10:21:00 (4 min)
L          19            22:08:53                22:16:48 (1 min)
M          20            05:45                   Missing since 7/93
N          20            10:20:02                10:31:00 (4 min)
P2         20            15:16:20                15:23:00 (7 min)
P1         20            16:30                   Missing since 3/94
Q2         20            19:47:11                19:44:00 (6 min)
Q1         20            20:04:09                20:12:00 (4 min)
R          21            05:28:50                05:33:00 (3 min)
S          21            15:12:49                15:15:00 (5 min)
T          21            18:03:45                18:10:00 (7 min)
U          21            21:48:30                21:55:00 (7 min)
V          22            04:16:53                04:22:00 (5 min)
W          22            07:59:45                08:05:30 (3 min)

Post-Crash Impact times for fragments of Comet Shoemaker-Levy 9 (Don Yeomans and Paul Chodas)

There are several sources of information that can be used to estimate the actual impact times of the major fragments of Comet Shoemaker-Levy 9. Astrometric data has been used to determine updated orbits and these orbits have been used to determine the predicted impact times when the computed position of the fragment enters the 1 bar (atmospheric pressure) level of Jupiter's atmosphere. Values for Jupiter's mean radius and obliquity were taken from Reference 1. Because the astrometric data nearest the impact times themselves are the most powerful for reducing the error of the predicted impact times, we were particularly fortunate in receiving recent astrometric data from the European Southern Observatory (Richard West, Olivier Hainaut and colleagues) that were reduced with respect to the Hipparcos star catalog. Extremely valuable sets of astrometric data were received from the U.S. Naval Observatory in Flagstaff (Alice and Dave Monet), McDonald Observatory (A. Whipple, P. Shelus and colleagues), Spacewatch (J. Scotti and colleagues) and a number of other observatories. The final pre-crash impact predictions were sent out on July 16, 1994. As we received astrometric data from Dave Jewitt and Dave Tholen on a few of the trailing fragments on July 19, a revised subset of the July 16, 1994 predictions was issued just before midnight on July 19, 1994. For completeness, the final predicted times of impact are given in the prediction column of the table above.

Impact times were determined by Andy Ingersoll and Reta Beebe using Hubble Space Telescope information on the location of the northwestern edge of the dark spots resulting from some of the impact events. The longitudes determined for these spots were compared to the longitude predictions given by Chodas and Yeomans and the differences in longitude were converted to time differences between actual and predicted impact times. The errors associated with this technique are estimated to be a few minutes. For some fragments there are more than one impact time estimate determined from different frames from the Space Telescope. We put the most weight upon those determinations made from the frame taken closest to the impact time. For fragments H and L, the Galileo PPR instrument observed the flash phase of the bolide entry so that for these two cases, we have impact time data that are accurate to +/- 5 seconds. However, since we do not know whether the PPR times correspond to the initial impact or to a subsequent flash, we have assigned uncertainties of +/- 1 minute. There is also a hint of a signal in the PPR data corresponding to the Q1 impact. These data were provided by Terry Martin.
By comparing the H and L impact times determined from the PPR data with the respective predicted impact times, we note that the PPR estimate is 6.1 minutes later than the ephemeris prediction for the H fragment and 7.9 minutes later than the prediction for the L fragment. The average of these two differences is 7.0 minutes and this average, when added to the predicted impact time, will give a rough determination of the true impact time.

For fragments B, D, K, Q1, and R, we have estimates of both the initial flash times and the subsequent first plume observation. We only considered those plume observations seen in the 2-3 micron region. The time differences between initial flash times and first plume observations were respectively +6, +5, +6, +6, and +8 minutes. An analysis by Andy Ingersoll and John Clarke suggests that for fragment G, there was an 8 minute lag between impact and the rise of the arc-shaped plume to where it could be seen in sunlight. In the absence of other information, the first plume observation minus an average of these values (+6.2 minutes) would give an estimated impact time. From the two fragment impacts observed by the GLL PPR, there is also evidence that the initial flash, as observed by ground-based telescopes, comes about 1 minute after the flash seen by the PPR instrument. The impact times are UTC times received at Earth (light time corrected).

In setting forth the accepted impact times given in the final column of the table, the priority of the various available techniques was as follows:

1. GLL PPR timing (Fragments H & L)
2. When definitive flash times are available (with subsequent plume observations noted about 6 minutes later), we generally took the impact time as one minute before the flash time, since the PPR instrument recorded its first signals about one minute before the reported flash times. (Fragments D, G, Q1, Q2, R, S, V, and W)
3. Estimates determined from HST longitudes
4. Estimate determined from first plume observation minus 6.2 minutes
5. Chodas/Yeomans prediction with empirical adjustment of +7 minutes

The impact times for fragments A, C, E, K, and N were determined by considering the ephemeris prediction error (about 7 minutes early for most fragments), the times determined from the HST longitude estimates (uncertainty = 3-4 minutes or more) and the times determined from plume observation times (impact time = plume observation time less 5-8 minutes). An effort was made to consider and balance these three factors and the uncertainties on the estimated impact times reflect our confidence level. For fragment F, the impact time was determined using the ephemeris prediction and the Lowell Observatory estimate of when the F spot was seen on the terminator. In the absence of any quantitative impact time observations for fragments P2, T, and U, only the ephemeris prediction was used (plus 7 minutes). The impact time estimate for fragment B is based upon observatory reports and is relatively uncertain because the impact time occurs before the ephemeris prediction and well before the estimate determined from the HST longitude estimate.

References: 1. Explanatory Supplement to the Astronomical Almanac. University Science Books, 1992, p. 404.
The rotational position of the Earth as seen from Jupiter at each of the impact times is shown in a plot by L. Wasserman of Lowell Observatory. Various models of this collision were hypothesized, and there was general agreement that a fragment would travel through the atmosphere to some depth and explode, creating a fireball which would rise back above the cloud tops. The explosion would also produce pressure waves in the atmosphere and "surface waves" at the cloud tops. The rising material may have consisted of a mixture of vaporized comet and Jovian atmosphere, but details about this, the depth of the explosion, the total amount of material ejected above the cloud tops, and almost all other effects of the impact are highly model dependent. Comparisons of the model results and the actual impact data are currently being done.
Other studies of the impact images are still ongoing. The following text was written before the impacts, but the information is still fairly accurate, and many of the results discussed await further detailed analysis of the data, so while the impacts themselves are all "past tense", the scientific results are still very much in the "future".
Reflections of the fireball off the Jovian satellites, the rings, and even the dust coma of the comet may be visible as a ~1% brightness increase. A particularly good opportunity to observe the effect may occur when certain satellites are eclipsed by Jupiter. There are a few impacts which may occur during these relatively infrequent (~4/day) and short-lived (1-4 hour) periods. The rings of Jupiter are always present as a source of reflection, a portion of the rings is always in Jupiter's shadow, and the rings are closer in to the impact site. Unfortunately, the rings are far less opaque and reflective than the satellites. The Jovian ring system consists of a main ring at 1.71 to 1.81 RJ (RJ = Jupiter radius = 71,400 km), a halo at 1.28 - 1.7 RJ, and a gossamer ring which extends from the surface to about 3 RJ. The main ring is a mixture of large and small particles, and the halo and gossamer rings consist of very small particles. There are also two known satellites, Metis and Adrastea, embedded in the rings.
The direct effects of the impact on the atmosphere of Jupiter are highly dependent on aspects of the collision, such as depth of explosion and amount of atmosphere displaced, which are not well constrained. Heating and transport of deeper atmospheric material is expected, which may have observable dynamical and chemical effects, especially in the ionosphere which could last from days to months. Minor comet components reacting with the atmosphere may also be observable and the collision is expected to cause traveling atmospheric waves at the cloud tops. There may also be production of vortices and hazes which could last on the order of weeks. Depending on the depth of explosion, the portion of energy directed downward, and the attenuation, "seismic" waves will be produced in the atmosphere which may be observable some distance from the impact point. These may tell something about the structure of the deeper atmosphere, and will have the effect of causing small motions in the troposphere and stratosphere which may be observable as temperature fluctuations on the order of millidegrees Kelvin.
The dust cloud surrounding the fragment string may also have observable effects on the Jovian system, possibly starting months before the arrival of the fragments. The total mass of dust is only about 10 million kg or less, but a significant amount of this dust will not intersect the planet and may affect the rings, satellites, and magnetosphere. The dust will bombard the Jovian satellites, and may produce more dust for the rings, resulting in a noticeable increase in ring brightness. The magnetopause is at 85-100 RJ, so most of the dust will travel through the magnetosphere. Possible effects include aurorae, radio discharges, lightning, changes in the Io torus and surface of Io, a decrease in synchrotron emissions, strong field aligned currents, and to a much lesser extent charging of the magnetosphere resulting in radiation. Again the effects of the dust are largely uncertain. | http://nssdc.gsfc.nasa.gov/planetary/impact.html | 13
New research concludes that instead of "edges," galaxies have long outskirts of dark matter that extend to nearby galaxies and that the intergalactic space is not empty but filled with dark matter. Researchers at the University of Tokyo’s Institute for the Physics and Mathematics of the Universe (IPMU) and Nagoya University used large-scale computer simulations and recent observational data of gravitational lensing to reveal how dark matter -- which makes up about 22 percent of the present-day universe -- is distributed around galaxies in a clumpy but organized manner.
Only recently, images of millions of galaxies from the Sloan Digital Sky Survey (SDSS) made it possible to derive an averaged mass distribution around the galaxies. Earlier, in 2010, an international research group led by Brice Menard, then at the University of Toronto, and Masataka Fukugita at IPMU used twenty-four million galaxy images from the SDSS and successfully detected the gravitational lensing effect caused by dark matter around the galaxies. From the result, they determined the projected matter density distribution over a distance of a hundred million light-years from the center of the galaxies.
Masataka Fukugita and Naoki Yoshida at IPMU, together with Shogo Masaki at Nagoya University, used very large computer simulations of cosmic structure formation to unfold various contributions to the projected matter distribution. They showed that galaxies have extended outskirts of dark matter, well beyond the region where stars exist.
The dark matter distribution is well organized but extended to intergalactic space, whereas luminous components such as stars are bounded within a finite region. More interestingly, the estimated total amount of dark matter in the outskirts of the galaxies explains the gap between the global cosmic mass density and that derived from galaxy number counting weighted by their masses.
A long-standing mystery of where the missing dark matter is has been solved by this research. There is no empty space in the universe. The intergalactic space is filled with dark matter.
The Daily Galaxy via Nagoya University
| http://www.dailygalaxy.com/my_weblog/2012/02/no-empty-space-in-the-universe-dark-matter-discovered-to-fill-intergalactic-space-.html | 13
14 | Solar sail experiment could test Einstein hypothesis
A physics professor has proposed using a solar sail to confirm a side-effect of Einstein's General Theory of Relativity.
Solar sails that use sunlight pressure instead of fuel to fly through space have long been touted by space exploration advocates, but the novel space travel method could also be tapped to settle an unproven theory by famed scientist Albert Einstein.
A gossamer solar sail would be a prime platform for an experiment that would test the so-called frame-dragging hypothesis in Einstein's General Theory of Relativity, said Roman Kezerashvili, a professor of physics at New York City College of Technology. He presented the experiment concept during the International Symposium on Solar Sailing held here July 21 at the college.
Frame dragging is an effect where massive spinning bodies distort the fabric of space time like a whirlpool in water. It was hypothesized by Einstein, but has never been experimentally proven.
According to the theory, a spinning solar sail flying within 4.6 million miles of the sun should rotate differently than it would further out depending on the frame-dragging effect's strength, Kezerashvili said. So a motion sensor attached to a solar sail could test whether the frame-dragging effect exists at all, he added.
Solar sail possibilities
Kezerashvili's experiment is just one of the many possible uses for solar sails presented by scientists and sail advocates during the three-day symposium.
Solar sails are reflective arrays just one-fifth the thickness of saran wrap, but can have an area comparable to half the size of a football field. They harness the pressure generated by the sun's light just as a cloth sail catches the wind, propelling a spacecraft as if it was a sailing ship.
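To get a feel for how gentle that push is, here is a back-of-the-envelope sketch; the sail area, spacecraft mass and perfect reflectivity are assumptions, not figures from the article.

solar_flux = 1361.0                         # W/m^2 at 1 AU
c = 2.998e8                                 # m/s
radiation_pressure = 2 * solar_flux / c     # ~9e-6 N/m^2 for an ideal reflector

sail_area = 2500.0                          # m^2, roughly half a football field (assumed)
craft_mass = 100.0                          # kg (assumed)
print(radiation_pressure * sail_area / craft_mass)   # ~2e-4 m/s^2: tiny, but continuous and fuel-free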
TV's Bill Nye the Science Guy, incoming president of the Planetary Society, said the different solar sail shapes represent a maturing of the technology, with new designs cropping up to suit each mission type.
"We won't see solar sails converge on a single shape," Nye said. "They will be like helicopters. Helicopters come in many different shapes, each for a different job."
New life for space sails
Nye, Kezerashvili and other solar sail advocates have a new optimism for the space propulsion concept thanks to Japan's Ikaros spacecraft, the first successful solar sail to fly through deep space.
The Japan Aerospace Exploration Agency (JAXA) launched the Ikaros solar sail in May along with a Venus orbiter called Akatsuki. Since then, Ikaros has successfully deployed itself, snapped photos of its gleaming silver sail and felt the first acceleration from sunlight.
Osamu Mori, IKAROS project leader at JAXA, said that later missions will combine solar sails with ion drives to deliver probes to Jupiter and the Trojan asteroids.
By hybridizing the solar sail propulsion with a more powerful engine like an ion drive, JAXA can enjoy the benefits of a solar sail, namely constant acceleration without any fuel, while still propelling a probe to a distant planet within a reasonable timeframe, Mori said.
And other solar efforts are gaining speed.
The Planetary Society has made several attempts to launch a solar sail in the past and has a new project, called Lightsail-1, slated to launch in early 2011. The mission uses a spare solar sail demonstrator left over from NASA's earlier Nanosail-D solar sail effort.
NASA and the European Space Agency have plans for separate solar sail missions that could be launched through 2015, representatives from both space agencies said during the meeting. | http://www.csmonitor.com/Science/2010/0728/Solar-sail-experiment-could-test-Einstein-hypothesis | 13
35 | Okay, so now that you have the basic idea of what a limit is, we’re going to develop your intuition a little.
Section 2.2: The Limit of a Function
This whole chapter is to show you two things: 1) How a limit works on a function in general, 2) how to deal with limits in practice.
So, let’s start off with the book’s definition of a limit
Verbally, this is “the limit of f(x) as x approaches a equals L.” What it means is that for a function f(x), the closer we take x to a certain value, a, the closer the limit will get to L.
In other words, for some function f(x) = x, as we take x to one value, f(x) approaches some other value. That’s the whole idea of a limit: As the input of a function approaches some value, the output of that function approaches some other value.
Let’s go through the book’s examples and develop some intuition.
Guess the value of lim (x→1) (x − 1)/(x² − 1).
So, take this mathematical expression and figure out what it approaches as x edges closer and closer to 1. You’ll note that you can’t just put in the 1 and get a value out. That’s what makes this a limit problem. If you wanted the limit as x approaches 1 of the function “f(x) = x+1,” it’d be easy. Just substitute 1 in for x and you get 2.
In the case of this example it doesn’t work, because the function approaches 0/0, which is undefined. So, as we did in previous problems we take x closer and closer to 1 without actually hitting 1 and see what the result seems to approach. In the book, they go as close as .9999 for x, and get a value of 0.500025 for f(x). This suggests that the value is 1/2.
In addition, they do values from the other direction, getting as close as 1.0001. At that x value, the f(x) value is .499975. So, it seems very likely that the limit’s value is 0.5. As we edge closer and closer to x=1, f(x) gets closer and closer to 1/2.
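If you want to reproduce the book's table yourself, here are a few lines of Python (my sketch, not the book's):

def f(x):
    return (x - 1) / (x**2 - 1)

for x in (0.9, 0.99, 0.9999, 1.0001, 1.01, 1.1):
    print(x, f(x))
# The outputs close in on 0.5 from both sides, even though f(1) itself is undefined.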
It’s important to note here that 0.5 is the limit, EVEN THOUGH there is not a defined value at x=1. For the graph, we’d put a little hole at x=1 because it’s undefined. However, the limit isn’t a measure of what happens when x equals something. It’s a measure of what happens as x approaches something.
Are you starting to get an intuitive feel for these? Let’s do another example.
Estimate the value of lim (t→0) (√(t² + 9) − 3)/t².
Once again, note that you can’t just pop the value t approaches and get a real answer. You’ll get 0/0.
See if you can solve this one yourself, then I’ll give you the answer.
Did you solve it?
Okay, how about now?
As you get closer and closer, you approach 0.1666666…, and the closer you get, the more sixes you can pop on. So, it seems very likely that you’re approaching 0.1 followed by infinity sixes, which is also known as 1/6.
Here, the book makes a side point about the pitfalls of computer-based calculations. For the system they’re using, at about t=0.00005 they start getting a value of 0 for f(x). This is a good example of why it’s always good to know the math “under the hood.” No computer is accurate to infinite decimal places. At a certain point, it’s just rounding. In this case, when the decimal (after you square t) gets to be on the order of billionths, it says “fuck it,” turns it into a 0, and gives you the wrong value.
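You can watch this happen yourself in double precision; here's a quick sketch (mine, not the book's). The exact point where the output collapses to 0 depends on the precision of the system, so it won't match the book's t = 0.00005 exactly.

import math

def g(t):
    return (math.sqrt(t*t + 9) - 3) / (t*t)

for t in (1e-2, 1e-4, 1e-6, 1e-8):
    print(t, g(t))
# The first values hug 1/6 = 0.1666..., then the result degrades and finally
# hits 0 once t*t is too small to change the stored value of 9 at all.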
So far, the limits we’ve done have been pretty intuitive. Let’s look at one that might surprise you:
Guess the value of lim (x→0) sin(x)/x.
What’s your intuitive guess? Maybe you think it’s 0, since you know the numerator goes to 0 as x goes to 0. But, that doesn’t work because the denominator does too. Maybe you think limits don’t make sense for periodic functions. Also wrong – remember, we’re approaching a particular point. It doesn’t matter how the function behaves elsewhere.
So, let’s go back to calculating the actual values. When we do, we find that the closer x gets to 0, the closer the function gets to 1.
This may seem like a small thing, but it’s a big deal in physical calculations. It means that, as the physicists say “for small values of sin(x), sin(x) = x.” That’s a big deal. Imagine you’ve got an ugly equation with the sine of some big pile of variables. Now, imagine you can remove the sin() part. A common physics example is pendular motion. Part of the calculation for how a pendulum moves involves the maximum angle of swing it achieves. If you can just use the value of the angle, rather than the sine of the angle, it massively simplifies things.
Of course, this is a bit of a rule of thumb, so it’s arbitrary as to exactly what “small” means. The version I was taught is that you’re good down to around 15 degrees (π/12 radians).
So, you can already see how limits are helping us out. Hopefully, you can also see how limits can give unintuitive results.
With that in mind, let’s test your intuition again!
What is the value of lim (x→0) sin(π/x)?
In this case, as x gets smaller and smaller, you're taking the sine of a larger and larger number. As you know, sine is a periodic function, meaning that it wobbles up and down as you walk down x. So, in this case, you have a problem. As x goes to 0, the function operates on bigger and bigger values. But, as those values get bigger and bigger, the operation (sine) stays between -1 and 1, wobbling back and forth forever. So, there is no particular value the function approaches as π/x gets bigger.
Therefore, we say that the limit does not exist.
One more test of your intuition!
Now, say you make a list of what happens as x goes to 0. You’ll note that it seems to be getting smaller and smaller, approaching 0. So, you might guess that the function approaches 0. BUT YOU’VE BEEN PLAYED FOR THE FOOL, MY FRIEND.
Look at the function again. We know for sure that the left part simply goes to 0 as x goes to zero. What about the right part? Well, as x goes to zero, cosine goes to 1. So the right part goes to 1/10,000. So, as x gets closer and closer to 0, our function should actually approach 1/10,000. That is to say, it gets very small indeed, but it does not reach 0. You can confirm this by graphing it and seeing if the function ever touches zero. It doesn't.
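For the curious, here is that table as a sketch in code. The book's exact function isn't reproduced above, so I'm assuming the standard textbook example f(x) = x³ + cos(5x)/10,000, which matches the description (left part goes to 0, right part goes to 1/10,000).

import math

def f(x):
    return x**3 + math.cos(5 * x) / 10000

for x in (0.5, 0.1, 0.01, 0.001):
    print(x, f(x))
# The values shrink toward zero at first, but then level off near 0.0001 --
# the true limit -- instead of ever reaching 0.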
The lesson here is this: You can’t just look at a list of numbers and assume they’ll lead you to the limit. That list of numbers is just an intuitive way to look at things. Getting 0 instead of 1/10,000 is pretty good. In fact, it’s only off by 1/10,000. But, in the right context, that might matter quite a bit. What if the equation predicts what percent of people will die of ultra-plague when I release it later this year? At 1/10,000, you’re looking at 600,000 dead people – all of them dead because you didn’t understand the concept of a limit.
All these examples may seem a bit different, but they’re getting at the same idea – limits are what f(x) approaches as x approaches something.
Unfortunately, it’s not always quite that simple…
The Heaviside Function is given by H(t). H(t) is 0 when t is less than zero, and 1 when t is greater than or equal to zero. That bastard got a whole function named after him that's simple enough to be in a pre-calc text.
Take a look at the link there, which shows a graph of the function. What do you think the limit is as you get closer to 0?
You’ll immediately see a problem. If you approach from one side, the limit is 0. If you approach from the other side, the limit is 1. So, it’s not clear that there’s a single limit. But, you’re not as lost as in example 4, where there was no limit at all. Here, you can at least say there seem to be 2 limits.
And, you’d be right to say that. In fact, many equations have more than 1 limit. That’ll set us up for the next blog, on One-sided Limits. | http://www.theweinerworks.com/?p=675 | 13 |
11 | How to use flow charts,
with James Manktelow & Amy Carlson.
Flow charts are easy-to-understand diagrams showing how steps in a process fit together. This makes them useful tools for communicating how processes work, and for clearly documenting how a particular job is done. Furthermore, the act of mapping a process out in flow chart format helps you clarify your understanding of the process, and helps you think about where the process can be improved.
A flow chart can therefore be used to:
Also, by conveying the information or processes in a step-by-step flow, you can then concentrate more intently on each individual step, without feeling overwhelmed by the bigger picture.
Most flow charts are made up of three main types of symbol:

- Elongated circles, which signify the start or end of a process.
- Rectangles, which show instructions or actions.
- Diamonds, which show decisions that must be made.
Within each symbol, write down what the symbol represents. This could be the start or finish of the process, the action to be taken, or the decision to be made.
Symbols are connected one to the other by arrows, showing the flow of the process.
To draw the flow chart, brainstorm process tasks, and list them in the order they occur. Ask questions such as "What really happens next in the process?" and "Does a decision need to be made before the next step?" or "What approvals are required before moving on to the next task?"
Start the flow chart by drawing the elongated circle shape, and labeling it "Start".
Then move to the first action or question, and draw a rectangle or diamond appropriately. Write the action or question down, and draw an arrow from the start symbol to this shape.
Work through your whole process, showing actions and decisions appropriately in the order they occur, and linking these together using arrows to show the flow of the process. Where a decision needs to be made, draw arrows leaving the decision diamond for each possible outcome, and label them with the outcome. And remember to show the end of the process using an elongated circle labeled "Finish".
Finally, challenge your flow chart. Work from step to step asking yourself if you have correctly represented the sequence of actions and decisions involved in the process. And then (if you're looking to improve the process) look at the steps identified and think about whether work is duplicated, whether other steps should be involved, and whether the right people are doing the right jobs.
The example below shows part of a simple flow chart which helps receptionists route incoming phone calls to the correct department in a company:
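Since the figure isn't reproduced here, a hypothetical version of that routing logic in code shows how the decision diamonds become branches (the departments and questions are made up for illustration):

def route_call(knows_extension, wants_sales, wants_support):
    # Start (elongated circle)
    if knows_extension:                        # decision diamond
        return "transfer to the extension"     # action rectangle
    if wants_sales:                            # decision diamond
        return "route to Sales"
    if wants_support:                          # decision diamond
        return "route to Support"
    return "take a message at reception"       # default action, then Finish

print(route_call(knows_extension=False, wants_sales=True, wants_support=False))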
Flow charts are simple diagrams that map out a process so that it can easily be communicated to other people.
To draw a flowchart, brainstorm the tasks and decisions made during a process, and write them down in order.
Then map these out in flow chart format using appropriate symbols for the start and end of a process, for actions to be taken and for decisions to be made.
| http://www.mindtools.com/pages/article/newTMC_97.htm | 13
16 | Whilst, in the world of Physics, the term Relativity is usually taken to refer to Einstein’s theories of Special and General Relativity, the concept can be more generally applied to the study of how the laws of physics vary or remain the same to different observers, particularly to observers travelling with different velocities. Galilean Relativity, formulated by Galileo Galilei, is the theory that was most widespread before Einstein, and formed part of the basis of Newtonian Mechanics.
Like Einstein, Galileo postulated that the laws of physics remain the same for observers in all inertial (non-accelerating) frames of reference. That is to say that for two bodies moving at different (but constant) velocities, it is impossible to make an absolute determination as to whether one is moving and the other stationary. All that can be determined is their relative velocity. The thought experiment that Galileo used was to consider a passenger in the hold of a ship on a calm sea, who cannot look outside. There is no experiment that he can perform within the hold that will allow him to determine whether the ship is moving or not. He formalized the idea as follows:
Any two observers moving at constant speed and direction with respect to one another will obtain the same results for all mechanical experiments.
Where Galilean Relativity primarily differs from Special Relativity is in the calculation of relative velocity. If, as observed by an observer in an inertial frame of reference, two bodies have velocities of v1 and v2, then their relative velocity v is:
v = v1 + v2
(Note that if the two bodies are travelling towards each other then one of the velocities will be negative; thus if v1 and v2 are taken simply as magnitudes, then the v = v1 – v2 more familiar in school is produced.)
Under Special Relativity, the relative velocity is calculated using the Lorentz Transformation, producing:

v = (v1 + v2) / (1 + v1v2/c²)
where c is the speed of light in a vacuum.
(This is expressed in a simplified non-vector form, assuming that the two velocities are in a single dimension. For vectors, the calculation v1v2 needs to be performed as a dot product.)
Note that at low velocities (where v1v2 is small) then v1v2/c² is close to zero and so the equation gives the same result as the Galilean formulation. Since c is such a large number (c² being 9 × 10¹⁶ m²s⁻²), the Galilean transformation is sufficiently accurate for the everyday situations which humans encounter, and thus the transformation is sometimes regarded as intuitive. Even for the speeds involved in modern space exploration, Lorentzian adjustments are small. It is only with extremely lightweight bodies (i.e. subatomic particles) that high enough speeds can be achieved, and thus devices such as particle accelerators need to take account of the differences.
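A short sketch makes the size of the correction concrete (one dimension, both bodies given the same speed; the speeds themselves are arbitrary examples):

c = 299792458.0   # m/s

def galilean(v1, v2):
    return v1 + v2

def lorentz(v1, v2):
    return (v1 + v2) / (1 + v1 * v2 / c**2)

for v in (30.0, 3.0e4, 0.5 * c, 0.9 * c):    # car, spacecraft, relativistic speeds
    print(v, galilean(v, v), lorentz(v, v))
# At everyday speeds the two rules agree to many decimal places; only near c
# does the Lorentz result differ noticeably, and it never exceeds c.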
Thus it is debatable whether the Galilean transformation is actually ‘wrong’ since it is still of practical use in many situations – Special Relativity is a refinement under extreme conditions. Similarly, Newtonian Gravitation is perfectly adequate in many situations – General Relativity is a broadly equivalent refinement.
Many liberals (and others) see an analogy between Einsteinian Relativity and Moral Relativism, arguing that if there is no absolute frame of reference for velocity, then there is no absolute standard for moral behaviour. However, despite the fact that Galilean Relativity maintains exactly the same tenet, the association with Galilean Relativity is not made. This can only be put down to an ignorance of the history of science.
- http://www.wolframscience.com/reference/notes/1041c
- http://www.scribd.com/doc/51240234/10/is-for-Salvatius%E2%80%99-Ship-Sailing-along-its-Own-Space-Time-Line
- http://physics.ucr.edu/~wudka/Physics7/Notes_www/node47.html
- http://psi.phys.wits.ac.za/teaching/Connell/phys284/2005/lecture-01/lecture_01/node5.html
- http://hyperphysics.phy-astr.gsu.edu/hbase/relativ/ltrans.html
- See, e.g., historian Paul Johnson's book about the 20th century, and the article written by liberal law professor Laurence Tribe as allegedly assisted by Barack Obama. | http://www.conservapedia.com/Galilean_Relativity | 13
14 | All about... Particle detection
With CERN’s Large Hadron Collider having reached a data milestone and amid reports of the possible discovery of previously unknown particles by the Tevatron, we take a look at the physics of particle detection.
In June, the total data accumulated by the Large Hadron Collider at CERN reached one “inverse femtobarn” of data, equivalent to 70 trillion particle collisions.
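Roughly where the "70 trillion" figure comes from: the number of collisions is the integrated luminosity multiplied by the collision cross-section. A quick sketch, assuming an inelastic proton-proton cross-section of about 70 millibarn (an assumption, not a number from this article):

luminosity_inv_fb = 1.0            # one inverse femtobarn of data
sigma_mb = 70.0                    # assumed inelastic pp cross-section, millibarn
sigma_fb = sigma_mb * 1e12         # 1 mb = 10^12 fb

print(luminosity_inv_fb * sigma_fb)   # ~7e13, i.e. about 70 trillion collisions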
Meanwhile, the head of the Tevatron particle accelerator in the US appointed a committee to evaluate the evidence for whether a completely new particle has been discovered.
But how does particle detection work?
All particle detectors, even the early bubble chambers which used ionisation and vapour trails to detect charged particles, work by capturing data from particle collisions. But the volume of data being produced by modern high-energy experiments means that detection methods much more sophisticated than the photography used with the bubble chambers are needed.
In the LHC
The Large Hadron Collider has four main particle detectors: ALICE, ATLAS, CMS and LHCb. These detectors are showered with the particles produced when the two beams of protons circulating around the LHC collide.
Each layer of a detector has a specific function, but the main components are semiconductor detectors which measure charged particles and calorimeters which measure particle energy.
The semiconductor detectors use materials such as silicon to create diodes - components that conduct electrical current in only one direction. Charged particles passing though the large number of strips of silicon placed around the proton beam collision point create a current that can be tracked and measured.
The position where particles originate is also important – if one appears deep within the detector it is likely to have been produced by the decay of another particle produced earlier, possibly within the collision itself.
Calorimeters are calibrated to be either electromagnetic or hadronic. The former will detect particles such as electrons and photons, while the latter will pick up protons and neutrons. Particles entering a calorimeter are absorbed and a particle shower is created by cascades of interactions.
It is the energy deposited into the calorimeter from these interactions that is measured. Stacking calorimeters allows physicists to build a complete picture of the direction in which the particles travelled as well as the energy deposited, or determine the shape of the particle shower produced.
ATLAS is also designed to detect muons, particles much like electrons but 200 times more massive, which pass right through the other detection equipment. A muon spectrometer surrounds the calorimeter, and functions in a similar way to the inner silicon detector.
Left: A representation of a detection event in the CMS detector.
Analysing the data
With so many particle collisions, accelerators such as the LHC generate a huge amount of data, which take a great deal of computing power to capture and analyse. To achieve this, CERN developed the LHC Computing Grid.
The Grid is a tiered network in which data are first processed by the computers at CERN, then sent on to regional sites for further processing, and finally sent to institutions all around the world to be analysed.
The results of experiments can be compared with those predicted by current theories of particle physics to look for any differences. At the LHC, particle physicists are hoping that the collisions will produce a Higgs boson, the as-yet-theoretical particle thought to be responsible for the existence of mass.
At the Tevatron, where they collide protons and antiprotons, the data that one team believed showed an entirely new type of particle appeared as a bump in a graph of experimental data that was not present in the theoretical predictions.
However this was only seen in data from one of the accelerator’s detectors and wasn’t present in the other against which it was compared. It is now believed to have been a phantom signal.
Physicists will have to keep waiting for the first sight of the Higgs – or other new particles.
This article appeared originally on iop.org
| http://www.physics.org/article-questions.asp?id=73 | 13
15 | Conversion Factors and Functions
Earlier we showed how unity factors can be used to express quantities in different units of the same parameter. For example, a density can be expressed in g/cm³ or lb/ft³. Now we will see how conversion factors representing mathematical functions, like D = m/V, can be used to transform quantities into different parameters. For example, what is the volume of a given mass of gold? Unity factors and conversion factors are conceptually different, and we'll see that the "dimensional analysis" we develop for unit conversion problems must be used with care in the case of functions.
When we are referring to the same object or sample of material, it is often useful to be able to convert one parameter into another. For example, in our discussion of fossil-fuel reserves we find that 318 Pg (3.18 × 10¹⁷ g) of coal, 28.6 km³ (2.68 × 10¹⁰ m³) of petroleum, and 2.83 × 10³ km³ (2.83 × 10¹³ m³) of natural gas (measured at normal atmospheric pressure and 15°C) are available. But none of these quantities tells us what we really want to know ― how much heat energy could be released by burning each of these reserves? Only by converting the mass of coal and the volumes of petroleum and natural gas into their equivalent energies can we make a valid comparison. When this is done, we find that the coal could release 7.2 × 10²¹ J, the petroleum 1.1 × 10²¹ J, and the gas 1.1 × 10²¹ J of heat energy. Thus the reserves of coal are more than three times those of the other two fuels combined. It is for this reason that more attention is being paid to the development of new ways for using coal resources than to oil or gas. Conversion of one kind of quantity into another is usually done with what can be called a conversion factor, but the conversion factor is based on a mathematical function (D = m / V) or mathematical equation that relates parameters. Since we have not yet discussed energy or the units (joules) in which it is measured, an example involving the more familiar quantities mass and volume will be used to illustrate the way conversion factors are employed. The same principles apply to finding how much energy would be released by burning a fuel, and that problem will be encountered later.
Suppose we have a rectangular solid sample of gold which measures 3.04 cm × 8.14 cm × 17.3 cm. We can easily calculate that its volume is 428 cm³ but how much is it worth? The price of gold is about 5 dollars per gram, and so we need to know the mass rather than the volume. It is unlikely that we would have available a scale or balance which could weigh accurately such a large, heavy sample, and so we would have to determine the mass of gold equivalent to a volume of 428 cm³. This can be done by manipulating the equation which defines density, ρ = m / V. If we multiply both sides by V, we obtain
m = V × ρ or mass = volume × density (1)
Taking the density of gold (19.3 g/cm³) from a reference table, we can now calculate
m = V × ρ = 428 cm³ × 19.3 g/cm³ = 8.26 × 10³ g
This is more than 18 lb of gold. At the price quoted above, it would be worth over 40 000 dollars!
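The arithmetic above is easy to check with a short program. The following C++ sketch is not part of the original text; it simply redoes the gold calculation, treating the density of gold (taken here as 19.3 g/cm³) and the quoted price of 5 dollars per gram as given inputs, with variable names of my own choosing.

#include <iostream>

int main()
{
    // Dimensions of the gold bar in centimeters (from the example).
    double length = 17.3, width = 8.14, height = 3.04;

    // Assumed inputs: density of gold and the price quoted in the text.
    double density = 19.3;       // g/cm^3, the conversion factor from volume to mass
    double pricePerGram = 5.0;   // dollars per gram

    double volume = length * width * height;   // cm^3
    double mass = volume * density;            // m = V * rho, Eq. (1)
    double value = mass * pricePerGram;        // dollars

    std::cout << "Volume = " << volume << " cm^3\n";
    std::cout << "Mass   = " << mass << " g\n";
    std::cout << "Value  = " << value << " dollars\n";
    return 0;
}

Running it reproduces the figures above: a volume of about 428 cm³, a mass of about 8.3 × 10³ g (more than 18 lb), and a value of roughly 41 000 dollars.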
The formula which defines density can also be used to convert the mass of a sample to the corresponding volume. If both sides of Eq. (1) are multiplied by 1/ρ, we have
V = m × (1/ρ) or volume = mass × (1/density) (2)
Notice that we used the mathematical function D = m/V to convert parameters from mass to volume or vice versa in these examples. How does this differ from the use of unity factors to change units of one parameter?
An Important Caveat
A mistake sometimes made by beginning students is to confuse density with concentration, which also may have units of g/cm³. By dimensional analysis, this looks perfectly fine. To see the error, we must understand the meaning of the function
C = m/V
In this case, m is the mass of solute, and V refers to the volume of a solution, which contains both a solute and a solvent.
Given that the concentration of gold in an alloy is 10 g of gold per 100 cm³ of alloy, we see that it is wrong (although dimensionally correct as far as conversion factors go) to calculate the volume of gold in 20 g of the alloy as follows:
20 g × (100 cm³ / 10 g) = 200 cm³
It is only possible to calculate the volume of gold if the density of the alloy is known, so that the volume of alloy represented by the 20 g could be calculated. This volume multiplied by the concentration gives the mass of gold, which then can be converted to a volume with the density function.
The bottom line is that using a simple unit cancellation method does not always lead to the expected results, unless the mathematical function on which the conversion factor is based is fully understood.
A solution of ethanol with a concentration of 0.1754 g/cm³ has a density of 0.96923 g/cm³ and a freezing point of −9 °F. What is the volume of ethanol (D = 0.78522 g/cm³ at 25 °C) in 100 g of the solution?
The volume of 100 g of solution is
V = m/D = 100 g / 0.96923 g cm⁻³ = 103.17 cm³.
The mass of ethanol in this volume is
m = V × C = 103.17 cm³ × 0.1754 g/cm³ = 18.097 g.
The volume of ethanol = m/D = 18.097 g / 0.78522 g cm⁻³ = 23.05 cm³.
Note that we cannot calculate the volume of ethanol by simply stringing the given quantities together so that their units cancel; one such chain of factors gives 123.4 cm³, which is wrong even though the equation that produces it is dimensionally correct.
Note that obtaining this result required knowing when to use the function C = m/V and when to use the function D = m/V as conversion factors. Pure dimensional analysis could not reliably give the answer, since both functions have the same dimensions.
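As a check on this reasoning, here is a short C++ sketch (not part of the original text; the variable names are mine) that works through the ethanol example using the two functions explicitly: D = m/V for the solution, C = m/V for the solute, and D = m/V for pure ethanol.

#include <iostream>

int main()
{
    // Values quoted in the example.
    double massSolution    = 100.0;    // g of solution
    double densitySolution = 0.96923;  // g/cm^3, D = m/V for the solution
    double concEthanol     = 0.1754;   // g/cm^3, C = m/V (solute mass per solution volume)
    double densityEthanol  = 0.78522;  // g/cm^3, D = m/V for pure ethanol

    // Step 1: volume of solution from its density.
    double volSolution = massSolution / densitySolution;   // about 103.17 cm^3

    // Step 2: mass of ethanol from the concentration.
    double massEthanol = volSolution * concEthanol;        // about 18.10 g

    // Step 3: volume of pure ethanol from its density.
    double volEthanol = massEthanol / densityEthanol;      // about 23.05 cm^3

    std::cout << "Volume of solution: " << volSolution << " cm^3\n";
    std::cout << "Mass of ethanol:    " << massEthanol << " g\n";
    std::cout << "Volume of ethanol:  " << volEthanol  << " cm^3\n";
    return 0;
}

The point of spelling out the three steps is that the program, like the worked solution, has to know which volume each function refers to; the units alone would not catch a mix-up between the two.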
EXAMPLE 2 Find the volume occupied by a 4.73-g sample of benzene.
Solution The density of benzene is 0.880 g cm⁻³. Using Eq. (2),
V = m × (1/ρ) = 4.73 g × (1 cm³ / 0.880 g) = 5.38 cm³
(Note that taking the reciprocal of 0.880 g cm⁻³ simply inverts the fraction ― 1 cm³ goes on top, and 0.880 g goes on the bottom.)
The two calculations just done show that density is a conversion factor which changes volume to mass, and the reciprocal of density is a conversion factor changing mass into volume. This can be done because the mathematical formula defining density relates it to mass and volume. Algebraic manipulation of this formula gave us expressions for mass and for volume [Eqs. (1) and (2)], and we used them to solve our problems. If we understand the function D = m/V and heed the caveat above, we can devise appropriate conversion factors by unit cancellation, as the following example shows:
EXAMPLE 3 A student weighs 98.0 g of mercury. If the density of mercury is 13.6 g/cm3, what volume does the sample occupy?
We know that volume is related to mass through density.
V = m × conversion factor
Since the mass is in grams, we need to get rid of these units and replace them with volume units. This can be done if the reciprocal of the density is used as a conversion factor. This puts grams in the denominator so that these units cancel:
V = 98.0 g × (1 cm³ / 13.6 g) = 7.21 cm³
If we had multiplied by the density instead of its reciprocal, the units of the result would immediately show our error:
98.0 g × 13.6 g/cm³ = 1.33 × 10³ g²/cm³
It is clear that square grams per cubic centimeter are not the units we want.
Using a conversion factor is very similar to using a unity factor — we know the conversion factor is correct when units cancel appropriately. A conversion factor is not unity, however. Rather it is a physical quantity (or the reciprocal of a physical quantity) which is related to the two other quantities we are interconverting. The conversion factor works because of the relationship [i.e., the definition of density expressed in Eqs. (1) and (2) relates density, mass, and volume], not because it has a value of one. Once we have established that a relationship exists, it is no longer necessary to memorize a mathematical formula. The units tell us whether to use the conversion factor or its reciprocal. Without such a relationship, however, mere cancellation of units does not guarantee that we are doing the right thing.
A simple way to remember relationships among quantities and conversion factors is a “road map” of the type shown below:
volume ↔ mass (connected by the conversion factor, density)
This indicates that the mass of a particular sample of matter is related to its volume (and the volume to its mass) through the conversion factor, density. The double arrow indicates that a conversion may be made in either direction, provided the units of the conversion factor cancel those of the quantity which was known initially. In general the road map can be written
known quantity ↔ desired quantity (connected by an appropriate conversion factor)
As we come to more complicated problems, where several steps are required to obtain a final result, such road maps will become more useful in charting a path to the solution.
EXAMPLE 4 Black ironwood has a density of 67.24 lb/ft³. If you had a sample whose volume was 47.3 mL, how many grams would it weigh? (1 lb = 454 g; 1 ft = 30.5 cm).
Solution The road map
volume ↔ mass (conversion factor: density)
tells us that the mass of the sample may be obtained from its volume using the conversion factor, density. Since milliliters and cubic centimeters are the same, we use SI units for our calculation:
Mass = m = 47.3 cm³ × 67.24 lb/ft³
Since the volume units are different, we need a unity factor to get them to cancel:
m = 47.3 cm³ × 67.24 lb/ft³ × (1 ft / 30.5 cm)³ = 0.112 lb
We now have the mass in pounds, but we want it in grams, so another unity factor is needed:
m = 0.112 lb × (454 g / 1 lb) = 50.9 g
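The chain of factors in Example 4 can also be written out in code. The C++ sketch below is not from the original text; it simply strings the density and the two unity factors together in the same order as the worked solution, using the conversion values quoted in the problem.

#include <iostream>

int main()
{
    // Given quantities from Example 4.
    double volume_cm3     = 47.3;    // sample volume (1 mL = 1 cm^3)
    double density_lb_ft3 = 67.24;   // conversion factor: volume -> mass

    // Unity factors quoted in the problem statement.
    double cm_per_ft = 30.5;         // 1 ft = 30.5 cm
    double g_per_lb  = 454.0;        // 1 lb = 454 g

    // mass = V * density * (1 ft / 30.5 cm)^3 * (454 g / 1 lb)
    double cm3_per_ft3 = cm_per_ft * cm_per_ft * cm_per_ft;
    double mass_lb = volume_cm3 * density_lb_ft3 / cm3_per_ft3;
    double mass_g  = mass_lb * g_per_lb;

    std::cout << "Mass = " << mass_lb << " lb = " << mass_g << " g\n";
    return 0;
}

The output, about 0.112 lb or 50.9 g, matches the road-map calculation above.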
In subsequent chapters we will establish a number of relationships among physical quantities. Formulas will be given which define these relationships, but we do not advocate slavish memorization and manipulation of those formulas. Instead we recommend that you remember that a relationship exists, perhaps in terms of a road map, and then adjust the quantities involved so that the units cancel appropriately. Such an approach has the advantage that you can solve a wide variety of problems by using the same technique. | http://chempaths.chemeddl.org/services/chempaths/?q=book/General%20Chemistry%20Textbook/Introduction%3A%20The%20Ambit%20of%20Chemistry/1183/conversion-factors-and-fun | 13 |
13 | How did the solar system evolve to its current diverse state?
Many of the other solar systems have massive Jupiter-like planets close to their star, closer even than Mercury is to our Sun. Many scientists now believe that these gas giants could not have formed there. Rather, they must have begun out where our Jupiter is and moved inward, scattering the smaller planets with their powerful gravity as they went. Why is it that our Jupiter and Saturn did not migrate inward? We are trying to learn more about our outer solar system by sending probes there. We sent Galileo to Jupiter, Cassini is at Saturn right now, and New Horizons is on its way to Pluto even as you read this.
Planets also change even if they don't move closer to the Sun. For example, Mars once had water on the surface. We know that thanks to our two rovers on Mars and a spacecraft in orbit. We recently launched Phoenix to explore near the pole and sniff the dirt for organic molecules. By studying Mars we will learn more about how rocky planets can change. If other planets change, then ours can change too.
The Cassini Mission is in the midst of a detailed study of Saturn, its rings, its magnetosphere, its icy satellites, and its moon Titan. Cassini also delivered a probe (called Huygens, provided by the European Space Agency) to Titan, and ...
Launched: October 15, 1997 | Status: Operating
The Dawn mission intends to orbit Vesta and Ceres, two of the largest asteroids in the solar system. According to current theories, the very different properties of Vesta and Ceres are the result of the asteroids being formed and evolving ...
Launched: September 27, 2007 | Status: Operating
The Deep Impact mission was selected as a Discovery mission in 1999. The spacecraft was launched aboard a Delta II rocket on January 12, 2005 and left Earth’s orbit toward Comet Tempel 1. The spacecraft consists of two main sections, ...
Launched: January 12, 2005 | Status: Past
Lunar Atmosphere and Dust Environment Explorer (LADEE) is a NASA mission that will orbit the Moon; its main objective is to characterize the lunar atmosphere and dust environment. This mission is part of SMD's Robotic Lunar Exploration program.
Launch: August 2013 | Status: In development
Mars Science Laboratory
NASA proposes to develop and to launch a roving long-range, long-duration science laboratory that will be a major leap in surface measurements and pave the way for a future sample return mission. The mission will also demonstrate the technology for ...
Launched: November 26, 2011 | Status: Operating
The New Horizons Pluto-Kuiper Belt Mission will help us understand worlds at the edge of our solar system, by making the first reconnaissance of Pluto and Charon. The mission will then visit one or more Kuiper Belt Objects, in the ...
Launched: January 19, 2006 | Status: Operating
Stardust is the first U.S. space mission dedicated solely to the exploration of a comet, and the first robotic mission designed to return extraterrestrial material from outside the orbit of the Moon. This mission is part of SMD's Discovery Program. ...
Launched: February 7, 1999 | Status: Past | http://science.nasa.gov/planetary-science/big-questions/how-did-the-solar-system-evolve-to-its-current-diverse-state/?group=s-z | 13 |
13 | Complex life seems to have appeared on Earth 9.6 Gyr (9.6 billion years) after the Big Bang. New theories from the Sapienza University of Rome suggest, however, that these life forms may have been derived from earlier proto-life forms which emerged within a few billion years after the Big Bang, at the onset of dark energy domination in the universe, coupled with the rapid star formation and supernovas that occurred in that period. In short, the increase of dark energy, coupled with the stellar synthesis of the elements necessary for life, could be a key to the emergence of life in the universe.
The cosmic microwave background radiation provides a further indication of the temperature of the Universe as it is today versus what it must have been following the Big Bang. It is thought that as matter cooled, it underwent phase transitions, which triggered or allowed the condensed matter in the Universe to form multiple complex phases. The transition from non-living to living matter is related to transitions that occur in a temperature range between a maximum of about 390 K and a minimum of about 240 K.
Not all extraterrestrial life in the universe need be like the life of Earth, which depends upon and requires the synthesis of 23 different elements, most of which are produced during stellar nucleosynthesis or may be produced and then dispersed at the end of the lifetime of a star, in a supernova explosion. It may have different genetic codes, no genes at all, or be based on silicon, ammonia, or sulfuric acid.
It is unknown when the first stars formed in the Big Bang model. Based on computer simulations, the first protostars may have been created between 200 million and 400 million years after the Big Bang and are believed to have undergone supernova after a few million years.
According to various models, these first stars were the seeds for later stars, such that by 10 to 12 billion years ago the universe was bright with stars, many of which also underwent supernova, spreading the seeds not just for additional stars but for life.
A growing body of evidence now suggests that the first proto-genes and the first forms of proto-life may have been fashioned around 10 billion years ago. Many scientists also believe that these first proto-life forms or actual living cells were spread from star system to star system and from planet to planet via mechanisms of panspermia.
This association raises the question of whether an increase of dark energy in the universe at that time could have an influence on the emergence of life. Dark energy is related to the whole universe and can affect multiscale phenomena ranging from microscale to nanoscale, so why not life?
Dark energy, coupled with the nuclear synthesis of all the necessary elements for life, may have played an unknown but significant role in the origin and stability of living biological systems and may have contributed to the origins of life.
Casey Kazan via http://news.sciencemag.org
The Emergence of Life in the Universe at the Epoch of Dark Energy Domination Nicola Poccia, Ph.D., Alessandro Ricci, Ph.D., Antonio Bianconi Ph.D., Department of Physics, Sapienza University of Rome, P. le A. Moro 2, 00185 Roma, Italy. Journal of Cosmology, 5, 875-882.
« "Spacetime has No Time Dimension" -- Radical Theory Claims that Time is Not the 4th Dimension (Today's Most Popular) | Main | EcoAlert: NASA Sees Fewer Big Asteroids Endangering Earth (However...) » | http://www.dailygalaxy.com/my_weblog/2011/10/-dark-energy-did-its-onset-trigger-life-in-the-universe-dark-energy-did-its-onset-trigger-life-in-th.html | 13 |
18 | Your program should accept two values from the user (the angle x and the value of n) and then should compute and print the value of sin(x).
To make the program, do the following tasks.
Write two functions, i.e., a function to calculate a factorial and a function to calculate a power, having the following prototypes.
double Factorial (int n); //Factorial function prototype
double Power(double x, int y); //Power function prototype
Use these functions in your main function to compute the series.
Till now, I've written the following program but I am not able to get the right answer.
using namespace std;
double fact (int f); //declaration of factorial function
double power(double x, int y); //declaration of power function
int x=0; //value of x in the series
float sum_pos = 0;
cout << "Enter the value of x: " << endl;
cin >> x;
1) you have "x" declared as int instead of double in main()
and the "p" variable declared as int instead of double in power()
2) You should use "double" instead of "float"
3) fyi : "x" is in radians, not degrees
4) There is no need to go all the way to "i=1000" ... Write a small program that just prints out fact(i) and you will see a problem (see the sketch after this list).
5) The equation that you are using is best suited for x values near zero.
You can use trig identities to make the value of "x" smaller before
calculating the sin (example : sin(x + 2*pi) = sin(x)... So there is no need
to use any value greater than 2*pi. And actually, that range can be reduced
significantly using other trig identities).
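To make point 4 concrete, a quick check of the kind being suggested could look like the few lines below (a sketch of this reply's idea, not code from the thread; the loop range is chosen only to show where the factorial blows up).

#include <iostream>

// Same idea as a Factorial function returning a double.
double fact(int n)
{
    double result = 1.0;
    for (int i = 2; i <= n; ++i)
        result *= i;
    return result;
}

int main()
{
    // DBL_MAX is about 1.8e308, and 171! already exceeds it,
    // so the values turn into inf long before i reaches 1000.
    for (int i = 160; i <= 180; i += 5)
        std::cout << "fact(" << i << ") = " << fact(i) << "\n";
    return 0;
}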
Last edited by Philip Nicoletti; November 8th, 2012 at 07:06 AM.
Also note that the above formula calculates sin(x) where x is in radians, not in degrees.
if you are trying to manually check if a certain value matches what you get on a calculator, then you need to set your calculator to radians.
a 30° angle would be Pi/6 radians so your function should be returning sin( pi/6 ) or sin( 0.5235987755983 ) as being (more or less) equal to 0.5
But as philip posted above, your real problem is in trying to make your program "too good". You're causing a problem by making the loop go to 1000.
What type of course is this for? If a numerical analysis course, the error from truncating a Taylor series should be given in the text book.
At any rate ... you have an alternating series whose terms are monotonically decreasing in absolute value, so the absolute value of the error when stopping the series at a certain term is less than or equal to the first dropped term.
In practical terms, choose a tolerance (say 10e-6), and stop iterating when the term that you calculate is less than or equal to the tolerance.
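A sketch of what this advice can look like in code (mine, not the original poster's assignment solution, which requires separate Factorial and Power functions): each term is built from the previous one, the loop stops once a term drops below the chosen tolerance, and the angle is first reduced with sin(x + 2*pi) = sin(x) as suggested in the earlier reply. Building terms incrementally also avoids the huge factorials and powers that overflow a double.

#include <cmath>
#include <iostream>

// Approximate sin(x) with the Taylor series x - x^3/3! + x^5/5! - ...
double taylorSin(double x, double tolerance = 1.0e-10)
{
    const double pi = 3.14159265358979323846;

    // Range reduction: sin(x + 2*pi) = sin(x), so work with a small angle.
    x = std::fmod(x, 2.0 * pi);

    double term = x;   // first term of the series
    double sum  = x;
    for (int n = 1; std::fabs(term) > tolerance; ++n)
    {
        // next term = previous term * (-x^2) / ((2n) * (2n + 1))
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum  += term;
    }
    return sum;
}

int main()
{
    double x = 0.0;
    std::cout << "Enter the value of x in radians: ";
    std::cin >> x;

    std::cout << "Series approximation:    " << taylorSin(x) << "\n";
    std::cout << "std::sin for comparison: " << std::sin(x) << "\n";
    return 0;
}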
Computers deal with numerical types that have operational constraints.
A double, for instance, has an upper bound on the largest number it can contain. This upper bound has a corresponding define, DBL_MAX, which is about 1.797E+308. If you are making calculations and any (intermediate) value you calculate exceeds the operational constraints you get overflow (or underflow, or one of several other undesired effects). Once this happens, all bets are off concerning the end result; you could get results that make absolutely no sense at all.
Dealing with floating point numbers correctly poses additional problems. Floating point values are NOT exact values, they are approximations. Each value has a slight error to it. Depending on how you deal with the numbers you can continuously accumulate more error into your intermediate results (or worse, scale/multiply it). This too can throw your results off whack.
For Taylor series specifically there are ways to calculate how many iterations you need to achieve the desired precision. And when dealing with floating point, there are ways to reduce error accumulation. And there are ways to figure out if floating point error would exceed the desired precision, meaning the calculation isn't possible with the current algorithm or the used floating point type.
I doubt all the above is relevant for you; if this is a simple course, then see if you have an iteration value in the text book. If that is not there, you will need to find an iteration count such that none of the intermediates overflow. In short, find an i where power(i) and fact(i) never exceed 1.797E+308 (to avoid precision loss, you probably do NOT want the highest possible i that meets this requirement, but an i that's a bit lower) | http://forums.codeguru.com/showthread.php?529137-Write-a-C-program-to-compute-Sin(x)&p=2091295&mode=linear | 13 |
16 | Generally, a spacecraft is launched with huge rockets into a certain trajectory, or path, and it continues on that path. Often the smaller rockets that are attached to the spacecraft are not large enough to change that initial push significantly. Spacecraft traveling in one direction have a hard time turning around and going in another. Sometimes a spacecraft can use the gravity of a planet or moon for an orbit or swingby and change its direction that way.
Other than that, the course is not changed in large ways, just adjusted or corrected. If a course correction is needed, the spacecraft will fire small attitude rockets to change the direction it is pointing. After that, the main thruster can give the rocket a push in the new direction. In order to do this, the location and heading of the spacecraft must be known perfectly.
Generally, the spacecraft takes readings and sends them down to ground control, and then ground control radios up a command sequence about how to change the course. On spacecraft like DS1, though, AutoNav and Remote Agent can do some of these tasks automatically.
Ask any question below to learn about how spacecraft change course.
Why does DS1 get off course?
What is a course correction?
What is AutoNav?
How do we know where a spacecraft is?
Do small errors in space navigation matter?
What is a remote agent?
How do scientists know what the path of an object in space will be?
How often does DS1 communicate with a ground station?
How does NASA communicate with spacecraft?
How does NASA run space missions?
How does DS1 do a course correction?
When does DS1 do a course correction?
What would happen if DS1 collided with an object in space?
How do spacecraft use an orbit to move from planet to planet?
What is a flyby?
What is attitude control?
How is NASA overseeing the DS1 mission? | http://www.qrg.northwestern.edu/projects/vss/docs/Navigation/zoom-change-course.html | 13 |
12 | Let Sk be the transformation mapping (x, y, z) onto (kx, ky, kz).
k is variously called the magnitude, scale factor, size change factor, or even ratio of similitude. It can be any value but zero. However, k is usually taken as positive. Negative values result in a 180° rotation of the figure. Unless |k| = 1, a similarity transformation is not an isometry. The isometries (reflection, translation, rotation, and glide reflection) are only one class of transformations. Another important class of transformations are those which preserve shape without necessarily preserving size. A formal definition follows.
A transformation is a similarity transformation if and only if
it is the composition of size changes and reflections.
Remember how isometries preserved Angle measure, Betweenness, Collinearity, and Distance (ABCD Theorem). Angle measure, Betweenness, and Collinearity are still preserved by similarity. Thus only Distance is not preserved. However, distance is scaled by the magnitude k. That is, the distance between any points on the new figure is now k times the original distance. If 0 < k < 1, our resulting image is smaller than the original. This is called a contraction. If k > 1, our resulting image is larger than the original. This is called an expansion, a dilation, or perhaps a dilatation. If k = 1, we have the identity transformation and our resulting image is the same size as the original. A classic example would be enlargements of a picture or letters in different font sizes: zZZZ.
Figures F and G are similar (notation: F ~ G) if and only if some composition of size changes and reflections maps one onto the other. Similar figures have the following properties: corresponding angle measures are equal, and corresponding lengths are proportional with ratio k; consequently, corresponding areas are in the ratio k² and corresponding volumes in the ratio k³.
Example: Consider a similarly proportioned 5'6", 67 kg math teacher and a 6'8" basketball player. Find the basketball player's mass. Solution: Their masses correspond not directly with height, but with its cube. Thus to find the corresponding mass of the basketball player we form the proportion (66"/80")³ = 67/x. We need to be able to easily solve for x = 119 kg (or 263 pounds). More on that below.
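A quick numerical check of this example (code of my own, not part of the lesson): because mass scales with the cube of the length ratio for similar solids, the proportion can be solved in a couple of lines. The 2.2046 lb/kg factor used for the final conversion is approximate.

#include <iostream>

int main()
{
    // Heights in inches and the known mass in kilograms (from the example).
    double teacherHeight = 66.0;   // 5'6"
    double playerHeight  = 80.0;   // 6'8"
    double teacherMass   = 67.0;   // kg

    // For similar solids, mass scales with the cube of the length ratio:
    // (66/80)^3 = 67/x  =>  x = 67 * (80/66)^3
    double ratio = playerHeight / teacherHeight;
    double playerMass = teacherMass * ratio * ratio * ratio;

    std::cout << "Estimated mass of the player: " << playerMass << " kg\n";
    std::cout << "In pounds: " << playerMass * 2.2046 << " lb\n";
    return 0;
}

This prints roughly 119 kg, or about 263 pounds, in agreement with the solution above.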
Example: One of the biggest controversies regarding scale models and proportions (see below) involves the infamous Barbie doll. It debuted in 1959, thus has had a long history over several generations, has sold over a billion since, and two currently sell each second somewhere. The average American girl will possess eight. Feminists hate her, labelling her a "vacuous blonde." Others blame her for bulimia, anorexia, and teen suicide. In 1998 Mattel released two dozen new Barbies, a third of which had modified body shapes to "appear more contemporary." Perhaps it was in response to complaints about her reputed 38-18-34 dimensions. Thus the modified dolls were thicker waisted, flatter chested, had flatter feet, and were more tight lipped. Consider an old-fashioned Barbie measuring 11.133" tall which you wish to scale up to 5'9". If you have access to a real Barbie, measure her bust, waist, and hips, then compute the corresponding ratios, and compare them with the values of 35-?-31 found by one author.
If G is the pre-image and G' is the image, then G' = Sk(G). Transformations like these can be represented with matrices. The product of two 2×2 matrices |a11 a12; a21 a22| and |b11 b12; b21 b22| is
|a11×b11 + a12×b21 a11×b12 + a12×b22|
|a21×b11 + a22×b21 a21×b12 + a22×b22|
Now consider what happens if one factor is the 2×2 identity matrix I2 = |1 0; 0 1|: the other matrix is left unchanged.
Continuing, consider what happens when the matrix |2 0; 0 2| is used. With this notation, a size change of 2 can be represented by transforming every point P = (x, y), written as the column matrix P = |x; y|, by multiplying: |2 0; 0 2| × |x; y| = |2x; 2y|.
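The matrix form of a size change is easy to experiment with. The short C++ sketch below is not part of the lesson; it multiplies the matrix for S2 by a point written as a column matrix (the sample point is arbitrary) and prints the image point.

#include <iostream>

// Apply a 2x2 matrix to a point (x, y) treated as a column matrix.
void applyMatrix(const double m[2][2], double x, double y,
                 double& xOut, double& yOut)
{
    xOut = m[0][0] * x + m[0][1] * y;
    yOut = m[1][0] * x + m[1][1] * y;
}

int main()
{
    // Matrix for the size change S2: the scale factor k = 2 on the diagonal.
    double s2[2][2] = { {2.0, 0.0},
                        {0.0, 2.0} };

    double x = 3.0, y = -1.5;   // an arbitrary pre-image point
    double xImage = 0.0, yImage = 0.0;

    applyMatrix(s2, x, y, xImage, yImage);

    std::cout << "(" << x << ", " << y << ") maps to ("
              << xImage << ", " << yImage << ")\n";
    return 0;
}

Replacing the diagonal entries with any other nonzero k gives the corresponding size change; k between 0 and 1 contracts the figure, and k greater than 1 expands it.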
When working with proportions, one often encounters what is called cross-multiplication. Before getting into that, let's be clear about the names of the various terms involved in a proportion. Two naming systems are in common use. The first method numbers the terms. Thus if the proportion is a/b = c/d, a is term 1, b is term 2, c is term 3, and d is term 4. The first and fourth terms are known as the extremes, whereas the second and third terms are known as the means.
Cross-multiplication is often taught as a new trick, but really is the same as clearing the denominators, that is, multiplying by both denominators. Thus starting with a/b = c/d, we can multiply both sides by b and by d and obtain ad = cb, where we have cancelled out common factors. We can get there in one step by multiplying the extremes and setting the product equal to the product of the means. When done in this fashion, we are using the Means-Extremes Property or doing cross multiplication. The point is that this short-cut method is nothing new, and the same concerns about multiplying by something which could be zero apply. When the means are equal (cf. problem 12.5#16) we obtain the geometric mean, which will be discussed more in the next lesson and was already covered in statistics. Just like we had extended ratios, we can have extended proportions. The law of sines in lesson 7 is a good example.
The extremes Mr. Gulliver encountered emphasize the biological limitations of scale. Small creatures can exist with only an exoskeleton, whereas large creatures require the internal structure of a skeleton. Even then, the large legs of an elephant are in sharp contrast with the long neck and narrow legs of a giraffe. Without the buoyancy of water, a blue whale quickly suffocates. Similarly, a mosquito can walk on water, but if hit by a rain drop might never get out of the water alive (awww). (The large mammals which existed after the demise of the dinosaurs may have been aided by an atmosphere with a higher oxygen content than we have today.)
The human body has an average shape, which is getting taller and heavier in recent years, but there is great variation in the size of various parts. Presumably such variation follows the normal (bell-shaped) distribution, but exercise, diet, drugs, and surgery are often used to attain certain shapes deemed more attractive or desirable. As styles change, the task of finding clothes which "fit" becomes challenging as various cracks, crevices, and protuberances are more or less concealed or revealed. In addition to the generational differences, there are cultural differences of which many remain unaware. An interesting project our textbook suggests on page 730 is to use your weight/height to create a chart for similarly proportioned people ranging from 4'6" to 6'6", in two-inch increments. Do standard height/weight charts assume similarity?
Notes: music structure (page 690, 12.1#19, dominant 7th chord); use graph paper for cube net (page 719); put students in means/extremes proportion; page 725: note kilocalories (not calories). | http://www.andrews.edu/~calkins/math/webtexts/geom12.htm | 13 |
12 | Sojourner: Roving on Mars for the First Time
The Mars Pathfinder mission was an important first step in rovers for NASA. The 1997 mission included the first successful rover on Mars, called Sojourner, which was delivered to the surface by the Pathfinder lander.
One of the mission's innovations was Pathfinder's landing system. Rather than using conventional rockets to touch down on the surface, NASA decided to use a cocoon of airbags. After going through the higher parts of the atmosphere, Pathfinder deployed a parachute and also shed the heat shield that had protected the spacecraft from the heat that built up during entry.
The landing system lowered the spacecraft from its backshell, using a tether. Then came the tricky part: airbags inflated just eight seconds before landing. Rocket engines on the backshell fired in the moments before landing to smooth the jolt when Pathfinder hit.
Pathfinder then dropped to the surface from a distance of up to 100 feet (30 meters) from the ground, bouncing across the plain for several minutes before rolling safely to a stop.
This was the first time such a system had been used, and NASA was pleased to see that it went so well.
Cartoon character rocks
Pathfinder's landing spot on Mars was Ares Vallis, an ancient flood plain chosen because it was relatively safe, and also a spot with lots of rocks for Sojourner to analyze.
While Pathfinder was solemnly renamed the Carl Sagan Memorial Station in honor of the recently deceased science popularizer, many of the rocks surrounding Sojourner were given more whimsical names named after cartoon characters.
Sojourner's first stop was analyzing the composition of "Barnacle Bill," a rock just a little way away from the landing assembly. The two-foot-long rover used an Alpha Proton X-ray Spectrometer (APXS) to look at the elements inside the rock, and found that it had more silica than the surrounding environment.
Silica, a compound found in igneous rocks, is a sign of thermal activity. Based on Barnacle Bill's composition, NASA scientists theorized Mars could have had a more interesting geologic history than previously thought.
APXS found very different results for some of the other rocks it looked at. "Yogi," for example, appeared to be a basalt, although the results could have been affected by a thin dusting of soil on the rock. "Scooby-Doo" was more sedimentary, similar to the area surrounding it.
What excited scientists the most, though, were the pictures beaming back to Earth showing rounded pebbles and "conglomerate" rocks that indicated something pushing different types of soil together in the past.
NASA said this was evidence of a more water-rich planet, and suggested the plain Sojourner sampled was formed from floods that originated nearby the landing site.
Legacy of Mars science
While Sojourner strolled on the surface, Pathfinder was on a mission of its own. It relayed information from the rover to Earth, and also took pictures of the Martian sky and surroundings.
On Sol 24, Pathfinder captured a sunset showing blue color near the Sun. The blue is caused by scattering in the dusty Martian atmosphere. It shows up best during sunrise and sunset when the light passes through the densest part of the dust, close to the ground.
Pathfinder also carried magnets on board to measure the magnetic properties of Martian dust. Pictures taken between Sol 10 and Sol 66 showed more dust coating the magnets, which gave scientists a sense of what the dust was made of.
NASA last heard from Pathfinder on Sol 83 of the mission, which in Earth time was Sept. 27, 1997. The agency suspected the cause of the failure was rooted in a dead battery that was affected by repeated charges and discharges.
The lander and rover communicated to Earth nearly three times longer than projected, sending back enough scientific information to keep researchers here busy for years.
Sojourner's APXS experiment was used again on Spirit, Opportunity and Curiosity, each time getting more robust and yielding more data for researchers.
The rover travelled nearly 330 feet (100 meters) during its three months on Mars, never going more than 40 feet (12 meters) from the lander. The rover sent more than 550 pictures, and the lander more than 16,500 images.
Many of these pictures went up nearly real-time on the Pathfinder/Sojourner website, making use of the Internet at a time when interest in the technology exploded among the public. NASA published updates from the mission several times a week.
What's more, the agency still preserves the website in its circa 1997 format, providing a historical time capsule of how early website design was used to talk about space missions.
— Elizabeth Howell, SPACE.com Contributor | http://www.space.com/17745-mars-pathfinder-sojourner-rover.html | 13 |
17 | Igneous rocks form when magma (molten rock) cools and solidifies. The solidification process may or may not involve crystallization, and it may take place either below the Earth's surface to generate "intrusive" (plutonic) rocks or on the surface to produce "extrusive" (volcanic) rocks. The magma may be derived from partial melts of pre-existing rocks in the Earth's mantle or crust. The melting may be the result of an increase in temperature, decrease in pressure, change in composition of the rock, or a combination of these factors.
Igneous rocks make up approximately 95 percent of the upper part of the Earth's crust, but their great abundance is hidden from the surface by a relatively thin but widespread layer of sedimentary and metamorphic rocks. More than 700 types of igneous rocks have been described, most of which were formed beneath the surface of the Earth's crust.
Igneous rocks are important for several reasons:
- Their minerals and global chemistry provide information about the composition of the mantle, from which some igneous rocks are extracted, and the temperature and pressure conditions that led to this extraction.
- Their ages can be calculated by various methods of radiometric dating. By comparing their ages with those of adjacent geological strata, a time sequence of events can be put together.
- Their features are usually characteristic of a specific tectonic environment, allowing scientists to reconstitute tectonic processes.
- Under some circumstances, they host important mineral deposits (ores). For example, ores of tungsten, tin, and uranium are usually associated with granites, and ores of chromium and platinum are commonly associated with gabbros.
Morphology and setting
As noted above, igneous rocks may be either intrusive (plutonic) or extrusive (volcanic).
Intrusive igneous rocks
Intrusive igneous rocks are formed from magma that cools and solidifies within the earth. Surrounded by pre-existing rock (called country rock), the magma cools slowly, and as a result these rocks are coarse grained. The mineral grains in such rocks can generally be identified with the naked eye. Intrusive rocks can also be classified according to the shape and size of the intrusive body and its relation to the other formations into which it intrudes. Typical intrusive formations are batholiths, stocks, laccoliths, sills and dikes. The extrusive types usually are called lavas.
The central cores of major mountain ranges consist of intrusive igneous rocks, usually granite. When exposed by erosion, these cores (called batholiths) may occupy huge areas of the Earth's surface.
Coarse-grained intrusive igneous rocks which form at depth within the earth are termed abyssal; intrusive igneous rocks which form near the surface are termed hypabyssal.
Extrusive igneous rocks
Extrusive igneous rocks are formed at the Earth's surface as a result of the partial melting of rocks within the mantle and crust.
The melt, with or without suspended crystals and gas bubbles, is called magma. Magma rises because it is less dense than the rock from which it was created. When it reaches the surface, magma extruded onto the surface, either beneath water or air, is called lava. Eruptions of volcanoes into air are termed subaerial, whereas those occurring underneath the ocean are termed submarine. Black smokers and mid-ocean ridge basalt are examples of submarine volcanic activity.
Magma which erupts from a volcano behaves according to its viscosity, determined by temperature, composition, and crystal content. High-temperature magma, most of which is basaltic in composition, behaves in a manner similar to thick oil and, as it cools, treacle. Long, thin basalt flows with pahoehoe surfaces are common. Intermediate composition magma such as andesite tends to form cinder cones of intermingled ash, tuff and lava, and may have viscosity similar to thick, cold molasses or even rubber when erupted. Felsic magma such as rhyolite is usually erupted at low temperature and is up to 10,000 times as viscous as basalt. Volcanoes with rhyolitic magma commonly erupt explosively, and rhyolitic lava flows typically are of limited extent and have steep margins, because the magma is so viscous.
Felsic and intermediate magmas that erupt often do so violently, with explosions driven by the release of dissolved gases—typically water but also carbon dioxide. Explosively erupted material is called tephra, and the resulting volcanic deposits are called pyroclastic; they include tuff, agglomerate, and ignimbrite. Fine volcanic ash is also erupted and forms ash tuff deposits which can often cover vast areas.
Because lava cools and crystallizes rapidly, it is fine grained. If the cooling has been so rapid as to prevent the formation of even small crystals after extrusion, the resulting rock may be mostly glass (such as the rock obsidian). If the cooling of the lava happened slowly, the rocks would be coarse-grained.
Because the minerals are fine-grained, it is much more difficult to distinguish between the different types of extrusive igneous rocks than between different types of intrusive igneous rocks. Generally, the mineral constituents of fine-grained extrusive igneous rocks can only be determined by examination of thin sections of the rock under a microscope, so only an approximate classification can usually be made in the field.
Igneous rock are classified according to mode of occurrence, texture, mineralogy, chemical composition, and the geometry of the igneous body.
The classification of the many types of different igneous rocks can provide us with important information about the conditions under which they formed. Two important variables used for the classification of igneous rocks are particle size, which largely depends upon the cooling history, and the mineral composition of the rock. Feldspars, quartz or feldspathoids, olivines, pyroxenes, amphiboles, and micas are all important minerals in the formation of almost all igneous rocks, and they are basic to the classification of these rocks. All other minerals present are regarded as nonessential in almost all igneous rocks and are called accessory minerals. Types of igneous rocks with other essential minerals are very rare, and these rare rocks include those with essential carbonates.
In a simplified classification, igneous rock types are separated on the basis of the type of feldspar present, the presence or absence of quartz, and, in rocks with no feldspar or quartz, the type of iron or magnesium minerals present. Rocks containing quartz (silica in composition) are silica-oversaturated. Rocks with feldspathoids are silica-undersaturated, because feldspathoids cannot coexist in a stable association with quartz.
Igneous rocks which have crystals large enough to be seen by the naked eye are called phaneritic; those with crystals too small to be seen are called aphanitic. Generally speaking, phaneritic implies an intrusive origin; aphanitic an extrusive one.
An igneous rock with larger, clearly discernible crystals embedded in a finer-grained matrix is termed porphyry. Porphyritic texture develops when some of the crystals grow to considerable size before the main mass of the magma crystallizes as finer-grained, uniform material.
- main article Rock microstructure
Texture is an important criterion for the naming of volcanic rocks. The texture of volcanic rocks, including the size, shape, orientation, and distribution of grains and the intergrain relationships, will determine whether the rock is termed a tuff, a pyroclastic lava or a simple lava.
However, texture is only a subordinate part of classifying volcanic rocks, since chemical information most often needs to be gleaned from rocks with an extremely fine-grained groundmass or from airfall tuffs formed from volcanic ash.
Textural criteria are less critical in classifying intrusive rocks, where the majority of minerals will be visible to the naked eye or at least using a hand lens, magnifying glass or microscope. Plutonic rocks tend also to be less texturally varied and less prone to gaining structural fabrics. Textural terms can be used to differentiate different intrusive phases of large plutons, for instance porphyritic margins to large intrusive bodies, porphyry stocks and subvolcanic apophyses. Mineralogical classification is used most often to classify plutonic rocks and chemical classifications are preferred to classify volcanic rocks, with phenocryst species used as a prefix, e.g., "olivine-bearing picrite" or "orthoclase-phyric rhyolite."
- see also List of rock textures and Igneous textures
Igneous rocks can be classified according to chemical or mineralogical parameters:
Chemical: total alkali–silica content (TAS diagram), used for volcanic rock classification when modal or mineralogic data are unavailable (a simple silica-only version is sketched in code after this list):
- acid igneous rocks containing a high silica content, greater than 63 percent SiO2 (examples rhyolite and dacite)
- intermediate igneous rocks containing 52 - 63 percent SiO2 (example andesite)
- basic igneous rocks have low silica 45 - 52 percent and typically high iron - magnesium content (example basalt)
- ultrabasic igneous rocks with less than 45 percent silica. (examples picrite and komatiite)
- alkalic igneous rocks with 5 - 15 percent alkali (K2O + Na2O) content or with a molar ratio of alkali to silica greater than 1:6. (examples phonolite and trachyte)
- Note: the acid-basic terminology is used more broadly in older (generally British) geological literature. In current literature felsic-mafic roughly substitutes for acid-basic.
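As a simple illustration of the silica ranges above, the short C++ sketch below (mine, not from the article) maps a rock's weight-percent SiO2 onto the four categories. It ignores the alkali axis of the full TAS scheme, and the sample values fed to it are made-up inputs chosen only to hit each range.

#include <iostream>
#include <string>

// Classify a rock by weight-percent SiO2 using the ranges given in the text.
std::string silicaClass(double percentSiO2)
{
    if (percentSiO2 > 63.0)  return "acid (e.g., rhyolite, dacite)";
    if (percentSiO2 >= 52.0) return "intermediate (e.g., andesite)";
    if (percentSiO2 >= 45.0) return "basic (e.g., basalt)";
    return "ultrabasic (e.g., picrite, komatiite)";
}

int main()
{
    double samples[] = { 72.0, 58.0, 49.0, 41.0 };
    for (double s : samples)
        std::cout << s << " percent SiO2 -> " << silicaClass(s) << "\n";
    return 0;
}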
Chemical classification also extends to differentiating rocks which are chemically similar according to the TAS diagram, for instance;
- Ultrapotassic; rocks containing molar K2O/Na2O greater than 3
- Peralkaline; rocks containing molar (K2O + Na2O)/ Al2O3 greater than 1
- Peraluminous; rocks containing molar (K2O + Na2O)/ Al2O3 less than 1
An idealized mineralogy (the normative mineralogy) can be calculated from the chemical composition, and the calculation is useful for rocks too fine-grained or too altered for identification of minerals that crystallized from the melt. For instance, normative quartz classifies a rock as silica-oversaturated; an example is rhyolite. A normative feldspathoid classifies a rock as silica-undersaturated; an example is nephelinite.
A rock's texture depends on the size, shape, and arrangement of its mineral crystals.
History of classification
In 1902 a group of American petrographers brought forward a proposal to discard all existing classifications of igneous rocks and to substitute for them a "quantitative" classification based on chemical analysis. They showed how vague and often unscientific was much of the existing terminology and argued that as the chemical composition of an igneous rock was its most fundamental characteristic it should be elevated to prime position.
Geological occurrence, structure, and mineralogical constitution, the hitherto-accepted criteria for the discrimination of rock species, were relegated to the background. The completed rock analysis is first to be interpreted in terms of the rock-forming minerals which might be expected to be formed when the magma crystallizes, e.g., quartz, feldspars, olivine, akermannite, feldspathoids, magnetite, corundum and so on, and the rocks are divided into groups strictly according to the relative proportion of these minerals to one another. (Cross 1903)
For volcanic rocks, mineralogy is important in classifying and naming lavas. The most important criterion is the phenocryst species, followed by the groundmass mineralogy. Often, where the groundmass is aphanitic, chemical classification must be used to properly identify a volcanic rock.
Mineralogic contents - felsic versus mafic
- felsic rock, with predominance of quartz, alkali feldspar and/or feldspathoids: the felsic minerals; these rocks (e.g., granite) are usually light coloured, and have low density.
- mafic rock, with predominance of mafic minerals pyroxenes, olivines and calcic plagioclase; these rocks (example, basalt) are usually dark coloured, and have higher density than felsic rocks.
- ultramafic rock, with more than 90 percent of mafic minerals (e.g., dunite)
For intrusive, plutonic and usually phaneritic igneous rocks where all minerals are visible at least via microscope, the mineralogy is used to classify the rock. This usually occurs on ternary diagrams, where the relative proportions of three minerals are used to classify the rock.
The following table is a simple subdivision of igneous rocks according both to their composition and mode of occurrence.
|Mode of occurrence||Acid||Intermediate||Basic||Ultrabasic|
|Intrusive||Granite||Diorite||Gabbro||Peridotite|
|Extrusive||Rhyolite||Andesite||Basalt||Komatiite|
For a more detailed classification see QAPF diagram.
Example of classification
Granite is an igneous intrusive rock (crystallized at depth), with felsic composition (rich in silica and with more than 10 percent of felsic minerals) and phaneritic, subeuhedral texture (minerals are visible for the unaided eye and some of them retain original crystallographic shapes). Granite is the most abundant intrusive rock that can be found in the continents.
The Earth's crust averages about 35 kilometers thick under the continents, but averages only some 7-10 kilometers beneath the oceans. The continental crust is composed primarily of sedimentary rocks resting on crystalline basement formed of a great variety of metamorphic and igneous rocks including granulite and granite. Oceanic crust is composed primarily of basalt and gabbro. Both continental and oceanic crust rest on peridotite of the mantle.
Rocks may melt in response to a decrease in pressure, to a change in composition such as an addition of water, to an increase in temperature, or to a combination of these processes. Other mechanisms, such as melting from impact of a meteorite, are less important today, but impacts during accretion of the Earth led to extensive melting, and the outer several hundred kilometers of our early Earth probably was an ocean of magma. Impacts of large meteorites in last few hundred million years have been proposed as one mechanism responsible for the extensive basalt magmatism of several large igneous provinces.
Decompression melting occurs because of a decrease in pressure. The solidus temperatures of most rocks (the temperatures below which they are completely solid) increase with increasing pressure in the absence of water. Peridotite at depth in the Earth's mantle may be hotter than its solidus temperature at some shallower level. If such rock rises during the convection of solid mantle, it will cool slightly as it expands in an adiabatic process, but the cooling is only about 0.3°C per kilometer. Experimental studies of appropriate peridotite samples document that the solidus temperatures increase by 3°C to 4°C per kilometer. If the rock rises far enough, it will begin to melt. Melt droplets can coalesce into larger volumes and be intruded upwards. This process of melting from upward movement of solid mantle is critical in the evolution of the earth.
Decompression melting creates the ocean crust at mid-ocean ridges. Decompression melting caused by the rise of mantle plumes is responsible for creating ocean islands like the Hawaiian islands. Plume-related decompression melting also is the most common explanation for flood basalts and oceanic plateaus (two types of large igneous provinces), although other causes such as melting related to meteorite impact have been proposed for some of these huge volumes of igneous rock.
Effects of water and carbon dioxide
The change of rock composition most responsible for creation of magma is the addition of water. Water lowers the solidus temperature of rocks at a given pressure. For example, at a depth of about 100 kilometers, peridotite begins to melt near 800°C in the presence of excess water, but near or above about 1500°C in the absence of water (Grove and others, 2006). Water is driven out of the ocean lithosphere in subduction zones, and it causes melting in the overlying mantle. Hydrous magmas of basalt and andesite composition are produced directly and indirectly as results of dehydration during the subduction process. Such magmas and those derived from them build up island arcs such as those in the Pacific ring of fire. These magmas form rocks of the calc-alkaline series, an important part of continental crust.
The addition of carbon dioxide is relatively a much less important cause of magma formation than addition of water, but genesis of some silica-undersaturated magmas has been attributed to the dominance of carbon dioxide over water in their mantle source regions. In the presence of carbon dioxide, experiments document that the peridotite solidus temperature decreases by about 200°C in a narrow pressure interval at pressures corresponding to a depth of about 70 km. Magmas of rock types such as nephelinite, carbonatite, and kimberlite are among those that may be generated following an influx of carbon dioxide into a mantle volume at depths greater than about 70 km.
Increase of temperature is the most typical mechanism for formation of magma within continental crust. Such temperature increases can occur because of the upward intrusion of magma from the mantle. Temperatures can also exceed the solidus of a crustal rock in continental crust thickened by compression at a plate boundary. The plate boundary between the Indian and Asian continental masses provides a well-studied example, as the Tibetan Plateau just north of the boundary has crust about 80 kilometers thick, roughly twice the thickness of normal continental crust. Studies of electrical resistivity deduced from magnetotelluric data have detected a layer that appears to contain silicate melt and that stretches for at least 1000 kilometers within the middle crust along the southern margin of the Tibetan Plateau (Unsworth and others, 2005). Granite and rhyolite are types of igneous rock commonly interpreted as products of melting of continental crust because of increases of temperature. Temperature increases also may contribute to the melting of lithosphere dragged down in a subduction zone.
Most magmas are only entirely melt for small parts of their histories. More typically, they are mixes of melt and crystals, and sometimes also of gas bubbles. Melt, crystals, and bubbles usually have different densities, and so they can separate as magmas evolve.
As magma cools, minerals typically crystallize from the melt at different temperatures (fractional crystallization). As minerals crystallize, the composition of the residual melt typically changes. If crystals separate from melt, then the residual melt will differ in composition from the parent magma. For instance, a magma of gabbro composition can produce a residual melt of granite composition if early formed crystals are separated from the magma. Gabbro may have a liquidus temperature near 1200°C, and derivative granite-composition melt may have a liquidus temperature as low as about 700°C. Incompatible elements are concentrated in the last residues of magma during fractional crystallization and in the first melts produced during partial melting: either process can form the magma that crystallizes to pegmatite, a rock type commonly enriched in incompatible elements. Bowen's reaction series is important for understanding the idealised sequence of fractional crystallisation of a magma.
Magma composition can be determined by processes other than partial melting and fractional crystallization. For instance, magmas commonly interact with rocks they intrude, both by melting those rocks and by reacting with them. Magmas of different compositions can mix with one another. In rare cases, melts can separate into two immiscible melts of contrasting compositions.
There are relatively few minerals that are important in the formation of common igneous rocks, because the magma from which the minerals crystallize is rich in only certain elements: silicon, oxygen, aluminium, sodium, potassium, calcium, iron, and magnesium. These are the elements which combine to form the silicate minerals, which account for over ninety percent of all igneous rocks. The chemistry of igneous rocks is expressed differently for major and minor elements and for trace elements. Contents of major and minor elements are conventionally expressed as weight percent oxides (e.g., 51 percent SiO2, and 1.50 percent TiO2). Abundances of trace elements are conventionally expressed as parts per million by weight (e.g., 420 ppm Ni, and 5.1 ppm Sm). The term "trace element" typically is used for elements present in most rocks at abundances less than 100 ppm or so, but some trace elements may be present in some rocks at abundances exceeding 1000 ppm. The diversity of rock compositions has been defined by a huge mass of analytical data—over 230,000 rock analyses can be accessed on the web through a site sponsored by the U. S. National Science Foundation (see the External Link to EarthChem).
The word "igneous" is derived from the Latin igneus, meaning "of fire." Volcanic rocks are named after Vulcan, the Roman name for the god of fire.
Intrusive rocks are also called plutonic rocks, named after Pluto, the Roman god of the underworld.
- Blatt, Harvey, and Robert J. Tracy. 1995. Petrology: Igneous, Sedimentary, and Metamorphic, 2nd ed. New York: W. H. Freeman. ISBN 0716724383
- Cross, Whitman, et al. 1903. Quantitative Classification of Igneous Rocks Based on Chemical and Mineral Characters, With a Systematic Nomenclature. with Introductory Review of Development of Systematic Petrography in the 19th Cent. Chicago: University of Chicago: W. Wesley and Son, 1903. ASIN B00088YNFM
- Farndon, John. 2006. The Practical Encyclopedia of Rocks & Minerals: How to Find, Identify, Collect and Maintain the World's best Specimens, with over 1000 Photographs and Artworks. London: Lorenz Books. ISBN 0754815412
- McBirney, Alexander R. 2006. Igneous Petrology. 3rd ed. Jones & Bartlett. ISBN 0763734489
- Pellant, Chris. 2002. Rocks and Minerals. Smithsonian Handbooks. New York: Dorling Kindersley. ISBN 0789491060
- Shaffer, Paul R., Herbert S. Zim, and Raymond Perlman. 2001. Rocks, Gems and Minerals. Rev. ed. New York: St. Martin's Press. ISBN 1582381321
- Skinner, Brian J., Stephen C. Porter, and Jeffrey Park. 2004. Dynamic Earth: An Introduction to Physical Geology. 5th ed. Hoboken, NJ: John Wiley. ISBN 0471152285
- Igneous Rocks. U.S. Geological Survey. Retrieved April 16, 2007.
- A Web Browser Flow Chart for the Classification of Igneous Rocks. Department of Geology and Geophysics, Louisiana State University. Retrieved April 16, 2007.
- EarthChem. Retrieved April 16, 2007.
- Atlas of Igneous and Metamorphic Rocks, Minerals, and Textures. Geology Department, University of North Carolina. Retrieved April 16, 2007.
| http://www.newworldencyclopedia.org/entry/Igneous_rock | 13 |
12 | These sites are for high school physics students to learn the basics of vectors, vector notation, and how vectors are used in physics.
BBC Vector Notation
Here is an introduction to vectors, including interactive tools on vector notation.
Introduction to Vectors
This tutorial includes descriptions and interactive tools to help students understand vectors.
BBC Vectors in Physics
Here is a short description of the uses of vectors in physics problems, including some sample problems.
Matrix Algebra: Vectors
This PDF file gives a thorough explanation of matrices and their use in vector notation.
Vectors and Matrices
This page includes a quick summary of how to use matrices to describe and add, or multiply vectors. Students should already have an understanding of matrices, and unit vectors.
This site provides an explanation of unit vectors and unit vector notation.
This interactive tool allows students to add two vectors, see the resulting vector, and see the equation for each vector.
Vector Boat Game
This lesson plan includes an interactive game. Students use vectors to steer an imaginary boat to land on an island, catch fish, or avoid a storm.
Vectors and Directions
This site includes description of vectors, practice problems on determining the direction and magnitude of a vector, and information on adding vectors, including some animations.
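As a flavour of the calculations these lessons cover, here is a minimal Python sketch (not taken from any of the linked sites) that finds a vector's magnitude and direction and adds two vectors component by component; the sample components are invented:

```python
import math

def magnitude(v):
    """Length of a 2-D vector (x, y), from the Pythagorean theorem."""
    x, y = v
    return math.hypot(x, y)

def direction_degrees(v):
    """Angle of the vector, measured counterclockwise from the +x axis."""
    x, y = v
    return math.degrees(math.atan2(y, x))

def add(v1, v2):
    """Vectors add component by component."""
    return (v1[0] + v2[0], v1[1] + v2[1])

a = (3.0, 4.0)          # hypothetical displacement: 3 east, 4 north
b = (-1.0, 2.0)
print(magnitude(a), direction_degrees(a))   # 5.0, about 53.1 degrees
print(add(a, b))                            # (2.0, 6.0)
```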
Here are links to explanations of vectors, including adding and subtracting vectors, unit vector notation, and dot and cross products. NOTE: Full version of each lesson requires Geometer's Sketchpad. | http://ethemes.missouri.edu/themes/1886?locale=zh | 13 |
12 | Middle School Studying For Your Learning Style
A series of short videos that deal with such aspects as types of learning styles and how to determine which is best for various types of learning. There are several short videos in this section. All students should watch this to get ideas on how to improve their study habits.
Volcanoes provide clues about what is going on inside Earth. Animations illustrate volcanic processes and how plate boundaries are related to volcanism. The program also surveys the various types of eruptions, craters, cones and vents, lava domes, magma, and volcanic rock. The 1980 eruption of Mount St. Helens serves as one example.
Intrusive Igneous Rocks
Most magma does not extrude onto Earth’s surface but cools slowly deep inside Earth. This magma seeps into crevices in existing rock to form intrusive igneous rocks. Experts provide a graphic illustration of this process and explain the types and textures of rocks such as granite, obsidian, and quartz. Once again, plate tectonics is shown to be involved in the process.
Anyone undertaking a building project must understand mass wasting — the downslope movement of earth under the influence of gravity. Various factors in mass wasting, including the rock’s effective strength and pore spaces, are discussed, as are different types of mass wasting such as creep, slump, and landslides. Images of an actual landslide illustrate the phenomenon.
The weight of a mountain creates enough pressure to recrystallize rock, thus creating metamorphic rocks. This program outlines the recrystallization process and the types of rock it can create — from claystone and slate to schist and garnet-bearing gneiss. The relationship of metamorphic rock to plate tectonics is also covered.
Running Water I: Rivers, Erosion and Deposition
Rivers are the most common land feature on Earth and play a vital role in the sculpting of land. This program shows landscapes formed by rivers, the various types of rivers, the basic parts of a river, and how characteristics of rivers — their slope, channel, and discharge — erode and build the surrounding terrain. Aspects of flooding are also discussed.
Evolution and the Tree of Life
What makes a snake a snake, and a lizard a lizard? What distinguishes one type of lizard from another? And how did so many types of reptiles come to be? Session 6 focuses on questions like these as we continue our study of the fundamentals of evolution. Building upon key ideas introduced
Connecting With the Arts: What is Arts Integration?
This program presents three instructional models for integrating the arts: independent instruction, team-teaching, and collaborations with community resources. Participants will also explore informal, complementary, and interdependent curricular connections, and see examples of what these different types of arts-integrated instruction look like in the classroom.
How to Keep Score in Tennis
How to Keep Score in Tennis. Part of the series: How to Play Tennis. Scoring in tennis is kept by no-ad scoring and regular scoring. Both types of scoring are based on a four point system. Keep score in a tennis match with tips from a certified tennis pro in this free video on tennis.
This is an excellent slideshow (with voice narration) that explores the art of storytelling. The narrator highlights three different types of fiction templates.
Flute Vibrato with Jennifer Grim
There are 2 types of vibrato-one that comes from your throat and one that comes from your diaphragm. This video shows you how to produce both types. (1:40)
Choosing Supplies for Homeschooling
This video shows how to pick out simple homeschooling supplies for each student, which can be used for a variety of subjects. A homeschooler explains how choosing supplies for homeschooling depends on the child's areas of interest, but basic supplies include writing utensils, different types of paper, textbooks and other crafts.
Liverworts, Mosses, and Ferns
This video gives a quick overview of some different types of spore-bearing plants. Dr. Matt von Konrat explains the reproduction of liverworts, mosses, and ferns to a high school group of students. Audio is poor but information is good. Run time 07:50
Plant Reproduction: The Pine Tree
Pine trees actually produce two types of pinecones. This video addresses both the male cone and the female cone as well as their parts. Using computer generated pictures, the narrator explains the reproductive process for a pine tree. The narrator also addresses the special adaptations of the pine needle. Run time 02:41.
Data Quality: Missing Data
This module describes how missing data can be managed while maintaining data quality. It explains how to plan for missing data; defines different types of "missingness;" outlines the benefits of documenting missing data and illustrates how to document missing data; and describes procedures to minimize missing data. Upon completion of this module, students will be able to explain why data managers should strive to minimize missing data and develop a plan to record or code why data are missing.
Professor Jon Lewis on The Godfather
Jon Lewis discusses his new book on The Godfather.
15.992 S-Lab: Laboratory for Sustainable Business (MIT)
How can we translate real-world challenges into future business opportunities? How can individuals, organizations, and society learn and undergo change at the pace needed to stave off worsening problems? Today, organizations of all kinds—traditional manufacturing firms, those that extract resources, a huge variety of new start-ups, services, non-profits, and governmental organizations of all types, among many others—are tackling these very questions. For some, the massive challenges
13 - Banking: Successes and Failures
Banks, which were first created in primitive form by goldsmiths hundreds of years ago, have evolved into central economic institutions that manage the allocation of resources, channel information about productive activities, and offer the public convenient investment vehicles. Although there are several types of banking institutions, including credit unions and Saving and Loan Associations, commercial banks are the largest and most important in the banking system. Banks are designed to address t
Virtual laboratories in Molecular and Cell Biology - Intracellular signalling
A virtual laboratory which allows users to analyse intracellular signalling pathways. The programme allows the student to stimulate cells for different periods of time and analyse phosphorylation/activation of kinases in the signalling pathways, using SDS-PAGE and immunoblotting. Use of different cell types (dominant-negative mutants) and pull-down assays allows them to derive the hierarchy in the signalling pathways. The programme first introduces the theory behind the techniques. It then takes
Use networked services effectively to provide access to information
By developing their awareness of, and effectively using a range of, networked services practitioners enhance their responses to client requests for information. It contains activities and resources to facilitate self-paced learning. Topics include: types of databases, responding to client requests for help, the internet, bibliographic databases, kinetica, new developments and putting it all together. | http://www.nottingham.ac.uk/xpert/scoreresults.php?keywords=Different%20types%20of%20epithel&start=1480&end=1500 | 13 |
10 | New grid computing simulations have shown that the smallest dinosaurs, not the biggest, were the fastest. This work is helping palaeontologists understand how dinosaurs moved around and what roles they played in the prehistoric world.
With its sharp teeth and massive jaws, the T. rex (officially known as Tyrannosaurus rex) is the stuff of nightmares. It’s not surprising that scientists are convinced the T. rex was a carnivorous predator, because of its menacing jaws, but huge teeth don’t tell the whole story. Was it like the modern cheetah, catching its prey in short burst-like sprints? Or was the T. rex a sneaky stalk-and-ambush hunter like the jaguar?
Since we can’t see a real T. rex in action (it disappeared along with the other dinosaurs 65 million years ago), palaeontologists need to look elsewhere to understand its role as a predator and its speed. If zebras were to become extinct, the palaeontologists of the future could probably use living horses or donkeys as comparisons. People looking at dinosaur behavior don’t have that luxury because there is nothing alive today quite like a T. rex. The solution is to create a detailed computer simulation of the animal’s skeleton and muscles.
ISGTW reported last year on computer simulation research done by the Animal Simulation Laboratory at the University of Manchester, UK, which showed that carnivores similar to T. rex, such as Acrocanthosaurus, couldn’t even outrun a bicycle. The computer models indicated that this theropod ("beast-footed") dinosaur had an average running speed of 15 miles per hour (24.5 kilometers per hour) and walked at about 5.5 mph (9 kph). Now, researchers have simulated the running speeds of a number of dinosaurs compared with living animals today. The research suggests the smaller the dinosaur, the faster it was.
William Sellers and Phillip Manning, two palaeontologists from the University of Manchester, used a programme called GaitSym – a simulation environment that respects the real laws of physics (e.g. gravity, inertia) – to model the top running speeds of five types of bipedal dinosaur – Compsognathus, Velociraptor, Dilophosaurus, Allosaurus and T. rex. They also modelled three living animals – the ostrich, the emu and humans – with well-known top speeds to use as comparison.
First, they used the information available from known fossils to reconstruct the animal’s locomotive anatomy and to build a 2D musculoskeletal model. The model specifies, for example, where the joints are, where the muscles are, the weight or mass of the thighs, feet and other parts of the animal, alongside the size and properties of its muscles.
Then, they ‘released’ this virtual robot in GaitSym and told it to run as fast as possible. The key to the model is that the palaeontologists didn’t specify which muscle activation sequence the dinosaurs should use. GaitSym looked for the muscle activation pattern that allowed the animal to cover the most ground in a given amount of time. The program experiments with different combinations of muscle activation patterns and searches for an optimum solution.
Patterns that caused the animal to stagger, stumble or fall were abandoned while promising patterns were selected for further investigation. Each individual computation is not complex, but the problem is that GaitSym needs to go through thousands of muscle activation patterns. This makes the work computationally demanding and impractical to complete using a single computer. Instead, Sellers and Manning accessed the grid computing services provided by the UK’s NW-Grid and used about 170,000 hours of computing time to complete the project in a few months.
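The article does not spell out GaitSym's actual algorithm or interface, so the sketch below is only a schematic of the kind of search it describes: propose a muscle activation pattern, simulate it, throw away patterns that make the virtual animal fall, and keep perturbing the promising ones. The `simulate_distance` function here is a toy stand-in, and every parameter is invented for illustration.

```python
import random

def simulate_distance(pattern):
    """Toy stand-in for the physics run: rewards smooth, moderate activations
    and returns None (a 'fall') for very jerky patterns. A real evaluation
    would integrate the musculoskeletal model forward in time."""
    jerk = sum(abs(a - b) for step, nxt in zip(pattern, pattern[1:])
               for a, b in zip(step, nxt))
    if jerk > 0.5 * len(pattern) * len(pattern[0]):
        return None                                   # the virtual animal stumbles
    return sum(sum(step) for step in pattern) - jerk  # proxy for distance covered

def perturb(pattern, scale=0.05):
    """Randomly nudge every activation value, keeping each in [0, 1]."""
    return [[min(1.0, max(0.0, a + random.uniform(-scale, scale))) for a in step]
            for step in pattern]

def search(n_muscles=8, n_steps=50, n_iterations=5000):
    best = [[random.random() for _ in range(n_muscles)] for _ in range(n_steps)]
    best_score = simulate_distance(best) or 0.0
    for _ in range(n_iterations):
        candidate = perturb(best)
        score = simulate_distance(candidate)
        if score is not None and score > best_score:  # falls are discarded
            best, best_score = candidate, score
    return best, best_score

print(search()[1])
```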
Sellers and Manning reported in their Proceedings of the Royal Society B paper that all simulations generated high-quality running gaits (the way locomotion is achieved using limbs) for the seven tested animals. The computer model also assigned top speeds to living animals that are reasonably close to (although slightly slower than) what is measured in real life.
They found that the smallest dinosaur was also the fastest: little Compsognathus was roughly the size of a turkey, but with a top speed of 64 km per hour it could keep up with a racing greyhound. The Velociraptor and the Dilophosaurus (both species responsible for more than a few fatalities in Jurassic Park) had top speeds of about 38 km per hour, just below what a modern elephant can manage.
The T. rex was the slowest animal in the contest and according to the model could only get up to 29 km per hour. This means that Usain Bolt – the world’s fastest man – could probably outrun it with his 9.58-second 100 metre record at 37.5 km per hour, albeit for a short while at least.
The GaitSym models show that studying dinosaur locomotion is no longer science fiction. Recent advances in software, processing speeds, the availability of high performance computing clusters and grid computing made it possible to create very detailed simulations.
There is a lot to learn from these models, but palaeontologists are aware that they are only best-estimate representations. Simulations cannot provide definitive answers because there is a fair amount of guesswork involved in deciding which values about muscle size for example, to input into the model. This is a difficult problem to solve. We know roughly what dinosaurs looked like, thanks to skeletons, sometimes exquisitely preserved. But muscles rot away and very rarely make it into the fossil record.
Instead of adding precise input values to the model, palaeontologists performed sensitivity analysis tests, where inputs are tested in a given range to see how they affect the model’s behavior. Sensitivity analysis doesn’t give a definite answer of A or B, but it gives a good idea about which factors influenced the animals' speed the most – which is equally important if the ultimate goal is to understand how dinosaurs lived in their world.
Karl Bates, also from the University of Manchester, used sensitivity analysis tests to take a closer look at the Sellers & Manning results. For this work, which was part of his PhD, he focused on the Allosaurus – a huge carnivore, which lived in the Jurassic period, 100 million years before the T. rex – and accessed the grid computing resources provided by the UK’s National Grid Service.
Bates repeated the model runs described above, but instead of inputting precise values, he analyzed the ranges of five input parameters: muscle contraction velocity; force per unit area (a proxy for muscle mass); muscle fiber length; body weight and centre of mass.
The first finding was that body-weight related parameters don’t have much influence on top speed. Changing the total body mass of the Allosaurus to values within 1,100 to 2,300 kilograms has minimal impact on top speed, which ranges between 32.4 and 32.7 kilometers per hour. The position of the center of mass also didn’t seem to have much effect.
What made a real difference were the muscle parameters. Playing with the input values of muscle mass and contraction velocity caused top speed to vary by up to 66% and 42%, respectively.
Sellers and Manning considered that the maximum contraction velocity of the Allosaurus' muscles was 8 per second. But, since he couldn’t know this for sure, Bates tested the model within the 4 to 12 per second range and analysed how this change influenced top speed.
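A one-parameter-at-a-time sensitivity test of this kind is straightforward to sketch: hold every input at a baseline, sweep one parameter across its plausible range, and record how the output responds. The `top_speed_model` below is a made-up placeholder — not Bates' actual Allosaurus model — and only the contraction-velocity and body-mass ranges come from the article.

```python
def top_speed_model(contraction_velocity, muscle_force, body_mass):
    """Hypothetical stand-in for the full musculoskeletal simulation.
    Returns a top speed in km/h; the real model is far more complex."""
    return 20.0 + 1.5 * contraction_velocity + 0.004 * muscle_force - 0.001 * body_mass

baseline = {"contraction_velocity": 8.0,   # per second (the article's baseline)
            "muscle_force": 3000.0,        # hypothetical units
            "body_mass": 1700.0}           # kg, mid-range of 1,100-2,300

ranges = {"contraction_velocity": [4.0, 8.0, 12.0],      # range tested by Bates
          "body_mass": [1100.0, 1700.0, 2300.0]}         # range quoted above

for name, values in ranges.items():
    speeds = []
    for v in values:
        inputs = dict(baseline)
        inputs[name] = v                    # vary one parameter at a time
        speeds.append(top_speed_model(**inputs))
    spread = 100.0 * (max(speeds) - min(speeds)) / min(speeds)
    print(f"{name}: top speed varies by {spread:.0f}% over its range")
```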
Thinking of top speed in possible ranges, instead of precise values, is very informative and allowed the palaeontologists to interpret their data with a greater degree of confidence.
Even considering the maximum values for all muscle parameters, the Allosaurus model was not able to run very fast. In fact, any speed in excess of about 43 km per hour would require extreme adaptations for high speed, including improbable proportions of muscle-to-body weight. There are still many unknowns, but models such as this let us know that the Allosaurus and its cousin T. rex were certainly not the cheetahs of their time, but perhaps more like jaguars.
This is an edited version of a story that first appeared on egi.eu | http://www.isgtw.org/feature/how-fast-could-t-rex-run | 13 |
12 | In a little under seven weeks, the wonderfully complicated Sky Crane will deliver the Mars Science Laboratory (MSL) rover “Curiosity” to the surface of Mars.
But before the descent module lowers the rover on a tether and uses retrorockets to place it gently in Gale Crater, a parachute will slow the whole payload to subsonic speeds. In finalizing the design of MSL’s parachute, NASA looked to mission requirements and tests carried out nearly 50 years ago.
MSL’s parachute is the main source of atmospheric drag. It’s a 64.7 foot (19.7 meter) disk-gap-band style chute deployed by a mortar. The main disk is a dome-shaped canopy with a hole in the top to relieve the air pressure. A gap below the main canopy also lets air vent out to prevent the canopy from rupturing. Under the gap is a fabric band designed to increase its lateral stability by controlling the direction of incoming air.
It’s not easy testing this important piece of hardware on Earth since Mars’ atmosphere is one percent as thick and its gravity is only a third as strong.
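One way to see the problem is a terminal-velocity estimate: for a payload descending under a parachute, drag balances weight, so v = sqrt(2mg / (ρ·Cd·A)). The sketch below uses the 19.7 m canopy mentioned here but otherwise illustrative guesses (payload mass, drag coefficient, air densities), not mission values.

```python
import math

def terminal_velocity(mass_kg, g, air_density, drag_coeff, area_m2):
    """Steady descent speed where drag equals weight: m*g = 0.5*rho*Cd*A*v^2."""
    return math.sqrt(2.0 * mass_kg * g / (air_density * drag_coeff * area_m2))

area = math.pi * (19.7 / 2.0) ** 2          # projected area of a 19.7 m canopy
mass = 2400.0                               # kg, illustrative guess for the payload
cd = 0.6                                    # illustrative drag coefficient

v_earth = terminal_velocity(mass, 9.81, 1.2, cd, area)     # sea-level Earth air
v_mars  = terminal_velocity(mass, 3.71, 0.015, cd, area)   # thin Martian air (guess)
print(f"Earth: ~{v_earth:.0f} m/s   Mars: ~{v_mars:.0f} m/s")
```

The same canopy that floats a payload down gently through Earth's sea-level air leaves it descending several times faster in the thin Martian atmosphere, which is why high-altitude Earth tests are used as a stand-in.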
Simulating these conditions on Earth is possible but expensive, so much so that any high altitude hypersonic tests were deemed prohibitively expensive early on in the MSL development process. So JPL engineers broke the parachute’s job into stages that could be tested individually: mortar deployment, canopy inflation, inflation strength, supersonic performance, and subsonic performance. Luckily, NASA had data on high-altitude hypersonic parachute tests from the late 1960s for parachutes exactly the size of MSL’s.
When NASA began working out the details for the 1976 Viking missions to Mars, the agency was enjoying the inflated budgets that came with the push to land a man on the moon before the end of the 1960s. In this environment, NASA established three programs dedicated to testing parachutes: the Planetary Entry Parachute Program (PEPP), the Supersonic Planetary Entry Detector Program (SPED), and the Supersonic High Altitude Parachute Experiments (SHAPE).
The PEPP program ran sixteen high altitude supersonic deployment tests, eight of which used the DGB style and one tested a 64.7-foot chute at hypersonic speeds in the upper atmosphere.
The method was simple. The parachute had 72 gores or sections (MSL by comparison uses 80 gores to reduce fabric stress and allow the use of lightweight fabric) with 72 suspension lines connecting the parachute to the bridle. The bridle was in turn connected to the payload. In this test’s case, the payload was a 15-foot-diameter analogue spacecraft.
On July 28, 1967, the whole configuration — payload and parachute packed in an aeroshell — was launched on a 26,000,000-cubic-foot balloon from Walker Air Force Base in Roswell, New Mexico. It took three hours for the payload to reach its test altitude close to 130,000 feet over the White Sands Missile Range. At this altitude, the test was an adequate simulation of the environment a parachute would encounter in Mars’ upper atmosphere.
Once at launch height, the payload separated from the balloon. Rocket motors ignited 3.8 seconds later and propelled the stand-in spacecraft above Mach 1.5. The bridle unfurled, pulling the parachute out of its casing. The chute inflated, briefly collapsed, then filled with air again. Within three seconds it was fully inflated and stable.
The only major problem was a tear in two sections of the canopy — a loss of less than 0.5 percent of nominal surface area — after part of the casing ripped through the fabric. But overall the test was successful and proved the feasibility of deploying a 64.7 DGB parachute at supersonic speed in a thin atmosphere. It would inflate and produce enough drag to slow its payload’s rate of descent to a target planet’s surface.
DGB parachutes have been a staple of NASA’s Mars missions for decades. The Viking landers, Mars Pathfinder rover, both MER rovers, and the Mars Phoenix lander all used this type of parachute to reach the surface safely. The Sky Crane might be the most sophisticated, precise, and intricate landing system ever sent to the red planet, but its parachute has nearly 50 years of success behind it.
Image: The parachute for Curiosity passing flight-qualification testing in March and April 2009 inside the world’s largest wind tunnel, at NASA Ames Research Center, Moffett Field, Calif. Credit: NASA/JPL-Caltech | http://news.discovery.com/space/history-of-space/curiositys-retro-parachute-120618.htm | 13 |
70 | Mathematics Grade 4
(1) Students generalize their understanding of place value to 1,000,000, understanding the relative sizes of numbers in each place. They apply their understanding of models for multiplication (equal-sized groups, arrays, area models), place value, and properties of operations, in particular the distributive property, as they develop, discuss, and use efficient, accurate, and generalizable methods to compute products of multi-digit whole numbers. Depending on the numbers and the context, they select and accurately apply appropriate methods to estimate or mentally calculate products. They develop fluency with efficient procedures for multiplying whole numbers; understand and explain why the procedures work based on place value and properties of operations; and use them to solve problems. Students apply their understanding of models for division, place value, properties of operations, and the relationship of division to multiplication as they develop, discuss, and use efficient, accurate, and generalizable procedures to find quotients involving multi-digit dividends. They select and accurately apply appropriate methods to estimate and mentally calculate quotients, and interpret remainders based upon the context.
(2) Students develop understanding of fraction equivalence and operations with fractions. They recognize that two different fractions can be equal (e.g., 15/9 = 5/3), and they develop methods for generating and recognizing equivalent fractions. Students extend previous understandings about how fractions are built from unit fractions, composing fractions from unit fractions, decomposing fractions into unit fractions, and using the meaning of fractions and the meaning of multiplication to multiply a fraction by a whole number.
(3) Students describe, analyze, compare, and classify two-dimensional shapes. Through building, drawing, and analyzing two-dimensional shapes, students deepen their understanding of properties of two-dimensional objects and the use of them to solve problems involving symmetry.
Grade 4 Overview
Operations and Algebraic Thinking
Number and Operations in Base Ten
Number and Operations - Fractions
Measurement and Data
Core Standards of the Course
1. Interpret a multiplication equation as a comparison, e.g., interpret 35 = 5 × 7 as a statement that 35 is 5 times as many as 7 and 7 times as many as 5. Represent verbal statements of multiplicative comparisons as multiplication equations.
2. Multiply or divide to solve word problems involving multiplicative comparison, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem, distinguishing multiplicative comparison from additive comparison.1
3. Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be interpreted. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding.
4. Find all factor pairs for a whole number in the range 1–100. Recognize that a whole number is a multiple of each of its factors. Determine whether a given whole number in the range 1–100 is a multiple of a given one-digit number. Determine whether a given whole number in the range 1–100 is prime or composite.
5. Generate a number or shape pattern that follows a given rule. Identify apparent features of the pattern that were not explicit in the rule itself. For example, given the rule “Add 3” and the starting number 1, generate terms in the resulting sequence and observe that the terms appear to alternate between odd and even numbers. Explain informally why the numbers will continue to alternate in this way.
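As an illustration of standards 4 and 5 above (factor pairs, primes and composites, and "Add 3"-style patterns), here is a short sketch in Python; it is an aside for illustration, not part of the standards text.

```python
def factor_pairs(n):
    """All pairs (a, b) with a * b == n and a <= b, for whole numbers."""
    return [(a, n // a) for a in range(1, int(n ** 0.5) + 1) if n % a == 0]

def is_prime(n):
    """A whole number greater than 1 is prime if its only factor pair is (1, n)."""
    return n > 1 and len(factor_pairs(n)) == 1

def pattern(start, rule, terms):
    """Generate a number pattern from a starting value and a rule."""
    out = [start]
    for _ in range(terms - 1):
        out.append(rule(out[-1]))
    return out

print(factor_pairs(36))                 # [(1, 36), (2, 18), (3, 12), (4, 9), (6, 6)]
print(is_prime(29), is_prime(91))       # True, False (91 = 7 x 13)
print(pattern(1, lambda x: x + 3, 6))   # [1, 4, 7, 10, 13, 16] - terms alternate odd, even
```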
1. Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division.
2. Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
5. Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.
6. Find whole-number quotients and remainders with up to four-digit dividends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.
1. Explain why a fraction a/b is equivalent to a fraction (n × a)/(n × b) by using visual fraction models, with attention to how the number and size of the parts differ even though the two fractions themselves are the same size. Use this principle to recognize and generate equivalent fractions.
2. Compare two fractions with different numerators and different denominators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2. Recognize that comparisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.
- Understand addition and subtraction of fractions as joining and separating parts referring to the same whole.
- Decompose a fraction into a sum of fractions with the same denominator in more than one way, recording each decomposition by an equation. Justify decompositions, e.g., by using a visual fraction model. Examples: 3/8 = 1/8 + 1/8 + 1/8 ; 3/8 = 1/8 + 2/8 ; 2 1/8 = 1 + 1 + 1/8 = 8/8 + 8/8 + 1/8.
- Add and subtract mixed numbers with like denominators, e.g., by replacing each mixed number with an equivalent fraction, and/or by using properties of operations and the relationship between addition and subtraction.
- Solve word problems involving addition and subtraction of fractions referring to the same whole and having like denominators, e.g., by using visual fraction models and equations to represent the problem.
- Understand a fraction a/b as a multiple of 1/b. For example, use a visual fraction model to represent 5/4 as the product 5 × (1/4), recording the conclusion by the equation 5/4 = 5 × (1/4).
- Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 × (2/5) as 6 × (1/5), recognizing this product as 6/5. (In general, n × (a/b) = (n × a)/b.)
- Solve word problems involving multiplication of a fraction by a whole number, e.g., by using visual fraction models and equations to represent the problem. For example, if each person at a party will eat 3/8 of a pound of roast beef, and there will be 5 people at the party, how many pounds of roast beef will be needed? Between what two whole numbers does your answer lie?
5. Express a fraction with denominator 10 as an equivalent fraction with denominator 100, and use this technique to add two fractions with respective denominators 10 and 100.4 For example, express 3/10 as 30/100, and add 3/10 + 4/100 = 34/100.
7. Compare two decimals to hundredths by reasoning about their size. Recognize that comparisons are valid only when the two decimals refer to the same whole. Record the results of comparisons with the symbols >, =, or <, and justify the conclusions, e.g., by using a visual model.
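The fraction standards above can be checked quickly with Python's built-in fractions module; the particular fractions are just the examples quoted in the standards, and note that Python reduces 34/100 to lowest terms automatically.

```python
from fractions import Fraction

# Equivalence: (n*a)/(n*b) reduces back to a/b
print(Fraction(15, 9) == Fraction(5, 3))                 # True

# Comparison: 2/3 = 8/12 and 3/4 = 9/12, so 2/3 < 3/4
print(Fraction(2, 3) < Fraction(3, 4))                   # True

# Tenths and hundredths: 3/10 + 4/100 = 30/100 + 4/100 = 34/100
total = Fraction(3, 10) + Fraction(4, 100)
print(total, float(total))                               # 17/50 (= 34/100), 0.34
```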
1. Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft is 12 times as long as 1 in. Express the length of a 4 ft snake as 48 in. Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36), ...
2. Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.
3. Apply the area and perimeter formulas for rectangles in real world and mathematical problems. For example, find the width of a rectangular room given the area of the flooring and the length, by viewing the area formula as a multiplication equation with an unknown factor.
4. Make a line plot to display a data set of measurements in fractions of a unit (1/2, 1/4, 1/8). Solve problems involving addition and subtraction of fractions by using information presented in line plots. For example, from a line plot find and interpret the difference in length between the longest and shortest specimens in an insect collection.
- An angle is measured with reference to a circle with its center at the common endpoint of the rays, by considering the fraction of the circular arc between the points where the two rays intersect the circle. An angle that turns through 1/360 of a circle is called a “one-degree angle,” and can be used to measure angles.
- An angle that turns through n one-degree angles is said to have an angle measure of n degrees.
7. Recognize angle measure as additive. When an angle is decomposed into non-overlapping parts, the angle measure of the whole is the sum of the angle measures of the parts. Solve addition and subtraction problems to find unknown angles on a diagram in real world and mathematical problems, e.g., by using an equation with a symbol for the unknown angle measure.
2. Classify two-dimensional figures based on the presence or absence of parallel or perpendicular lines, or the presence or absence of angles of a specified size. Recognize right triangles as a category, and identify right triangles.
3. Recognize a line of symmetry for a two-dimensional figure as a line across the figure such that the figure can be folded along the line into matching parts. Identify line-symmetric figures and draw lines of symmetry.
These materials have been produced by and for the teachers of the State of Utah. Copies of these materials may be freely reproduced for teacher and classroom use. When distributing these materials, credit should be given to Utah State Office of Education. These materials may not be published, in whole or part, or in any other format, without the written permission of the Utah State Office of Education, 250 East 500 South, PO Box 144200, Salt Lake City, Utah 84114-4200.
For more information about this core curriculum, contact the USOE Specialist, David Smith, or visit the Mathematics - Elementary Home Page. For general questions about Utah's Core Curriculum, contact the USOE Curriculum Director, Sydnee Dickson. | http://www.uen.org/core/core.do?courseNum=5140 | 13
11 | How to Create Forms in Microsoft Word
Forms can be created in Microsoft Word by going to "View," selecting "Tools" and clicking on "Forms" to customize a form with blank spaces and drop-down menus. Create form documents in Word with a tutorial from a computer consultant in this video on computer programs.
Watch this video to learn all about copyright and how it applies to you. (6:56)
Why Is Plagiarism Wrong?
Why Is Plagiarism Wrong?. Part of the series: Writing and Education. Plagiarism is wrong because it is stealing someone's intellectual property, it creates a false impression of individual abilities, and it deprives one of learning opportunities. (01:49)
Envelope Money Management System
This video explains using an envelope/cash system to budget monthly items such as gas and spending money. The labeled envelopes and pretend money are shown, and a narrator explains the process. (This video could be used to show younger students how to divide their allowance, gifts, etc., into different spending categories.) (3:52)
Solving Linear Equations: Variables in Numerator and Denominator
The instructor uses an electronic chalkboard to demonstrate how to solve linear equations in one variable. Examples include one-step and multi-step with variable in the numerator and denominator. This video is appropriate for high school students. Solving linear equations with variable expressions in the denominators of fractions.
Writing an Equation to Describe a Table
In Algebra, sometimes we are given points and asked to write an equation to describe them. This video clip describes the methods we can use for writing an equation to describe a table. For example, if the table describes a line, we use the y-intercept and calculate the slope to write the equation. To fully understand this concept, students should know how to plot points and how to interpret graphs. (1:15)
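For the linear case described here, two rows of the table are enough: the slope is the change in y over the change in x, and the intercept follows from either point. A minimal sketch in Python, with invented table values:

```python
def line_from_points(p1, p2):
    """Return slope m and intercept b of the line y = m*x + b through two points."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)
    b = y1 - m * x1
    return m, b

# Hypothetical table rows (x, y): (0, 2), (1, 5), (2, 8), ...
m, b = line_from_points((0, 2), (1, 5))
print(f"y = {m}x + {b}")        # y = 3.0x + 2.0
```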
How to Use the Gimp Free Graphics Editor
How to Use the Gimp Free Graphics Editor-first download the program. (1:30)
~Video begins with an Ad~
The Great Pacific Garbage Patch (Gorilla in the Greenhouse)
This is an animation (07:43) to explain the Great Garbage Patch to younger viewers. Hufflebot will do anything to get that Wormulus out of his head, including creating a nation of plastic bags in the middle of the Pacific Ocean...until they get a taste of their own medicine. Created by Jay Golden for SustainLane Media.
Re-Wiring the Brain
Neuroscientist Michael Merzenich lectures on the secrets of the brain's ability to actively re-wire itself. He discusses his research into ways to harness the brain's plasticity to enhance our skills and recover lost function. The ability of the brain to grow and change as we develop is a complex process that progresses in a predictable way.
First-Grade Home School Lesson Subjects
Have you decided to home school your child? Learn all about important school subjects for home schooling first-graders in this education video.
Matt Moskal is a free-lance artist with a BA in Elementary / Special Education. He has taught Kindergarten through 6th grade in the Philadelphia School District since 2003.
Filmmaker: Christopher Rokosz
First-Grade Home School Reading Lessons : Reading Rules for First-Graders
Have you decided to home school your child? Learn how to teach reading rules to home schooled first graders in this education video.
First-Grade Home School Reading Lessons : Double Vowel Reading Lessons
Have you decided to home school your child? Learn how to teach double vowel reading lessons for homeschooled first graders in this education video.
Teaching Kids Simple Special Consonant Blends
Learn how to teach simple, special consonant blends like ch, sh and th, when home schooling kids in this free education video clip taught by an expert teacher.
Cat Dissection Pectoral Muscles
Cat Dissection of the pectoral muscles, xiphihumeralis, pectoantibrachialis, latissimus dorsi, and clavodeltoid. Instructor shows how to find and separate each of the muscles very clearly. Very good video to review before beginning dissection of the chest region.
Caesar part 3 of 5
In part 3 Julius Caesar formed an unofficial triumvirate with Marcus Licinius Crassus and Gnaeus Pompeius Magnus which dominated Roman politics for several years, opposed in the Roman Senate by optimates including Marcus Porcius Cato and Marcus Calpurnius Bibulus. His conquest of Gaul extended the Roman world to the North Sea, and he also conducted the first Roman invasion of Britain in 55 BC. The collapse of the triumvirate, however, led to a stand-off with Pompey and the S
This video will show how to do a gymnastics floor routine and talks about point levels.
Ancient Warriors - The Vikings 1/3
Historical Documentary on The Vikings. Video is dark at times, but is usable for the most part. General overview of the history of the Vikings and famous Viking warriors.
Achilles Book One- Life
This is part 1 of the the life of Achilles. Achilles was a Greek hero of the Trojan War, the central character and the greatest warrior of Homer's Iliad. Achilles also has the attributes of being the most handsome of the heroes assembled against Troy. His life story is depicted through ancient artwork with text at the bottom about his life with background music.
Sonnet no 18: By William Shakespeare
Shakespeare Sonnet 18 being read. Would be good if students had a copy to see what the words "sounded" like when read by a professional actor.
California Gold Rush
The discovery of gold during the 1840s attracts thousands of people hoping to strike it rich in California, resulting in that state being admitted into the union. Mainly talk with little effort to show maps or images.
11 | An Overview of the Human Genome Project
What was the Human Genome Project?
The Human Genome Project (HGP) was the international, collaborative research program whose goal was the complete mapping and understanding of all the genes of human beings. All our genes together are known as our "genome."
The HGP was the natural culmination of the history of genetics research. In 1911, Alfred Sturtevant, then an undergraduate researcher in the laboratory of Thomas Hunt Morgan, realized that he could - and had to, in order to manage his data - map the locations of the fruit fly (Drosophila melanogaster) genes whose mutations the Morgan laboratory was tracking over generations. Sturtevant's very first gene map can be likened to the Wright brothers' first flight at Kitty Hawk. In turn, the Human Genome Project can be compared to the Apollo program bringing humanity to the moon.
The hereditary material of all multi-cellular organisms is the famous double helix of deoxyribonucleic acid (DNA), which contains all of our genes. DNA, in turn, is made up of four chemical bases, pairs of which form the "rungs" of the twisted, ladder-shaped DNA molecules. All genes are made up of stretches of these four bases, arranged in different ways and in different lengths. HGP researchers have deciphered the human genome in three major ways: determining the order, or "sequence," of all the bases in our genome's DNA; making maps that show the locations of genes for major sections of all our chromosomes; and producing what are called linkage maps, complex versions of the type originated in early Drosophila research, through which inherited traits (such as those for genetic disease) can be tracked over generations.
The HGP has revealed that there are probably about 20,500 human genes. The completed human sequence can now identify their locations. This ultimate product of the HGP has given the world a resource of detailed information about the structure, organization and function of the complete set of human genes. This information can be thought of as the basic set of inheritable "instructions" for the development and function of a human being.
The International Human Genome Sequencing Consortium published the first draft of the human genome in the journal Nature in February 2001 with the sequence of the entire genome's three billion base pairs some 90 percent complete. A startling finding of this first draft was that the number of human genes appeared to be significantly fewer than previous estimates, which ranged from 50,000 genes to as many as 140,000. The full sequence was completed and published in April 2003.
Upon publication of the majority of the genome in February 2001, Francis Collins, the director of NHGRI, noted that the genome could be thought of in terms of a book with multiple uses: "It's a history book - a narrative of the journey of our species through time. It's a shop manual, with an incredibly detailed blueprint for building every human cell. And it's a transformative textbook of medicine, with insights that will give health care providers immense new powers to treat, prevent and cure disease."
The tools created through the HGP also continue to inform efforts to characterize the entire genomes of several other organisms used extensively in biological research, such as mice, fruit flies and flatworms. These efforts support each other, because most organisms have many similar, or "homologous," genes with similar functions. Therefore, the identification of the sequence or function of a gene in a model organism, for example, the roundworm C. elegans, has the potential to explain a homologous gene in human beings, or in one of the other model organisms. These ambitious goals required and will continue to demand a variety of new technologies that have made it possible to relatively rapidly construct a first draft of the human genome and to continue to refine that draft. These techniques include:
- DNA Sequencing
- The Employment of Restriction Fragment-Length Polymorphisms (RFLP)
- Yeast Artificial Chromosomes (YAC)
- Bacterial Artificial Chromosomes (BAC)
- The Polymerase Chain Reaction (PCR)
Of course, information is only as good as the ability to use it. Therefore, advanced methods for widely disseminating the information generated by the HGP to scientists, physicians and others are necessary in order to ensure the most rapid application of research results for the benefit of humanity. Biomedical technology and research are particular beneficiaries of the HGP.
However, the momentous implications for individuals and society for possessing the detailed genetic information made possible by the HGP were recognized from the outset. Another major component of the HGP - and an ongoing component of NHGRI - is therefore devoted to the analysis of the ethical, legal and social implications (ELSI) of our newfound genetic knowledge, and the subsequent development of policy options for public consideration.
Last Reviewed: November 8, 2012 | http://www.genome.gov/12011238 | 13 |
12 | the methodical process of logical reasoning; "I can't follow your line of reasoning"
An argument in which the premises strongly support the conclusion; that is, the premises make it reasonable to believe the conclusion.
Reasoning offered in support of the reality of the Aither, based on the non-reducibility of its characteristics to Matter.
A set of premises and a conclusion. Each given statement is a premise. The statement that is arrived at through reasoning is called the conclusion. An argument is valid if the conclusion has been arrived at through accepted forms of reasoning (Lesson 14.3).
In logic, an argument is an attempt to demonstrate the truth of an assertion called a conclusion, based on the truth of a set of assertions called premises. The process of demonstration of deductive (see also deduction) and inductive reasoning shapes the argument, and presumes some kind of communication, which could be part of a written text, a speech or a conversation. | http://www.metaglossary.com/meanings/2790810/ | 13 |
Jet engine in which a turbine-driven compressor draws in and compresses air, forcing it into a combustion chamber into which fuel is injected. Ignition causes the gases to expand and to rush first through the turbine and then through a nozzle at the rear. Forward thrust is generated as a reaction to the rearward momentum of the exhaust gases. The turbofan or fanjet, a modification of the turbojet, came into common use in the 1960s. In the turbofan, some of the incoming air is bypassed around the combustion chamber and is accelerated to the rear by a turbine-operated fan. It moves a much greater mass of air than the simple turbojet, providing advantages in power and economy. See also ramjet.
Learn more about turbojet with a free trial on Britannica.com.
Turbojets are the oldest kind of general purpose jet engines. Two engineers, Frank Whittle in the United Kingdom and Hans von Ohain in Germany, developed the concept independently during the late 1930s, although credit for the first turbojet is given to Whittle who submitted the first proposal and held the first patent.
Turbojets consist of an air inlet, an air compressor, a combustion chamber, a gas turbine (that drives the air compressor) and a nozzle. The air is compressed into the chamber, heated and expanded by the fuel combustion and then allowed to expand out through the turbine into the nozzle where it is accelerated to high speed to provide propulsion.
Turbojets are quite inefficient (if flown below about Mach 2) and very noisy. Most modern aircraft use turbofans instead for economic reasons. Turbojets are still very common in medium range cruise missiles, due to their high exhaust speed, low frontal area and relative simplicity.
On 27 August, 1939 the Heinkel He 178 became the world's first aircraft to fly under turbojet power, thus becoming the first practical jet plane. The first two operational turbojet aircraft, the Messerschmitt Me 262 and then the Gloster Meteor entered service towards the end of World War II in 1944.
A turbojet engine is used primarily to propel aircraft. Air is drawn into the rotating compressor via the intake and is compressed to a higher pressure before entering the combustion chamber. Fuel is mixed with the compressed air and ignited by a flame in the eddy of a flame holder. This combustion process significantly raises the temperature of the gas. Hot combustion products leaving the combustor expand through the turbine where power is extracted to drive the compressor. Although this expansion process reduces the turbine exit gas temperature and pressure, both parameters are usually still well above ambient conditions. The gas stream exiting the turbine expands to ambient pressure via the propelling nozzle, producing a high velocity jet in the exhaust plume. If the momentum of the exhaust stream exceeds the momentum of the intake stream, the impulse is positive, thus, there is a net forward thrust upon the airframe.
Early generation jet engines were pure turbojets with either an axial or centrifugal compressor. They were used because they were able to achieve very high altitudes and speeds, much higher than propeller engines, because of a better compression ratio and because of their high exhaust speed. However they were not very fuel efficient. Modern jet engines are mainly turbofans, where a proportion of the air entering the intake bypasses the combustor; this proportion depends on the engine's bypass ratio. This makes turbofans much more efficient than turbojets at high subsonic/transonic and low supersonic speeds.
One of the most recent uses of turbojet engines was the Olympus 593 on Concorde. Concorde used turbojet engines because it turns out that the small cross-section and high exhaust speed is ideal for operation at Mach 2. Concorde's engine burnt less fuel to produce a given thrust for a mile at Mach 2.0 than a modern high-bypass turbofan such as General Electric CF6 at its optimum speed (about Mach 0.86)- however Concorde's airframe was far less efficient than any subsonic aircraft.
Although ramjet engines are simpler in design as they have virtually no moving parts, they are incapable of operating at low flight speeds.
Preceding the compressor is the air intake (or inlet). It is designed to be as efficient as possible at recovering the ram pressure of the air streamtube approaching the intake. The air leaving the intake then enters the compressor. The stators (stationary blades) guide the airflow of the compressed gases.
In most turbojet-powered aircraft, bleed air is extracted from the compressor section at various stages to perform a variety of jobs including air conditioning/pressurization, engine inlet anti-icing and turbine cooling. Bleeding air off decreases the overall efficiency of the engine, but the usefulness of the compressed air outweighs the loss in efficiency.
Several types of compressor are used in turbojets and gas turbines in general: axial, centrifugal, axial-centrifugal, double-centrifugal, etc.
Early turbojet compressors had overall pressure ratios as low as 5:1 (as do a lot of simple auxiliary power units and small propulsion turbojets today). Aerodynamic improvements, plus splitting the compression system into two separate units and/or incorporating variable compressor geometry, enabled later turbojets to have overall pressure ratios of 15:1 or more. For comparison, modern civil turbofan engines have overall pressure ratios of 44:1 or more.
After leaving the compressor section, the compressed air enters the combustion chamber.
The fuel-air mixture must be brought almost to a stop so that a stable flame can be maintained.
This occurs just after the start of the combustion chamber. The aft part of this flame front is allowed to progress rearward. This ensures that all of the fuel is burned, as the flame becomes hotter when it leans out, and because of the shape of the combustion chamber the flow is accelerated rearwards. Some pressure drop is required, as it is the reason why the expanding gases travel out the rear of the engine rather than out the front. Less than 25% of the air is involved in combustion, in some engines as little as 12%, the rest acting as a reservoir to absorb the heating effects of the burning fuel.
Another difference between piston engines and jet engines is that the peak flame temperature in a piston engine is experienced only momentarily in a small portion of the full cycle. The combustor in a jet engine is exposed to the peak flame temperature continuously and operates at a pressure high enough that a stoichiometric fuel-air ratio would melt the can and everything downstream. Instead, jet engines run a very lean mixture, so lean that it would not normally support combustion. A central core of the flow (primary airflow) is mixed with enough fuel to burn readily. The cans are carefully shaped to maintain a layer of fresh unburned air between the metal surfaces and the central core. This unburned air (secondary airflow) mixes into the burned gases to bring the temperature down to something a turbine can tolerate.
If, however, a convergent-divergent "de Laval" nozzle is fitted, the divergent (increasing flow area) section allows the gases to reach supersonic velocity within the nozzle itself. This is slightly more efficient on thrust than using a convergent nozzle. There is, however, the added weight and complexity since the con-di nozzle must be fully variable to cope basically with engine throttling.
The net thrust of a turbojet is given by F_N = (ṁ_air + ṁ_fuel) × V_j − ṁ_air × V, where ṁ_air is the mass flow of air through the engine, ṁ_fuel is the mass flow of fuel, V_j is the jet (exhaust) velocity and V is the flight velocity. The term (ṁ_air + ṁ_fuel) × V_j represents the nozzle gross thrust, while the term ṁ_air × V represents the ram drag of the intake.
Obviously, the jet velocity must exceed that of the flight velocity if there is to be a net forward thrust on the airframe.
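In code, the same bookkeeping is a couple of lines of arithmetic; the mass flows and velocities below are round illustrative numbers, not data for any particular engine.

```python
def net_thrust(m_air, m_fuel, v_jet, v_flight):
    """Net thrust = nozzle gross thrust minus intake ram drag (momentum per second)."""
    gross = (m_air + m_fuel) * v_jet     # momentum leaving the nozzle, N
    ram_drag = m_air * v_flight          # momentum of the captured airstream, N
    return gross - ram_drag

# Illustrative values: 50 kg/s of air, 1 kg/s of fuel, 600 m/s jet, 250 m/s flight
print(net_thrust(50.0, 1.0, 600.0, 250.0))   # 30600 - 12500 = 18100 N
```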
Increasing the overall pressure ratio of the compression system raises the combustor entry temperature. Therefore, at a fixed fuel flow and airflow, there is an increase in turbine inlet temperature. Although the higher temperature rise across the compression system, implies a larger temperature drop over the turbine system, the nozzle temperature is unaffected, because the same amount of heat is being added to the system. There is, however, a rise in nozzle pressure, because overall pressure ratio increases faster than the turbine expansion ratio. Consequently, net thrust increases, while specific fuel consumption (fuel flow/net thrust) decreases.
Thus turbojets can be made more fuel efficient by raising overall pressure ratio and turbine inlet temperature in unison. However, better turbine materials and/or improved vane/blade cooling are required to cope with increases in both turbine inlet temperature and compressor delivery temperature. Increasing the latter requires better compressor materials.
Increasing the useful work extracted from the system, minimizing heat losses (by conduction and so on), and limiting the inlet temperature ratio to a certain level will increase the thermal efficiency of the turbojet engine.
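For reference — this is a textbook idealisation, not something stated in this article — the thermal efficiency of an ideal Brayton cycle, which a turbojet approximates, depends only on the overall pressure ratio r and the ratio of specific heats γ:

```latex
\eta_{\mathrm{th}} = 1 - \left(\frac{1}{r}\right)^{(\gamma - 1)/\gamma}
```

This is consistent with the point above: a higher overall pressure ratio raises the ideal-cycle efficiency, while real engines are limited by the turbine and compressor materials issues described in the following paragraphs.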
Early German engines had serious problems controlling the turbine inlet temperature. A lack of suitable alloys due to war shortages meant the turbine rotor and stator blades would sometimes disintegrate on first operation and never lasted long. Their early engines averaged 10-25 hours of operation before failing—often with chunks of metal flying out the back of the engine when the turbine overheated. British engines tended to fare better, running for 150 hours between overhauls. A few of the original fighters still exist with their original engines, but many have been re-engined with more modern engines with greater fuel efficiency and a longer TBO (such as the reproduction Me-262 powered by General Electric J85s).
The United States had the best materials because of their reliance on turbo/supercharging in high altitude bombers of World War II. For a time some US jet engines included the ability to inject water into the engine to cool the compressed flow before combustion, usually during takeoff. The water would tend to prevent complete combustion and as a result the engine ran cooler again, but the planes would take off leaving a huge plume of smoke.
Today these problems are much better handled, but temperature still limits turbojet airspeeds in supersonic flight. At the very highest speeds, the compression of the intake air raises the temperatures throughout the engine to the point that the turbine blades would melt, forcing a reduction in fuel flow to lower temperatures, but giving a reduced thrust and thus limiting the top speed. Ramjets and scramjets do not have turbine blades; therefore they are able to fly faster, and rockets run even hotter still.
At lower speeds, better materials have increased the critical temperature, and automatic fuel management controls have made it nearly impossible to overheat the engine. | http://www.reference.com/browse/turbojet | 13
24 | When Galileo first turned his telescope to the sky, only the sun, moon and occasional comet were known as anything other than simple points of light. There were five known planets in the early 1600s, but they were only points of light that moved relative to the other points of light. The night sky was a universe unresolved. Through his telescope, Galileo saw for the first time Venus transformed from a "star" to an orb; a disk with phases just like our Moon. Jupiter too became a disk, but instead of revealing phases, the telescope revealed four new worlds: the Galilean satellites. Both of these discoveries in their own way refuted the Earth-centered model of the Universe and helped usher in the revolution of modern science. But they also ushered in another revolution: the resolution revolution. Starting with Galileo, telescopes would get larger, and as their size increased their resolving ability, the ability to discern detail on far away objects, would increase as well.
Today telescopes, both giant engineering marvels and simple back yard models, can resolve gaps in the rings of Saturn and spiral structure in the arms of distant galaxies. Browse through any book or web page on astronomy and one is presented with a kaleidoscope of images of breathtaking nebulae, majestic galaxies, distant quasars and all the moons and planets our solar system has to offer. However, the one celestial object that looks today exactly like it did to Galileo is the one thing that everyone sees when they look up at night. Stars.
If one could look through the 200 inch telescope on Mt. Palomar, one would see stars as nothing more than the same points of light you would see by simply walking outside and looking up. Why are stars the sole holdout in the resolution revolution up to now? Simply put, stars are very small, and very far away. Our sun is a star, but even though it is roughly ten times the diameter of the largest planet in our own solar system, it is also at least a million times smaller than the interstellar nebulae of which we have so many pictures from the Hubble Space Telescope. At its average distance of 150 million kilometers, our sun spans an angular extent (it has an angular diameter) of half a degree. Move it to the distance of even the nearest other star to our solar system, Proxima Centauri at a distance of about 40 trillion kilometers, and it has an angular diameter of 2 millionths of a degree, or 7 milliarcseconds (1 milliarcsecond is 1 thousandth of an arcsecond, which is one sixtieth of an arcminute, which is one sixtieth of a degree).
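As a rough check on this scaling (a sketch only, taking Proxima's distance to be about 40 trillion kilometers, roughly 4.2 light-years), the small-angle arithmetic can be written out in a few lines of Python:

```python
# Small-angle scaling: apparent angular size falls off as 1/distance.
DEG_TO_MAS = 3600.0 * 1000.0           # 1 degree = 3600 arcsec = 3.6e6 milliarcseconds

d_earth   = 150e6    # km, Earth-Sun distance (from the text)
d_proxima = 40e12    # km, assumed distance of Proxima Centauri (~4.2 light-years)
theta_here_deg = 0.5 # degrees, the Sun's angular diameter seen from Earth (from the text)

theta_far_deg = theta_here_deg * d_earth / d_proxima
print(f"{theta_far_deg:.2e} deg = {theta_far_deg * DEG_TO_MAS:.1f} mas")
# -> about 1.9e-06 degrees, i.e. roughly 7 milliarcseconds
```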
The resolution of a telescope (the size of the smallest point you can determine as being separate from another point) is proportional to the wavelength of light you're looking at divided by the diameter of your telescope. Make your telescope twice as big and you can resolve things twice as small. For the Keck telescope, 10 meters in diameter, the resolution at wavelengths used by the human eye is only 20 milliarcseconds (0.02 arcseconds). But the atmosphere through which the Keck telescope must look blurs the ability to see detail, and so on average Keck can resolve objects only about 0.1 arcseconds in size. This is why telescopes in space, like HST, are so important. In order to resolve surface features on our sun at the distance of Proxima Centauri we'd need a telescope at least 40 meters in diameter. A telescope with a single mirror that big is currently impractical, but several smaller telescopes separated by 40 meters will yield the same resolution.
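That proportionality can be evaluated directly. This sketch assumes a mid-visible wavelength of 550 nm and uses the simple ratio of wavelength to diameter with no prefactor, so the exact numbers shift somewhat with the convention chosen:

```python
import math

ARCSEC_PER_RAD = 180.0 / math.pi * 3600.0     # about 206265

def resolution_mas(wavelength_m, aperture_m):
    """Diffraction-limited resolution ~ wavelength / diameter, in milliarcseconds."""
    return (wavelength_m / aperture_m) * ARCSEC_PER_RAD * 1000.0

wavelength = 550e-9                    # m, assumed mid-visible wavelength
for aperture in (10.0, 40.0):          # a Keck-sized mirror vs. a 40 m baseline
    print(f"D = {aperture:4.0f} m -> ~{resolution_mas(wavelength, aperture):.0f} mas")
# roughly 11 mas for 10 m and 3 mas for 40 m; only the 40 m case resolves a 7 mas star
```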
The NPOI on Anderson Mesa is just such a telescope. With it astronomers have been conducting observations designed to measure the angular diameters of stars, and over a hundred stars have had their angular diameters measured to date. Figure 1 shows the relative angular diameters of eight stars measured with the NPOI. For reference, a person standing on the moon would have an angular height of 1.0 milliarcseconds. The smallest star shown, Arneb, also known as alpha Leporis, has an angular diameter of 1.77 milliarcseconds. The largest star shown, Hamal, also known as alpha Arietis, is 6.88 milliarcseconds in diameter.
Figure 1. Relative angular diameters of stars. The man on the Moon has a diameter of 1 millisecond of arc.
But these diameters say nothing about how big the stars themselves are. Is a star that looks small really small, or is it simply very far away? If one can measure the distance to a star, then one can convert from an angular diameter to a true diameter. Distances to many of the brightest stars in the sky have already been determined using the Hipparcos satellite. The order in which stars overlap in Figure 1 is correct in terms of which stars are closer to or farther away from us. Figure 2 shows the same stars as Figure 1, but now at their correct relative linear sizes (each star is scaled by how many times bigger it is than our Sun). Since a person on the moon is obviously microscopic on this scale, our Sun is shown for reference. Notice that the smallest star in Figure 1, Arneb, is really the largest star of this sample.
Figure 2. Stars shown in relative proportion to their true diameters.
So what do we learn from knowing a star's diameter? If we know how bright it is, we can determine how much energy is put out by each square centimeter of its surface. This is directly proportional to how hot it is. By measuring its size we have taken its temperature. Sometimes we learn unexpected things. For the star Polaris -- the North Star -- measuring its size also revealed how the gases deep inside it move. Learning about things we didn't even think to ask about when we started is one of the rewards of looking at the sky in ways that have never been done before. As interferometers get bigger and better the discoveries will only increase. Soon we will not only be able to measure the diameters of stars but image light and dark spots on their surfaces as well. With that advent, we will have passed another milepost in completing the work begun by Galileo 400 years ago.
13 | Kids let’s enjoy the story about seasons.
The story about seasons:
One day, the seasons had an argument. Each one of them said, “I am the best!”
Spring said, “I am when flowers bloom and it is green and fresh everywhere. Birds fly and insects have fun with new flowers.”
Summer said, "Yes, but I am when the sun shines brightly and it feels too hot to do anything. People eat ice-cream, enjoy cold drinks and eat yummy watermelon."
Autumn said, “I am when trees shed their leaves and cover the earth in orange brilliance. The air feels cool.”
Winter said, “I am when people wear woolen clothes, caps and gloves to keep their bodies warm. They get to drink hot chocolate. Birds fly south for the winter because it’s too cold.”
Since they couldn’t decide who was best, they agreed that they were all important because one could not do without the other.
Kindergarten kids can have fun reading the story about the seasons and sharing it with their friends. As a kindergarten math learning activity, children can collect pictures of things used in the summer and winter seasons, make two colorful season pictures, and hang them on the wall. Parents can help the kids read the story. Math Only Math is based on the premise that children do not make a distinction between play and work and learn best when learning becomes play and play becomes learning.
| http://www.math-only-math.com/seasons.html | 17
17 | Saturn's moon Enceladus spreads its influence
A huge doughnut-shaped cloud of water vapor created by the moon encircles Saturn.
September 22, 2011
Chalk up one more feat for Saturn's intriguing moon Enceladus. The small, dynamic moon spews out dramatic plumes of water vapor and ice — first seen by NASA's Cassini spacecraft in 2005. It possesses simple organic particles and may house liquid water beneath its surface. Its geyser-like jets create a gigantic halo of ice, dust, and gas around Enceladus that helps feed Saturn's E ring. Now, thanks again to those icy jets, Enceladus is the only moon in our solar system known to substantially influence the chemical composition of its parent planet.
Water vapor and ice erupt from Saturn's moon Enceladus, the source of a newly discovered doughnut-shaped cloud around Saturn. Credit: NASA/JPL/Space Science Institute
In June, the European Space Agency (ESA) announced that its Herschel Space Observatory, which has important NASA contributions, had found a huge doughnut-shaped cloud, or torus, of water vapor created by Enceladus encircling Saturn. The torus is more than 373,000 miles (600,000 kilometers) across and about 37,000 miles (60,000 km) thick. It appears to be the source of water in Saturn's upper atmosphere.
Though it is enormous, the cloud had not been seen before because water vapor is transparent at most visible wavelengths of light, but Herschel could see the cloud with its infrared detectors. "Herschel is providing dramatic new information about everything from planets in our own solar system to galaxies billions of light-years away," said Paul Goldsmith from NASA's Jet Propulsion Laboratory in Pasadena, California.
The discovery of the torus around Saturn did not come as a complete surprise. NASA's Voyager and Hubble missions had given scientists hints of the existence of water-bearing clouds around Saturn. Then in 1997, ESA’s Infrared Space Observatory confirmed the presence of water in Saturn's upper atmosphere. NASA's Submillimeter Wave Astronomy Satellite also observed water emission from Saturn at far-infrared wavelengths in 1999.
While a small amount of gaseous water is locked in the warm, lower layers of Saturn's atmosphere, it can't rise to the colder, higher levels. To get to the upper atmosphere, water molecules must be entering Saturn's atmosphere from somewhere in space. But from where and how? Those were mysteries until now.
Build the model, and the data will come
The answer came by combining Herschel's observations of the giant cloud of water vapor created by Enceladus' plumes with computer models that researchers had already been developing to describe the behavior of water molecules in clouds around Saturn.
One of these researchers is Tim Cassidy from the University of Colorado, Boulder. "What's amazing is that the model, which is one iteration in a long line of cloud models, was built without knowledge of the observation,” said Cassidy. “Those of us in this small modeling community were using data from Cassini, Voyager, and the Hubble telescope, along with established physics. We weren't expecting such detailed 'images' of the torus, and the match between model and data was a wonderful surprise."
The results show that, though most of the water in the torus is lost to space, some of the water molecules fall and freeze on Saturn's rings, while a small amount — about 3 to 5 percent — gets through the rings to Saturn's atmosphere. This is just enough to account for the water that has been observed there.
Herschel's measurements combined with the cloud models also provided new information about the rate at which water vapor is erupting out of the dark fractures known as "tiger stripes" on Enceladus' southern polar region. Previous measurements by the Ultraviolet Imaging Spectrograph (UVIS) instrument aboard the Cassini spacecraft showed that the moon is ejecting about 440 pounds (200 kilograms) of water vapor every second.
"With the Herschel measurements of the torus from 2009 and 2010 and our cloud model, we were able to calculate a source rate for water vapor coming from Enceladus," said Cassidy. "It agrees very closely with the UVIS finding, which used a completely different method."
"We can see the water leaving Enceladus, and we can detect the end product — atomic oxygen — in the Saturn system," said Cassini UVIS science team member Candy Hansen from the Planetary Science Institute in Tucson, Arizona. "It's very nice with Herschel to track where it goes in the meantime."
While a small fraction of the water molecules inside the torus end up in Saturn's atmosphere, most are broken down into separate atoms of hydrogen and oxygen. "When water hangs out in the torus, it is subject to the processes that dissociate water molecules, first to hydrogen and hydroxide, and then the hydroxide dissociates into hydrogen and atomic oxygen," said Hansen. “This oxygen is dispersed through the Saturn system. Cassini discovered atomic oxygen on its approach to Saturn before it went into orbit insertion. At the time, no one knew where it was coming from. Now we do."
"The profound effect this little moon Enceladus has on Saturn and its environment is astonishing," said Hansen.
| http://www.astronomy.com/en/News-Observing/News/2011/09/Saturns%20moon%20Enceladus%20spreads%20its%20influence.aspx | 13
20 | |, � & Radiation||Alpha Decay||Beta Decay|
|Gamma Decay||Spontaneous Fission||Neutron-Rich Versus Neutron-Poor Nuclides|
|Binding Energy Calculations||The Kinetics of Radioactive Decay||Dating By Radioactive Decay|
Early studies of radioactivity indicated that three different kinds of radiation were emitted, symbolized by the first three letters of the Greek alphabet: α, β, and γ. With time, it became apparent that this classification scheme was much too simple. The emission of a negatively charged β⁻ particle, for example, is only one example of a family of radioactive transformations known as β-decay. A fourth category, known as spontaneous fission, also had to be added to describe the process by which certain radioactive nuclides decompose into fragments of different weight.
Alpha decay is usually restricted to the heavier elements in the periodic table. (Only a handful of nuclides with atomic numbers less than 83 emit an α-particle.) The product of α-decay is easy to predict if we assume that both mass and charge are conserved in nuclear reactions. Alpha decay of the 238U "parent" nuclide, for example, produces 234Th as the "daughter" nuclide: 238U → 234Th + 4He.
The sum of the mass numbers of the products (234 + 4) is equal to the mass number of the parent nuclide (238), and the sum of the charges on the products (90 + 2) is equal to the charge on the parent nuclide.
There are three different modes of beta decay:
Electron (β⁻) emission is literally the process in which an electron is ejected or emitted from the nucleus. When this happens, the charge on the nucleus increases by one. Electron (β⁻) emitters are found throughout the periodic table, from the lightest elements (3H) to the heaviest (255Es). The product of β⁻-emission can be predicted by assuming that both mass number and charge are conserved in nuclear reactions. If 40K is a β⁻-emitter, for example, the product of this reaction must be 40Ca.
Once again the sum of the mass numbers of the products is equal to the mass number of the parent nuclide and the sum of the charge on the products is equal to the charge on the parent nuclide.
Nuclei can also decay by capturing one of the electrons that surround the nucleus. Electron capture leads to a decrease of one in the charge on the nucleus. The energy given off in this reaction is carried by an x-ray photon, which is represented by the symbol hν, where h is Planck's constant and ν is the frequency of the x-ray. The product of this reaction can be predicted, once again, by assuming that mass and charge are conserved.
The electron captured by the nucleus in this reaction is usually a 1s electron because electrons in this orbital are the closest to the nucleus.
A third form of beta decay is called positron (β⁺) emission. The positron is the antimatter equivalent of an electron. It has the same mass as an electron, but the opposite charge. Positron (β⁺) decay produces a daughter nuclide with one less positive charge on the nucleus than the parent.
Positrons have a very short life-time. They rapidly lose their kinetic energy as they pass through matter. As soon as they come to rest, they combine with an electron to form two γ-ray photons in a matter-antimatter annihilation reaction.
Thus, although it is theoretically possible to observe a fourth mode of beta decay corresponding to the capture of a positron, this reaction does not occur in nature.
Note that in all three forms of β-decay of the 40K nuclide (electron emission, electron capture, and positron emission) the mass numbers of the parent and daughter nuclides are the same. All three forms of β-decay therefore interconvert isobars.
The daughter nuclides produced by α-decay or β-decay are often obtained in an excited state. The excess energy associated with this excited state is released when the nucleus emits a photon in the γ-ray portion of the electromagnetic spectrum. Most of the time, the γ-ray is emitted within 10⁻¹² seconds after the α-particle or β-particle. In some cases, gamma decay is delayed, and a short-lived, or metastable, nuclide is formed, which is identified by a small letter m written after the mass number. 60mCo, for example, is produced by the electron emission of 60Fe.
The metastable 60mCo nuclide has a half-life of 10.5 minutes. Since electromagnetic radiation carries neither charge nor mass, the product of γ-ray emission by 60mCo is 60Co.
Nuclides with atomic numbers of 90 or more undergo a form of radioactive decay known as spontaneous fission in which the parent nucleus splits into a pair of smaller nuclei. The reaction is usually accompanied by the ejection of one or more neutrons.
For all but the very heaviest isotopes, spontaneous fission is a very slow reaction. Spontaneous fission of 238U, for example, is almost two million times slower than the rate at which this nuclide undergoes α-decay.
Practice Problem 3:
Predict the products of the following nuclear reactions:
(a) electron emission by 14C (b) positron emission by 8B
(c) electron capture by 125I (d) alpha emission by 210Rn
(e) gamma-ray emission by 56mNi
In 1934 Enrico Fermi proposed a theory that explained the three forms of beta decay. He argued that a neutron could decay to form a proton by emitting an electron. A proton, on the other hand, could be transformed into a neutron by two pathways. It can capture an electron or it can emit a positron. Electron emission therefore leads to an increase in the atomic number of the nucleus.
Both electron capture and positron emission, on the other hand, result in a decrease in the atomic number of the nucleus.
A plot of the number of neutrons versus the number of protons for all of the stable naturally occurring isotopes is shown in the figure below. Several conclusions can be drawn from this plot.
A graph of the number of neutrons versus the number of protons for all stable naturally occurring nuclei. Nuclei that lie to the right of this band of stability are neutron-poor; nuclei to the left of the band are neutron-rich. The solid line represents a neutron-to-proton ratio of 1:1.
- The stable nuclides lie in a very narrow band of neutron-to-proton ratios.
- The ratio of neutrons to protons in stable nuclides gradually increases as the number of protons in the nucleus increases.
- Light nuclides, such as 12C, contain about the same number of neutrons and protons. Heavy nuclides, such as 238U, contain up to 1.6 times as many neutrons as protons.
- There are no stable nuclides with atomic numbers larger than 83.
- This narrow band of stable nuclei is surrounded by a sea of instability.
- Nuclei that lie above this line have too many neutrons and are therefore neutron-rich.
- Nuclei that lie below this line don't have enough neutrons and are therefore neutron-poor.
The most likely mode of decay for a neutron-rich nucleus is one that converts a neutron into a proton. Every neutron-rich radioactive isotope with an atomic number smaller than 83 decays by electron (β⁻) emission. 14C, 32P, and 35S, for example, are all neutron-rich nuclei that decay by the emission of an electron.
Neutron-poor nuclides decay by modes that convert a proton into a neutron. Neutron-poor nuclides with atomic numbers less than 83 tend to decay by either electron capture or positron emission. Many of these nuclides decay by both routes, but positron emission is more often observed in the lighter nuclides, such as 22Na.
Electron capture is more common among heavier nuclides, such as 125I, because the 1s electrons are held closer to the nucleus of an atom as the charge on the nucleus increases.
A third mode of decay is observed in neutron-poor nuclides that have atomic numbers larger than 83. Although it is not obvious at first, α-decay increases the ratio of neutrons to protons. Consider what happens during the α-decay of 238U, for example.
The parent nuclide (238U) in this reaction has 92 protons and 146 neutrons, which means that the neutron-to-proton ratio is 1.587. The daughter nuclide (234Th) has 90 protons and 144 neutrons, so its neutron-to-proton ratio is 1.600. The daughter nuclide is therefore slightly less likely to be neutron-poor, as shown in the figure below.
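A minimal sketch of this ratio arithmetic, using only the proton and neutron counts given in the text:

```python
# Neutron-to-proton ratios for the alpha decay 238U -> 234Th + 4He, as quoted in the text.
parent   = {"name": "U-238",  "protons": 92, "neutrons": 146}
daughter = {"name": "Th-234", "protons": 90, "neutrons": 144}

for nuclide in (parent, daughter):
    ratio = nuclide["neutrons"] / nuclide["protons"]
    print(f'{nuclide["name"]}: n/p = {ratio:.3f}')
# U-238:  n/p = 1.587
# Th-234: n/p = 1.600 (the ratio rises, so alpha decay moves a neutron-poor heavy nucleus
# back toward the band of stability)
```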
Practice Problem 4:
Predict the most likely modes of decay and the products of decay of the following nuclides:
(a) 17F (b) 105Ag (c) 185Ta
We should be able to predict the mass of an atom from the masses of the subatomic particles it contains. A helium atom, for example, contains two protons, two neutrons, and two electrons.
The mass of a helium atom should be 4.0329802 amu.
2 protons: 2(1.0072765 amu) = 2.0145530 amu
2 neutrons: 2(1.0086650 amu) = 2.0173300 amu
2 electrons: 2(0.0005486 amu) = 0.0010972 amu
Total mass = 4.0329802 amu
When the mass of a helium atom is measured, we find that the experimental value is smaller than the predicted mass by 0.0303769 amu.
Predicted mass = 4.0329802 amu
Observed mass = 4.0026033 amu
Mass defect = 0.0303769 amu
The difference between the mass of an atom and the sum of the masses of its protons, neutrons, and electrons is called the mass defect. The mass defect of an atom reflects the stability of the nucleus. It is equal to the energy released when the nucleus is formed from its protons and neutrons. The mass defect is therefore also known as the binding energy of the nucleus.
The binding energy serves the same function for nuclear reactions as ΔH for a chemical reaction. It measures the difference between the stability of the products of the reaction and the starting materials. The larger the binding energy, the more stable the nucleus. The binding energy can also be viewed as the amount of energy it would take to rip the nucleus apart to form isolated neutrons and protons. It is therefore literally the energy that binds together the neutrons and protons in the nucleus.
The binding energy of a nuclide can be calculated from its mass defect with Einstein's equation that relates mass and energy.
E = mc²
We found the mass defect of He to be 0.0303769 amu. To obtain the binding energy in units of joules, we must convert the mass defect from atomic mass units to kilograms.
Multiplying the mass defect in kilograms by the square of the speed of light in units of meters per second gives a binding energy for a single helium atom of 4.53358 × 10⁻¹² joules.
Multiplying the result of this calculation by the number of atoms in a mole gives a binding energy for helium of 2.730 × 10¹² joules per mole, or 2.730 billion kilojoules per mole.
This calculation helps us understand the fascination of nuclear reactions. The energy released when natural gas is burned is about 800 kJ/mol. The synthesis of a mole of helium releases 3.4 million times as much energy.
Since most nuclear reactions are carried out on very small samples of material, the mole is not a reasonable basis of measurement. Binding energies are usually expressed in units of electron volts (eV) or million electron volts (MeV) per atom.
The binding energy of helium is 28.3 × 10⁶ eV/atom, or 28.3 MeV/atom.
Calculations of the binding energy can be simplified by using the following conversion factor between the mass defect in atomic mass units and the binding energy in million electron volts.
1 amu = 931.5016 MeV
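The whole helium calculation above can be strung together in a short script. The amu-to-kilogram factor, the speed of light, and Avogadro's number are standard constants not quoted in the text; everything else comes from the numbers above:

```python
# Reproduce the helium-4 binding energy worked out above.
m_proton, m_neutron, m_electron = 1.0072765, 1.0086650, 0.0005486   # amu (from the text)
observed_mass = 4.0026033                                           # amu (from the text)

predicted_mass = 2 * m_proton + 2 * m_neutron + 2 * m_electron      # 4.0329802 amu
mass_defect = predicted_mass - observed_mass                        # 0.0303769 amu

AMU_TO_KG  = 1.66054e-27    # kg per amu (standard constant, not given in the text)
C          = 2.9979e8       # speed of light in m/s
AVOGADRO   = 6.022e23       # atoms per mole
AMU_TO_MEV = 931.5016       # conversion factor quoted in the text

binding_energy_J = mass_defect * AMU_TO_KG * C**2
print(f"mass defect    = {mass_defect:.7f} amu")
print(f"binding energy = {binding_energy_J:.4e} J/atom")
print(f"               = {binding_energy_J * AVOGADRO:.3e} J/mol")
print(f"               = {mass_defect * AMU_TO_MEV:.1f} MeV/atom")
# about 0.0303769 amu, 4.53e-12 J/atom, 2.73e12 J/mol, and 28.3 MeV/atom
```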
Practice Problem 5:
Calculate the binding energy of 235U if the mass of this nuclide is 235.0349 amu.
Binding energies gradually increase with atomic number, although they tend to level off near the end of the periodic table. A more useful quantity is obtained by dividing the binding energy for a nuclide by the total number of protons and neutrons it contains. This quantity is known as the binding energy per nucleon.
The binding energy per nucleon ranges from about 7.5 to 8.8 MeV for most nuclei, as shown in the figure below. It reaches a maximum, however, at an atomic mass of about 60 amu. The largest binding energy per nucleon is observed for 56Fe, which is the most stable nuclide in the periodic table.
The graph of binding energy per nucleon versus atomic mass explains why energy is released when relatively small nuclei combine to form larger nuclei in fusion reactions.
It also explains why energy is released when relatively heavy nuclei split apart in fission (literally, "to split or cleave") reactions.
There are a number of small irregularities in the binding energy curve at the low end of the mass spectrum, as shown in the figure below. The 4He nucleus, for example, is much more stable than its nearest neighbors. The unusual stability of the 4He nucleus explains why α-particle decay is usually much faster than the spontaneous fission of a nuclide into two large fragments.
Radioactive nuclei decay by first-order kinetics. The rate of radioactive decay is therefore the product of a rate constant (k) times the number of atoms of the isotope in the sample (N).
Rate = kN
The rate of radioactive decay doesn't depend on the chemical state of the isotope. The rate of decay of 238U, for example, is exactly the same in uranium metal and uranium hexafluoride, or any other compound of this element.
The rate at which a radioactive isotope decays is called the activity of the isotope. The most common unit of activity is the curie (Ci), which was originally defined as the number of disintegrations per second in 1 gram of 226Ra. The curie is now defined as the amount of radioactive isotope necessary to achieve an activity of 3.700 × 10¹⁰ disintegrations per second.
Practice Problem 6:
The most abundant isotope of uranium is 238U; 99.276% of the atoms in a sample of uranium are 238U. Calculate the activity of the 238U in 1 L of a 1.00 M solution of the uranyl ion, UO₂²⁺. Assume that the rate constant for the decay of this isotope is 4.87 × 10⁻¹⁸ disintegrations per second.
The relative rates at which radioactive nuclei decay can be expressed in terms of either the rate constants for the decay or the half-lives of the nuclei. We can conclude that 14C decays more rapidly than 238U, for example, by noting that the rate constant for the decay of 14C is much larger than that for 238U.
14C: k = 1.210 × 10⁻⁴ y⁻¹
238U: k = 1.54 × 10⁻¹⁰ y⁻¹
We can reach the same conclusion by noting that the half-life for the decay of 14C is much shorter than that for 238U.
14C: t1/2 = 5730 y
238U: t1/2 = 4.51 × 10⁹ y
The half-life for the decay of a radioactive nuclide is the length of time it takes for exactly half of the nuclei in the sample to decay. In our discussion of the kinetics of chemical reactions, we concluded that the half-life of a first-order process is inversely proportional to the rate constant for this process.
Practice Problem 7:
Calculate the fraction of 14C that remains in a sample after eight half-lives.
The half-life of a nuclide can be used to estimate the amount of a radioactive isotope left after a given number of half-lives. For more complex calculations, it is easier to convert the half-life of the nuclide into a rate constant and then use the integrated form of the first-order rate law described in the kinetic section.
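A small sketch of that conversion, using the 14C half-life quoted above; it reproduces the rate constant of about 1.21 × 10⁻⁴ per year from the table and the roughly 0.4% remaining after eight half-lives that comes up in the radiocarbon discussion below:

```python
import math

def rate_constant(half_life):
    """First-order decay: k = ln(2) / t(1/2)."""
    return math.log(2) / half_life

def fraction_remaining(k, t):
    """Integrated first-order rate law: N/N0 = exp(-k t)."""
    return math.exp(-k * t)

k_c14 = rate_constant(5730.0)                       # per year, from the 14C half-life above
print(f"k(14C) = {k_c14:.3e} per year")             # about 1.210e-04, matching the table
fraction = fraction_remaining(k_c14, 8 * 5730.0)    # eight half-lives
print(f"after 8 half-lives: {fraction * 100:.2f}% remains")   # about 0.39%
```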
Practice Problem 8:
How long would it take for a sample of 222Rn that weighs 0.750 g to decay to 0.100 g? Assume a half-life for 222Rn of 3.823 days.
The earth is constantly bombarded by cosmic rays emitted by the sun. The total energy received in the form of cosmic rays is small, no more than the energy received by the planet from starlight. But the energy of a single cosmic ray is very large, on the order of several billion electron volts (0.200 million million kJ/mol). These highly energetic rays react with atoms in the atmosphere to produce neutrons that then react with nitrogen atoms in the atmosphere to produce 14C.
The 14C formed in this reaction is a neutron-rich nuclide that decays by electron emission with a half-life of 5730 years.
Just after World War II, Willard F. Libby proposed a way to use these reactions to estimate the age of carbon-containing substances. The 14C dating technique for which Libby received the Nobel prize was based on the following assumptions.
- 14C is produced in the atmosphere at a more or less constant rate.
- Carbon atoms circulate between the atmosphere, the oceans, and living organisms at a rate very much faster than they decay. As a result, there is a constant concentration of 14C in all living things.
- After death, organisms no longer pick up 14C.
Thus, by comparing the activity of a sample with the activity of living tissue we can estimate how long it has been since the organism died.
The natural abundance of 14C is about 1 part in 10¹², and the average activity of living tissue is 15.3 disintegrations per minute per gram of carbon. Samples used for 14C dating can include charcoal, wood, cloth, paper, sea shells, limestone, flesh, hair, soil, peat, and bone. Since most iron samples also contain carbon, it is possible to estimate the time since iron was last fired by analyzing for 14C.
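Solving the integrated rate law for time gives the age directly. This is a sketch only; the sample activity of 7.65 disintegrations per minute per gram is a made-up value chosen to be exactly half the living-tissue activity quoted above:

```python
import math

K_C14    = math.log(2) / 5730.0   # per year, from the 14C half-life
A_LIVING = 15.3                   # disintegrations per minute per gram (from the text)

def radiocarbon_age(sample_activity):
    """Age in years from t = ln(A0 / A) / k, the first-order rate law solved for time."""
    return math.log(A_LIVING / sample_activity) / K_C14

# Hypothetical sample measured at exactly half the living-tissue activity:
print(f"{radiocarbon_age(7.65):.0f} years")   # about 5730, i.e. one half-life, as expected
```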
Practice Problem 9:
The skin, bones and clothing of an adult female mummy discovered in Chimney Cave, Lake Winnemucca, Nevada, were dated by radiocarbon analysis. How old is this mummy if the sample retains 73.9% of the activity of living tissue?
We now know that one of Libby's assumptions is questionable: The amount of 14C in the atmosphere hasn't been constant with time. Because of changes in solar activity and the earth's magnetic field, it has varied by as much as 5%. More recently, contamination from the burning of fossil fuels and the testing of nuclear weapons has caused significant changes in the amount of radioactive carbon in the atmosphere. Radiocarbon dates are therefore reported in years before the present era (B.P.). By convention, the present era is assumed to begin in 1950, when 14C dating was introduced.
Studies of bristlecone pines allow us to correct for changes in the abundance of 14C with time. These remarkable trees, which grow in the White Mountains of California, can live for up to five thousand years. By studying the 14C activity of samples taken from the annual growth rings in these trees, researchers have developed a calibration curve for 14C dates from the present back to 5145 B.C.
After roughly 45,000 years (eight half-lives), a sample retains only 0.4% of the 14C activity of living tissue. At that point it becomes too old to date by radiocarbon techniques. Other radioactive isotopes can be used to date rocks, soils, or archaeological objects that are much older. Potassium-argon dating, for example, has been used to date samples up to 4.3 billion years old. Naturally occurring potassium contains 0.0118% by weight of the radioactive 40K isotope. This isotope decays to 40Ar with a half-life of 1.3 billion years. The 40Ar produced after a rock crystallizes is trapped in the crystal lattice. It can be released, however, when the rock is melted at temperatures up to 2000°C. By measuring the amount of 40Ar released when the rock is melted and comparing it with the amount of potassium in the sample, the time since the rock crystallized can be determined.
Uniform motion is motion at a constant speed in a straight line. Uniform motion can be described by a few simple equations. The distance s covered by a body moving with velocity v during a time t is given by s = vt. If the velocity is changing, either in direction or magnitude, it is called accelerated motion (see acceleration). Uniformly accelerated motion is motion during which the acceleration remains constant. The average velocity during this time is one half the sum of the initial and final velocities. If a is the acceleration, v0 the original velocity, and vf the final velocity, then the final velocity is given by vf = v0 + at. The distance covered during this time is s = v0t + (1/2)at². In the simplest circular motion the speed is constant but the direction of motion is changing continuously. The acceleration causing this change, known as centripetal acceleration because it is always directed toward the center of the circular path, is given by a = v²/r, where v is the speed and r is the radius of the circle.
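A quick numerical illustration of these formulas; the starting velocity, acceleration, time, speed, and radius are all arbitrary assumed values:

```python
# Assumed case: a body starts at v0 = 2 m/s and accelerates at a = 3 m/s^2 for t = 4 s.
v0, a, t = 2.0, 3.0, 4.0

v_f = v0 + a * t                  # vf = v0 + at
s   = v0 * t + 0.5 * a * t**2     # s = v0*t + (1/2)at^2
print(f"vf = {v_f} m/s, s = {s} m")          # vf = 14.0 m/s, s = 32.0 m

# Centripetal acceleration for uniform circular motion, a = v^2 / r:
v, r = 5.0, 10.0                  # assumed speed (m/s) and radius (m)
print(f"a_c = {v**2 / r} m/s^2")             # 2.5 m/s^2, directed toward the center
```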
The relationship between force and motion was expressed by Sir Isaac Newton in his three laws of motion: (1) a body at rest tends to remain at rest or a body in motion tends to remain in motion at a constant speed in a straight line unless acted on by an outside force, i.e., if the net unbalanced force is zero, then the acceleration is zero; (2) the acceleration a of a mass m by an unbalanced force F is directly proportional to the force and inversely proportional to the mass, or a = F/m; (3) for every action there is an equal and opposite reaction. The third law implies that the total momentum of a system of bodies not acted on by an external force remains constant (see conservation laws, in physics). Newton's laws of motion, together with his law of gravitation, provide a satisfactory basis for the explanation of motion of everyday macroscopic objects under everyday conditions. However, when applied to extremely high speeds or extremely small objects, Newton's laws break down.
Motion at speeds approaching the speed of light must be described by the theory of relativity. The equations derived from the theory of relativity reduce to Newton's when the speed of the object being described is very small compared to that of light. When the motions of extremely small objects (atoms and elementary particles) are described, the wavelike properties of matter must be taken into account (see quantum theory). The theory of relativity also resolves the question of absolute motion. When one speaks of an object as being in motion, such motion is usually in reference to another object which is considered at rest. Although a person sitting in a car is at rest with respect to the car, both are in motion with respect to the earth, and the earth is in motion with respect to the sun and the center of the galaxy. All these motions are relative.
It was once thought that there existed a light-carrying medium, known as the luminiferous ether, which was in a state of absolute rest. Any object in motion with respect to this hypothetical frame of reference would be in absolute motion. The theory of relativity showed, however, that no such medium was necessary and that all motion could be treated as relative.
See J. C. Maxwell, Matter and Motion (1877, repr. 1952).
Motion of a particle moving at a constant speed on a circle. Though the magnitude of the velocity of such an object may be constant, the object is constantly accelerating because its direction is constantly changing. At any given instant its direction is perpendicular to a radius of the circle drawn to the point of location of the object on the circle. The acceleration is strictly a change in direction and is a result of a force directed toward the centre of the circle. This centripetal force causes centripetal acceleration.
Analysis of the time spent in going through the different motions of a job or series of jobs in the evaluation of industrial performance. Such studies were first instituted in offices and factories in the U.S. in the early 20th century. They were widely adopted as a means of improving work methods by subdividing the different operations of a job into measurable elements, and they were in turn used as aids in standardization of work and in checking the efficiency of workers and equipment.
Repetitive back-and-forth movement through a central, or equilibrium, position in which the maximum displacement on one side is equal to the maximum displacement on the other. Each complete vibration takes the same time, the period; the reciprocal of the period is the frequency of vibration. The force that causes the motion is always directed toward the equilibrium position and is directly proportional to the distance from it. A pendulum displays simple harmonic motion; other examples include the electrons in a wire carrying alternating current and the vibrating particles of a medium carrying sound waves.
In astronomy, the actual or apparent motion of a body in a direction opposite to that of the predominant (direct or prograde) motions of similar bodies. Observationally and historically, retrograde motion refers to the apparent reversal of the planets' motion through the stars for several months in each synodic period. This required a complex explanation in Earth-centred models of the universe (see Ptolemy) but was naturally explained in heliocentric models (see Copernican system) by the apparent motion as Earth passed by a planet in its orbit. It is now known that nearly all bodies in the solar system revolve and rotate in the same counterclockwise direction as viewed from a position in space above Earth's North Pole. This common direction probably arose during the formation of the solar nebula. The relatively few objects with clockwise motions (e.g., the rotation of Venus, Uranus, and Pluto) are also described as retrograde.
Apparent motion of a star across the celestial sphere at right angles to the observer's line of sight, generally measured in seconds of arc per year. Any radial motion (toward or away from the observer) is not included. Edmond Halley was the first to detect proper motions; the largest known is that of Barnard's star, about 10 seconds yearly.
Motion that is repeated in equal intervals of time. The time of each interval is the period. Examples of periodic motion include a rocking chair, a bouncing ball, a vibrating guitar string, a swinging pendulum, and a water wave. Seealso simple harmonic motion.
Mathematical formula that describes the motion of a body relative to a given frame of reference, in terms of the position, velocity, or acceleration of the body. In classical mechanics, the basic equation of motion is Newton's second law (see Newton's laws of motion), which relates the force on a body to its mass and acceleration. When the force is described in terms of the time interval over which it is applied, the velocity and position of the body can be derived. Other equations of motion include the position-time equation, the velocity-time equation, and the acceleration-time equation of a moving body.
Sickness caused by contradiction between external data from the eyes and internal cues from the balance centre in the inner ear. For example, in seasickness the inner ear senses the ship's motion, but the eyes see the still cabin. This stimulates stress hormones and accelerates stomach muscle contraction, leading to dizziness, pallor, cold sweat, and nausea and vomiting. Minimizing changes of speed and direction may help, as may reclining, not turning the head, closing the eyes, or focusing on distant objects. Drugs can prevent or relieve motion sickness but may have side effects. Pressing an acupuncture point on the wrist helps some people.
Series of still photographs on film, projected in rapid succession onto a screen. Motion pictures are filmed with a movie camera, which makes rapid exposures of people or objects in motion, and shown with a movie projector, which reproduces sound synchronized with the images. The principal inventors of motion-picture machines were Thomas Alva Edison in the U.S. and the Lumière brothers in France. Film production was centred in France in the early 20th century, but by 1920 the U.S. had become dominant. As directors and stars moved to Hollywood, movie studios expanded, reaching their zenith in the 1930s and '40s, when they also typically owned extensive theatre chains. Moviemaking was marked by a new internationalism in the 1950s and '60s, which also saw the rise of the independent filmmaker. The sophistication of special effects increased greatly from the 1970s. The U.S. film industry, with its immense technical resources, has continued to dominate the world market to the present day. Seealso Columbia Pictures; MGM; Paramount Communications; RKO; United Artists; Warner Brothers.
Change in position of a body relative to another body or with respect to a frame of reference or coordinate system. Motion occurs along a definite path, the nature of which determines the character of the motion. Translational motion occurs if all points in a body have similar paths relative to another body. Rotational motion occurs when any line on a body changes its orientation relative to a line on another body. Motion relative to a moving body, such as motion on a moving train, is called relative motion. Indeed, all motions are relative, but motions relative to the Earth or to any body fixed to the Earth are often assumed to be absolute, as the effects of the Earth's motion are usually negligible. Seealso Brownian motion; periodic motion; simple harmonic motion; simple motion; uniform circular motion.
Relations between the forces acting on a body and the motion of the body, formulated by Isaac Newton. The laws describe only the motion of a body as a whole and are valid only for motions relative to a reference frame. Usually, the reference frame is the Earth. The first law, also called the law of inertia, states that if a body is at rest or moving at constant speed in a straight line, it will continue to do so unless it is acted upon by a force. The second law states that the force F acting on a body is equal to the mass m of the body times its acceleration a, or F = ma. The third law, also called the action-reaction law, states that the actions of two bodies on each other are always equal in magnitude and opposite in direction.
Any of various physical phenomena in which some quantity is constantly undergoing small, random fluctuations. It was named for Robert Brown, who was investigating the fertilization process of flowers in 1827 when he noticed a “rapid oscillatory motion” of microscopic particles within pollen grains suspended in water. He later discovered that similar motions could be seen in smoke or dust particles suspended in air and other fluids. The idea that molecules of a fluid are constantly in motion is a key part of the kinetic theory of gases, developed by James Clerk Maxwell, Ludwig Boltzmann, and Rudolf Clausius (1822–88) to explain heat phenomena.
The artist known as Little Eva was actually Carole King's babysitter, having been introduced to King and husband Gerry Goffin by The Cookies, a local girl group who would also record for the songwriters. Apparently the dance came before the lyrics; Eva was bopping to some music that King was playing at home, and a dance with lyrics was soon born. It was the first release on the new Dimension Records label, whose girl-group hits were mostly penned and produced by Goffin and King.
The Loco-Motion was quickly recorded by British girl group The Vernons Girls and entered the chart the same week as the Little Eva version. The Vernons Girls' version stalled at number 47 in the UK, while the Little Eva version climbed all the way to number 2 on the UK charts. It re-entered the chart some ten years later and almost became a top ten again, peaking at number 11.
The Little Eva version of the song was featured in the 2006 David Lynch film Inland Empire in a sequence involving the recurrent characters of the girl-friends/prostitutes performing the dance routine. The scene has been noted as being particularly surreal, even by the standards of David Lynch movies.
Serbian new wave band Električni Orgazam recorded an album of covers Les Chansones Populaires in 1983. The first single off the release was "Locomotion". Ljubomir Đukić provided the lead vocals. Having left the band, Đukić made a guest appearance on the first band's live album Braćo i sestre ("Brothers and sisters") and that is the only live version of the song the band released.
A different version of the song was originally released by Minogue as her debut single on July 27, 1987 in Australia under the title "Locomotion". After an impromptu performance of the song at an Australian rules football charity event with the cast of the Australian soap opera Neighbours, Minogue was signed to a record deal by Mushroom Records to release the song as a single. The song was a hit in Australia, reaching number one and remaining there for seven weeks. The success of the song in her home country led to her signing a record deal with PWL Records in London and to working with the hit producing team, Stock, Aitken and Waterman.
The music video for "Locomotion" was filmed at Essendon Airport and the ABC studios in Melbourne, Australia. The video for "The Loco-Motion" was created out of footage from the Australian music video.
At the end of 1988, the song was nominated for Best International Single at the Canadian Music Industry Awards.
In late 1988, Minogue travelled to the United States to promote "The Loco-Motion", where she did many interviews and performances on American television.
"The Loco-Motion" debuted at number eighty on the U.S. Billboard Hot 100 and later climbed to number three for two weeks. The song was Minogue's second single to chart in the U.S., but her first to reach the top ten. It remains her biggest hit in the United States. She would not even reach the top ten again until 2002 with the release of "Can't Get You Out Of My Head", which reached number seven on the chart.
In Canada, the song reached number one.
In Australia, the song was released on July 27 1987 and was a huge hit, reaching number one on the AMR singles chart and remaining there for seven weeks. The song set the record as the biggest Australian single of the decade. Throughout Europe and Asia the song also performed well on the music charts, reaching number one in Belgium, Finland, Ireland, Israel, Japan, and South Africa.
The flip-side "I'll Still Be Loving You" is a popular song, and one of the few not released as a single from her huge-selling debut album Kylie.
Chart peak positions:
Australian ARIA Singles Chart: 1
Canada Singles Chart: 1
Eurochart Hot 100: 1
South Africa Singles Chart: 1
Switzerland Singles Chart: 2
UK Singles Chart: 2
Germany Singles Chart: 3
U.S. Billboard Hot 100: 3
France Singles Chart: 5
Belgian Singles Chart: 1
Finland Singles Chart: 1
Hong Kong Singles Chart: 1
U.S. Hot Dance Music/Club Play: 12
U.S. Hot Dance Music/Maxi-Singles Sales: 4
Norway Singles Chart: 3
Italian Singles Chart: 6
Japan Singles Chart: 1
Israel: 1
New Zealand: 8
Sweden: 10
USA Dance Chart: 12
"Motion Vector Detection Apparatus, Motion Vector Detection Method, Image Encoding Apparatus, Image Encoding Method, and Computer Program" in Patent Application Approval Process
Jan 24, 2013; By a News Reporter-Staff News Editor at Politics & Government Week -- A patent application by the inventors Sakamoto, Daisuke... | http://www.reference.com/browse/columbia/motion | 13 |
10 | Third Grade Math
A parent's guide to helping your child with today's math. Learn about the importance of math in your child's life.
These fun worksheets will help your child master fractions and get him on the road to Third Grade math success!
Your child will have a blast doing these fun activities that will boost her Third Grade math skills along the way.
Math Study Help
This two-player game is a fun way to practice multiplication facts! You'll use a pair of dice to determine the numbers you will multiply with. The product determines whether you've hit a single, double, triple, or home run!
Nothing chases the blues away like a trip to the boardwalk. Bring the fun home with this do-it-yourself version that sneaks in some math practice.
Multiplication is a concept that requires a lot of practice. Try replacing the flash cards with this fun card game every once in a while. Shuffle a deck of cards, then create a large spiral by placing them face up...
Factors can be a tricky concept to master, but they are essential to understanding division. Help your child gain confidence with factors while playing this fun card game. Soon he'll be ready to work with larger numbers and fractions!
Starting with mental math basics will give your child the confidence to take on longer, more complex problems. This activity is a great starting point because it is quick, easy and involves only simple addition facts.
Are you tired of worksheets and flashcards? This card game is a fun way to practice addition or subtraction. Compete for the highest score as you flip over cards. Add up your cards until you reach 100 points. The first one there wins!
If you're finding it painful to get your child to practice math lessons learned throughout the school year and her skills are slipping, try this mental and physical multitasking game to get your child back into the swing of things.
Looking for new and fun ways to practice counting coins with your child? Help your child improve his coin-counting skills in these two math games that have him race to reach $1.00.
Engage your third-grader in this version of the classic game and to help her practice using mental math to solve addition and subtraction problems.
Make a kaleidoscope at home to teach your kids about the color spectrum and introduce them to the science of mirrors.
Browse All Third Grade Math
- Multiplication & Division
- Probability & Data
- Rounding & Estimation
| http://www.education.com/topic/third-grade/math/ | 18
18 | Rutherford's Gold Foil Experiment
Rutherford started his scientific career with much success in local schools, leading to a scholarship to Nelson College. After achieving more academic honors at Nelson College, Rutherford moved on to Cambridge University's Cavendish Laboratory. There his mentor, J. J. Thomson, convinced him to study radiation. By 1898 Rutherford was ready to earn a living and sought a job. With Thomson's recommendation, McGill University in Montreal accepted him as a professor of physics. After performing many experiments and making new discoveries at McGill University, Rutherford was awarded the Nobel Prize in Chemistry. In 1907 he succeeded Arthur Schuster at the University of Manchester. He began pursuing alpha particles in 1908. With the help of Geiger he found the number of alpha particles emitted per second by a gram of radium. He was also able to confirm that alpha particles cause a faint but discrete flash when striking a luminescent zinc sulfide screen. These accomplishments are all overshadowed by Rutherford's famous Gold Foil experiment, which revolutionized the atomic model.
This experiment was Rutherford's most notable achievement. It not only disproved Thomson's atomic model but also paved the way for such developments as the atomic bomb and nuclear power. The atomic model he proposed from the findings of his Gold Foil experiment has yet to be disproven. The following paragraphs will explain the significance of the Gold Foil Experiment as well as how the experiment contradicted Thomson's atomic model.
Rutherford began his experiment with the philosophy of trying "any damn fool experiment" on the chance it might work.1 With this in mind he set out to disprove the current atomic model. In 1909 he and his partner, Geiger, decided Ernest Marsden, a student at the University of Manchester, was ready for a real research project.2 The experiment's apparatus consisted of polonium in a lead box emitting alpha particles towards a gold foil. The foil was surrounded by a luminescent zinc sulfide screen to detect where the alpha particles went after contacting the gold atoms. Because of Thomson's atomic model this experiment did not seem worthwhile, for it predicted all the alpha particles would go straight through the foil. However unlikely it may have seemed for the alpha particles to bounce off the gold atoms, they did, leaving Rutherford to say, "It was almost as incredible as if you fired a fifteen-inch shell at a piece of tissue paper and it came back and hit you."
Soon he came up with a new atomic model based on the results of this experiment. Nevertheless, his findings and the new atomic model were mainly ignored by the scientific community at the time.
In spite of the views of other scientists, Rutherford's 1911 atomic model was backed by the scientific proof of his Gold Foil Experiment. When he approached the experiment he respected and agreed with the atomic theory of J. J. Thomson, his friend and mentor. This theory proposed that the electrons were evenly distributed throughout an atom. Since an alpha particle is 8,000 times as heavy as an electron, one electron could not deflect an alpha particle at an obtuse angle. Applying Thomson's model, a passing particle could not hit more than one electron at a time; therefore, all of the alpha particles should have passed straight through the gold foil. This was not the case: a notable few alpha particles reflected off the gold atoms back towards the polonium. Hence the mass of an atom must be condensed in a concentrated core. Otherwise the mass of the alpha particles would be greater than any part of the atom they hit. As Rutherford put it:
"The alpha projectile changed course in a single encounter with a target atom. But for this to occur, the forces of electrical repulsion had to be concentrated in a region of 10⁻¹³ cm, whereas the atom was known to ..."
He went on to say that this meant most of the atom was empty space with a small dense core. Rutherford pondered for much time before announcing in 1911 that he had made a new atomic model: this one with a condensed core (which he named the "nucleus") and electrons orbiting this core. As stated earlier, this new atomic model was not opposed but originally ignored by most of the scientific community.
Rutherford's experiment shows how scientists must never simply accept the current theories and models but rather must constantly put them to new tests and experiments. Rutherford was truly one of the most successful scientists of his time, and yet his most renowned experiment was performed expecting no profound results. Chemists today are still realizing the uses of atomic energy thanks to early findings from scientists such as Rutherford.
| http://www.mannmuseum.com/rutherford-039-s-gold-foil-experiment/ | 21
21 | Formulas for Circumference and Area
The formula for the circumference of a circle is based on the relationship between the circumference and the diameter, which can be demonstrated by rolling a circular disk through one full turn along a flat surface. (See fig. 17-19.)
The distance from the initial position to the final position of the disk in figure 17-19 is approximately 3.14 times as long as the diameter. This ratio of circumference to diameter is designated by the Greek letter pi (π). Thus we have the following equation:
C = πd
Figure 17-18.-Arc, chord, segment, and sector.
Figure 17-19.-Measuring the circumference of a circle.
This formula states that the circumference of a circle is π times the diameter. Notice that it could be written as
C = 2r · π or C = 2πr
since the diameter d is the same as 2r (twice the radius).
Although the value of π is not exactly equal to any of the numerical expressions which are sometimes used for it, the ratio is very close to 3.14. If extreme accuracy is required, 3.1416 is used as an approximate value of π. Many calculations involving π are satisfactory if the fraction 22/7 is used as the value of π.
Practice problems. Calculate the circumference of each of the following circles, using 22/7 as the value of π:
1. Radius = 21 in.
Answers: 1. 132 in.   3. 88 ft   4. 8.8 yd
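As a quick illustration (not part of the original practice set), the first answer above can be reproduced in a few lines of Python, using the fraction 22/7 for π:

    from fractions import Fraction

    def circumference(radius, pi=Fraction(22, 7)):
        # C = 2 * pi * r
        return 2 * pi * radius

    print(circumference(21))  # 132, matching answer 1 exactly because pi is taken as 22/7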
AREA.-The area of a circle is found by multiplying π by the square of the radius. The formula is written as follows:
A = πr²
EXAMPLE: Find the area of a circle whose diameter is 4 ft, using 3.14 as the value of π.
SOLUTION: The radius is one-half the diameter. Therefore, r = 4/2 = 2 ft, and A = πr² = 3.14 × (2)² = 12.56 sq ft.
Practice problems. Find the area of each of the following circles, using 3.14 as the value of π.
1. Radius = 7 in.
Answer: 1. A = 154 sq in.
Circles which have a common center are said to be CONCENTRIC. (See fig. 17-20.)
The area of the ring between the concentric circles in figure 17-20 is calculated as follows:
Figure 17-20.-Concentric circles.
A = πR² - πr² = π(R² - r²)
Notice that the last expression involves the difference of two squares. Factoring, we have
A = π(R + r)(R - r)
Therefore, the area of a ring between two circles is found by multiplying π by the product of the sum and difference of their radii.
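The area formulas lend themselves to the same kind of illustrative Python sketch (again, not part of the original text); here math.pi is used instead of 3.14, so results differ slightly from the worked example:

    import math

    def circle_area(radius):
        # A = pi * r^2
        return math.pi * radius ** 2

    def ring_area(R, r):
        # Area between concentric circles: A = pi * (R + r) * (R - r)
        return math.pi * (R + r) * (R - r)

    print(round(circle_area(2), 2))   # 12.57 sq ft for the 4-ft-diameter example (12.56 with pi = 3.14)
    print(round(ring_area(5, 3), 2))  # 50.27 for an outer radius of 5 and an inner radius of 3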
Practice problems. Find the areas of the rings between the following concentric circles: | http://www.tpub.com/math1/18e.htm | 13
18 | Desertification and other forms of land degradation are serious environmental problems on a global scale, affecting developing and developed countries alike. Space-based observations are essential for studying these phenomena, which cover a wide area and are directly related to food shortages.
Desertification of farmland and rangeland
Brought about by the influence of human activity and natural disasters such as recurring droughts, desertification is thought to have a great impact on food production, rangeland for livestock, and systems for supplying water and energy. Such problems affect the social economies in most arid regions, and particularly in developing nations of Asia, Africa, and South America.
The advance of desertification adversely affects human society in many ways. However, its most direct effect is on the foundation of food production in the form of a depletion of rangeland and farmland.
According to a 1991 report from the United Nations Environment Program (UNEP), the effect of desertification on stock farming is the most widespread, with approximately 3.333 billion hectares of rangeland being affected. This corresponds to 73 percent of the total rangeland distributed in arid regions.
The next most affected region is rainfall-dependent farmland. Desertification affects 226 million hectares of farmland, corresponding to 47 percent of the total rainfall-dependent farmland in arid regions. Of the farmland in arid regions, approximately 43 million hectares are affected by desertification, mainly in the form of salinization.
Desertification has global repercussions
In 1992, the Earth Summit (the United Nations Conference on Environment and Development: UNCED) was held in Rio de Janeiro, Brazil to develop cooperative measures for environmental changes and achieve environmental preservation and sustainable development. This Summit was a large-scale conference attended by 100 leaders and heads of state from 180 participating countries and regions.
Agenda 21, adopted at the Earth Summit, defined desertification as the degradation of land in arid, semiarid, and dry sub-humid areas resulting from various factors including climatic changes and human activity.
According to this definition, desertification is not simply caused by natural factors (large-scale changes in atmospheric cycles and droughts), but also artificial factors.
The artificial factors include overgrazing, over-cultivation, poor irrigation, inadequate management of forests and deforestation practices, and destruction of vegetation. Each of these factors has its own causes. Over-cutting, for example, is probably caused by commercial logging, shifting cultivation, and increasing demand for fuel-wood (to secure firewood for everyday living in developing countries), and by the lack of an adequate legal system regarding environmental preservation.
This problem of over-cutting leading to the destruction of forests is extremely grave in the Asia-Pacific region, and particularly in Indonesia, Malaysia, Thailand, and India.
Naturally, the decline in productivity of lands due to desertification can lead to a scarcity of food and have an adverse effect on living conditions in the affected regions. In serious cases, this problem can threaten the foundation of human existence itself with famine and give rise to environmental refugees and other chaotic social situations. For example, many human lives and much livestock were lost during droughts in the Sahel region on the south edge of the Sahara Desert, which peaked in 1972, 1973, 1983, and 1984. During these periods, environmental refugees appeared over a wide area, causing serious political and social problems.
Japan does not view desertification as a problem that will have much impact on it. However, desertification in such large regions as Asia, Africa, and South America lowers food production, increases poverty in developing nations, influences climatic changes, and impacts diversity of life forms. These problems are gradually transcending the borders of countries and regions to affect the entire world.
Monitoring desertification from space
The advance of desertification is indeed having grave adverse effects on the environment, but we must also be particularly concerned with the wide-ranging impact of desertification on climatic changes. Much information has been gathered on this issue to date. However, we must work on this problem more actively in the future with the aid of satellite observation data. Base data for developing and implementing technology and policies to prevent desertification in the regions will be gathered by regularly monitoring vegetation, soil, soil moisture, and the like over large expanses in arid and semiarid regions and by measuring quantities of changes in this data.
However, surveys of vegetation and soil in regions experiencing desertification are very expensive and difficult for humans to conduct due to the large areas that the surveys must cover and the severity of the natural environment. Therefore, surveys using such advanced technology as remote sensing are indispensable, as they enable us to gather data over large areas simultaneously.
Since satellites orbit the Earth in regular cycles, they are capable of performing regular observations of large regions that people would have great difficulty reaching. It is possible to access data acquired by the LANDSAT Multispectral Scanner (MSS) from as far back as 1972. Accordingly, we can compare current data with data from nearly 30 years ago, enabling us to identify the progression of such environmental changes as desertification of land and changes in land use.
Figure 2 shows a false color composite image of the southwest portion of Chad Lake in Central Africa, observed on December 8, 1972, by LANDSAT MSS. In this image, vegetation appears red, water blue, bare land off-white, and clouds and the like white.
Figure 3 is a false color composite image of the same region observed by OPS aboard JERS-1 on September 28, 1994.
By comparing Figure 2 and Figure 3, the environmental changes occurring over the 22-year span can be seen. We can see how an area of the lake 22 years earlier has transformed into a marsh that is grown over with vegetation. This kind of environmental change is thought to be due not only to weather changes, but also to artificial factors such as the influence of human activity upstream on the river that flows into this lake.
The region of vegetation seen in the more recent image is distributed over a portion that used to be a water region, illustrating the magnitude of environmental change around the lake. However, this is only a simple comparison using images from two time periods. We can identify changes in the vegetation region by calculating special physical quantities such as the Normalized Difference Vegetation Index (NDVI). Visible and near-infrared satellite data is most commonly used to achieve this.
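The NDVI itself is a simple band ratio, NDVI = (NIR - Red) / (NIR + Red), computed pixel by pixel from the red and near-infrared bands. A minimal NumPy sketch of the calculation is given below; the small arrays are placeholders standing in for real co-registered satellite bands:

    import numpy as np

    def ndvi(nir, red):
        # Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        denom = nir + red
        # Guard against division by zero over water or no-data pixels
        return np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)

    red_band = np.array([[0.10, 0.20], [0.30, 0.40]])
    nir_band = np.array([[0.50, 0.40], [0.35, 0.40]])
    print(ndvi(nir_band, red_band))  # values closer to +1 indicate denser vegetation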
By observing visible, near-infrared satellite data and all-weather Synthetic Aperture Radar (SAR) data, which is not affected by weather or time of day, it is possible to regularly observe regions with a high probability of cloud cover. We can also detect changes in the structure and function of ecosystems and regularly update land use maps.
After launching JERS-1 in 1992, NASDA has been observing nearly all land area on the Earth, collecting observation data at multiple time periods for some regions. NASDA has accomplished much from JERS-1 SAR data, including creation of high resolution mosaic images showing the forest distribution in regions of Africa, Southeast Asia, and the Amazon River area.
NDVI was calculated with global cloud-free composite images created from ADEOS OCTS data. There are plans to carry a high-performance SAR such as PALSAR aboard ALOS, and high-resolution optical sensors such as GLI and AVNIR-II aboard ADEOS-II. Acquisition of this satellite data is expected to gradually increase the number of mosaic images available on a global scale. After data of multiple time periods becomes available, we should be able to accumulate many useful data sets for identifying and predicting such environmental changes as desertification. | http://www.eorc.jaxa.jp/en/imgdata/gallery/eenvironment/sr0009.html | 13
13 | In Economics, the word ‘demand’ is used to show the relationship between the prices of a commodity and the amounts of the commodity which consumers want to purchase at those prices.
Definition of Demand:
Hibdon defines, “Demand means the various quantities of goods that would be purchased per time period at different prices in a given market.”
Bober defines, “By demand we mean the various quantities of given commodity or service which consumers would buy in one market in a given period of time at various prices, or at various incomes, or at various prices of related goods.”
Demand for a product implies:
a) desire to acquire it,
b) willingness to pay for it, and
c) ability to pay for it.
All three must be checked to identify and establish demand. For example: A poor man’s desire to stay in a five-star hotel room and his willingness to pay rent for that room is not ‘demand’, because he lacks the necessary purchasing power; it is merely wishful thinking. Similarly, a miser’s desire for, and his ability to pay for, a car is not ‘demand’, because he does not have the necessary willingness to pay for a car. One may also come across a well-established person who possesses both the willingness and the ability to pay for higher education, but who really has no desire to have it: he pays the fees for a regular course, yet does not attend his classes. Thus, in the economic sense, he does not have a ‘demand’ for a higher education degree/diploma.
It should also be noted that the demand for a product (a commodity or a service) has no meaning unless it is stated with specific reference to the time, its price, the prices of its related goods, consumers’ income and tastes, etc. This is because demand, as the term is used in Economics, varies with fluctuations in these factors.
To say that the demand for an Atlas cycle in India is 60,000 is not meaningful unless it is stated in terms of the year, say 1983, when an Atlas cycle’s price was around Rs. 800, competing cycles’ prices were around the same, and a scooter’s price was around Rs. 5,000. In 1984, the demand for an Atlas cycle could be different if any of the above factors happened to be different. For example, instead of the domestic (Indian) market, one may be interested in a foreign market as well. Naturally the demand estimate will be different. Furthermore, it should be noted that a commodity is defined with reference to its particular quality/brand; if its quality/brand changes, it can be deemed another commodity.
To sum up, we can say that the demand for a product is the desire for that product backed by willingness as well as ability to pay for it. It is always defined with reference to a particular time, place, price and given values of other variables on which it depends.
Demand Function and Demand Curve
A demand function is a comprehensive formulation which specifies the factors that influence the demand for the product. What are the factors that affect demand?
Dx = D (Px, Py, Pz, B, W, A, E, T, U)
Here Dx, stands for demand for item x (say, a car)
Px, its own price (of the car)
Py, the price of its substitutes (other brands/models)
Pz, the price of its complements (like petrol)
B, the income (budget) of the purchaser (user/consumer)
W, the wealth of the purchaser
A, the advertisement for the product (car)
E, the price expectation of the user
T, taste or preferences of user
U, all other factors.
Briefly we can state the impact of these determinants, as we observe in normal circumstances:
i) Demand for X is inversely related to its own price. As price rises, the demand tends to fall and vice versa.
ii) The demand for X is also influenced by the prices of goods related to X. For example, if Y is a substitute of X, then as the price of Y goes up, the demand for X also tends to increase, and vice versa. In the same way, if the price of Z, a complement of X, goes down, the demand for X tends to go up.
iii) The demand for X is also sensitive to price expectation of the consumer; but here, much would depend on the psychology of the consumer; there may not be any definite relation.
This is speculative demand. When the price of a share is expected to go up, some people may buy more of it in their attempt to make future gains; others may buy less of it, rather may dispose it off, to make some immediate gain. Thus the price expectation effect on demand is not certain.
iv) The income (budget position) of the consumer is another important influence on demand. As income (real purchasing capacity) goes up, people buy more of ‘normal goods’ and less of ‘inferior goods’. Thus income effect on demand may be positive as well as negative. The demand of a person (or a household) may be influenced not only by the level of his own absolute income, but also by relative income—his income relative to his neighbour’s income and his purchase pattern. Thus a household may demand a new set of furniture, because his neighbour has recently renovated his old set of furniture. This is called ‘demonstration effect’.
v) Past income or accumulated savings out of that income and expected future income, its discounted value along with the present income—permanent and transitory—all together determine the nominal stock of wealth of a person. To this, you may also add his current stock of assets and other forms of physical capital; finally adjust this to price level. The real wealth of the consumer, thus computed, will have an influence on his demand. A person may pool all his resources to construct the ground floor of his house. If he has access to some additional resources, he may then construct the first floor rather than buying a flat. Similarly one who has a color TV (rather than a black-and-white one) may demand a V.C.R./V.C.P. This is regarded as the real wealth effect on demand.
vi) Advertisement also affects demand. It is observed that the sales revenue of a firm increases in response to advertisement up to a point. This is the promotional effect on demand (sales).
vii) Tastes, preferences, and habits of individuals have a decisive influence on their pattern of demand. Sometimes even social pressure (customs, traditions and conventions) exercises a strong influence on demand. These socio-psychological determinants of demand often defy any theoretical construction; they are non-economic and non-market factors, and highly indeterminate. In some cases, the individual reveals his choice (demand) preferences; in some cases, his choice may be strongly ordered. We will revisit these concepts in the next unit.
You may now note that there are various determinants of demand, which may be explicitly taken care of in the form of a demand function. By contrast, a demand curve only considers the price-demand relation, other things (factors) remaining the same. This relationship can be illustrated in the form of a table called demand schedule and the data from the table may be given a diagrammatic representation in the form of a curve. In other words, a generalized demand function is a multivariate function whereas the demand curve is a single variable demand function.
Dx = D(Px)
In slope-intercept form, the demand curve may be stated as
Dx = α + β Px, where α is the intercept term and β the slope, which is negative because of the inverse relationship between Dx and Px.
Suppose β = -0.5 and α = 10.
Then the demand function is: D = 10 - 0.5P | http://www.mbaknol.com/managerial-economics/concept-of-demand-in-managerial-economics/ | 13
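As a purely illustrative sketch (not part of the original article), the linear demand function above can be evaluated in a few lines of Python to show the inverse price-demand relationship:

    def quantity_demanded(price, alpha=10, beta=-0.5):
        # Linear demand curve: D = alpha + beta * P, with beta < 0
        return alpha + beta * price

    for p in [0, 4, 8, 12]:
        print(p, quantity_demanded(p))  # demand falls from 10.0 to 4.0 as price rises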
15 | Creative Debate is a role-playing exercise. Students assume a specific point of view and debate a controversial topic from this perspective. Creative Debates promote both critical thinking and tolerance of opposing views.
Steps to Creative Debate:
Discuss the rules for debate with the class. Have students suggest guidelines. Once a consensus is reached, post the rules for quick reference.
Suggest a topic for debate or allow the students to select a topic. If the topic requires research, allow the students to gather and organize information before the debate.
Divide the class into three groups. Select two groups to participate in the debate. The third group acts as observers. Rearrange the classroom so that opposing groups face one another and the observers sit to the side.
Provide a reading selection that states one of the positions on the debate topic. Assign one group to argue for the selection; the other group argues against.
Each student selects a character from the past or present that represents their position in the debate. (Teachers may want to suggest a list of characters to speed up this process.)
Have each student introduce himself as the character to the class and then argue the topic from the perspective of this character. Encourage students to "act out" the character's personality (speech patterns, mannerisms, etc.).
Each group presents their positions for ten minutes. Allow extra time for rebuttals.
Next, ask the student teams to switch their positions and argue the opposing viewpoint. (Perhaps the group of observers might change places with one of the other groups.) Repeat the debate and rebuttal process.
At the end of the debate, ask students to reflect on their experiences. Raise questions like . . .
Did you find it difficult to argue from both perspectives in the debate?
What did you learn from this experience?
Did your own views and opinions change?
How would you approach a similar debate in the future? | http://www.readingeducator.com/strategies/debate.htm | 13 |
72 | You can use formulas and functions in lists or libraries to calculate data in a variety of ways. By adding a calculated column to a list or library, you can create a formula that includes data from other columns and performs functions to calculate dates and times, to perform mathematical equations, or to manipulate text. For example, on a tasks list, you can use a column to calculate the number of days it takes to complete each task, based on the Start Date and Date Completed columns.
Note This article describes the basic concepts related to using formulas and functions. For specific information about a particular function, see the article about that function.
Formulas are equations that perform calculations on values in a list or library. A formula starts with an equal sign (=). For example, the following formula multiplies 2 by 3 and then adds 5 to the result.
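=5+2*3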
You can use a formula in a calculated column and to calculate default values for a column. A formula can contain functions (function: A prewritten formula that takes a value or values, performs an operation, and returns a value or values. Use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations.), column references, operators (operator: A sign or symbol that specifies the type of calculation to perform within an expression. There are mathematical, comparison, logical, and reference operators.), and constants (constant: A value that is not calculated and, therefore, does not change. For example, the number 210, and the text "Quarterly Earnings" are constants. An expression, or a value resulting from an expression, is not a constant.), as in the following example.
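=PI()*[Result]^2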
- Function: The PI() function returns the value of pi: 3.141592654.
- Reference (or column name): [Result] represents the value in the Result column for the current row.
- Constant: Numbers or text values entered directly into a formula, such as 2.
- Operator: The * (asterisk) operator multiplies, and the ^ (caret) operator raises a number to a power.
A formula might use one or more of the elements from the previous table. Here are some examples of formulas (in order of complexity).
Simple formulas (such as =128+345)
The following formulas contain constants and operators.
=128+345 | Adds 128 and 345
Formulas that contain column references (such as =[Revenue] >[Cost])
The following formulas refer to other columns in the same list or library.
=[Revenue] | Uses the value in the Revenue column.
=[Revenue]*10/100 | 10% of the value in the Revenue column.
=[Revenue] > [Cost] | Returns Yes if the value in the Revenue column is greater than the value in the Cost column.
Formulas that call functions (such as =AVERAGE(1, 2, 3, 4, 5))
The following formulas call built-in functions.
=AVERAGE(1, 2, 3, 4, 5) | Returns the average of a set of values.
=MAX([Q1], [Q2], [Q3], [Q4]) | Returns the largest value in a set of values.
=IF([Cost]>[Revenue], "Not OK", "OK") | Returns Not OK if cost is greater than revenue. Else, returns OK.
=DAY(<date>) | Returns the day part of a date. For a date falling on the 15th of the month, this formula returns the number 15.
Formulas with nested functions (such as =SUM(IF([A]>[B], [A]-[B], 10), [C]))
The following formulas specify one or more functions as function arguments.
=SUM(IF([A]>[B], [A]-[B], 10), [C]) | The IF function returns the difference between the values in columns A and B, or 10. The SUM function adds the return value of the IF function and the value in column C.
=DEGREES(PI()) | The PI function returns the number 3.141592654. The DEGREES function converts a value specified in radians to degrees. This formula returns the value 180.
=ISNUMBER(FIND("BD", [Column1])) | The FIND function searches for the string BD in Column1 and returns the starting position of the string. It returns an error value if the string is not found. The ISNUMBER function returns Yes if the FIND function returned a numeric value. Else, it returns No.
Functions are predefined formulas that perform calculations by using specific values, called arguments, in a particular order, or structure. Functions can be used to perform simple or complex calculations. For example, the following instance of the ROUND function rounds off a number in the Cost column to two decimal places.
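=ROUND([Cost], 2)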
The following vocabulary is helpful when you are learning functions and formulas:
Structure The structure of a function begins with an equal sign (=), followed by the function name, an opening parenthesis, the arguments for the function separated by commas, and a closing parenthesis.
Function name This is the name of a function that is supported by lists or libraries. Each function takes a specific number of arguments, processes them, and returns a value.
Arguments Arguments can be numbers, text, logical values such as True or False, or column references. The argument that you designate must produce a valid value for that argument. Arguments can also be constants, formulas, or other functions.
In certain cases, you may need to use a function as one of the arguments of another function. For example, the following formula uses a nested AVERAGE function and compares the result with the sum of two column values.
Valid returns When a function is used as an argument, it must return the same type of value that the argument uses. For example, if the argument uses Yes or No, then the nested function must return Yes or No. If it doesn't, the list or library displays a #VALUE! error value.
Nesting level limits A formula can contain up to eight levels of nested functions. When Function B is used as an argument in Function A, Function B is a second-level function. In the example above for instance, the SUM function is a second-level function because it is an argument of the AVERAGE function. A function nested within the SUM function would be a third-level function, and so on.
- Lists and libraries do not support the RAND and NOW functions.
- The TODAY and ME functions are not supported in calculated columns but are supported in the default value setting of a column.
Using column references in a formula
A reference identifies a cell in the current row and indicates to a list or library where to search for the values or data that you want to use in a formula. For example, [Cost] references the value in the Cost column in the current row. If the Cost column has the value of 100 for the current row, then =[Cost]*3 returns 300.
With references, you can use the data that is contained in different columns of a list or library in one or more formulas. Columns of the following data types can be referenced in a formula: single line of text, number, currency, date and time, choice, yes/no, and calculated.
You use the display name of the column to reference it in a formula. If the name includes a space or a special character, you must enclose the name in square brackets ([ ]). References are not case-sensitive. For example, you can reference the Unit Price column in a formula as [Unit Price] or [unit price].
- You cannot reference a value in a row other than the current row.
- You cannot reference a value in another list or library.
- You cannot reference the ID of a row for a newly inserted row. The ID does not yet exist when the calculation is performed.
- You cannot reference another column in a formula that creates a default value for a column.
Using constants in a formula
A constant is a value that is not calculated. For example, the date 10/9/2008, the number 210, and the text "Quarterly Earnings" are all constants. Constants can be of the following data types:
- String (Example: =[Last Name] = "Smith")
String constants are enclosed in quotation marks and can include up to 255 characters.
- Number (Example: =[Cost] >= 29.99)
Numeric constants can include decimal places and can be positive or negative.
- Date (Example: =[Date] > DATE(2007,7,1))
Date constants require the use of the DATE(year,month,day) function.
- Boolean (Example: =IF([Cost]>[Revenue], "Loss", "No Loss"))
Yes and No are Boolean constants. You can use them in conditional expressions. In the above example, if Cost is greater than Revenue, the IF function returns Yes, and the formula returns the string "Loss". If Cost is equal to or less than Revenue, the function returns No, and the formula returns the string "No Loss".
Using calculation operators in a formula
Operators specify the type of calculation that you want to perform on the elements of a formula. Lists and libraries support three different types of calculation operators: arithmetic, comparison, and text.
Use the following arithmetic operators to perform basic mathematical operations such as addition, subtraction, or multiplication; to combine numbers; or to produce numeric results.
+ (plus sign) | Addition
– (minus sign) | Subtraction (or negation)
/ (forward slash) | Division
% (percent sign) | Percent (as in 20%)
You can compare two values with the following operators. When two values are compared by using these operators, the result is a logical value of Yes or No.
= (equal sign) | Equal to (A=B)
> (greater than sign) | Greater than (A>B)
< (less than sign) | Less than (A<B)
>= (greater than or equal to sign) | Greater than or equal to (A>=B)
<= (less than or equal to sign) | Less than or equal to (A<=B)
<> (not equal to sign) | Not equal to (A<>B)
Use the ampersand (&) to join, or concatenate, one or more text strings to produce a single piece of text.
& (ampersand) | Connects, or concatenates, two values to produce one continuous text value ("North"&"wind")
Order in which a list or library performs operations in a formula
Formulas calculate values in a specific order. A formula might begin with an equal sign (=). Following the equal sign are the elements to be calculated (the operands), which are separated by calculation operators. Lists and libraries calculate the formula from left to right, according to a specific order for each operator in the formula.
If you combine several operators in a single formula, lists and libraries perform the operations in the order shown in the following table. If a formula contains operators with the same precedence — for example, if a formula contains both a multiplication operator and a division operator — lists and libraries evaluate the operators from left to right.
– | Negation (as in –1)
* and / | Multiplication and division
+ and – | Addition and subtraction
& | Concatenation (connects two strings of text)
= < > <= >= <> | Comparison
Use of parentheses
To change the order of evaluation, enclose in parentheses the part of the formula that is to be calculated first. For example, the following formula produces 11 because a list or library calculates multiplication before addition. The formula multiplies 2 by 3 and then adds 5 to the result.
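=5+2*3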
In contrast, if you use parentheses to change the syntax, the list or library adds 5 and 2 together and then multiplies the result by 3 to produce 21.
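=(5+2)*3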
In the example below, the parentheses around the first part of the formula force the list or library to calculate [Cost]+25 first and then divide the result by the sum of the values in columns EC1 and EC2.
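=([Cost]+25)/SUM([EC1],[EC2])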
| http://office.microsoft.com/en-us/windows-sharepoint-services-help/introduction-to-data-calculations-HA010121588.aspx | 13
16 | Publication of the NOAA Education Team.
Specially for Kids - These items are designed especially for children (grades K-5) and provide fun activities for kids to explore the planet they live on.
- Earth Day Origami Project - The Sun is the source of energy for life on Earth. Put together this origami model of the Sun and learn more about our nearest star.
- NASA Kids - This NASA web site will teach you about astronauts, the Earth, space, rockets, airplanes and more.
Specially for Students - These items are designed especially for students (grades 6-12) to provide a way of learning about the earth in a fun and informative way.
- NOAA Environmental Visualization Lab- Exciting animations, data visualizations, satellite imagery and other visually stunning datasets of Earth. Updated daily.
- Science with NOAA - This web page provides middle school science students with research and investigation experiences using on-line resources. Even if you do not have much experience using web-based activities in science, the directions here are easy to follow. Space topics include solar flares, coronal holes, and solar winds.
- Spuzzled for Kids - This site takes NOAA images and offers students the chance to put those images into the correct order while also learning more about the environmental work of the Agency. There is a spuzzle at three levels of difficulty in this section.
- The Geostationary - This site provides satellite imagery of the eastern continental U.S., the western continental U.S., Puerto Rico, Alaska, and Hawaii. You can also access sea surface temperatures from this site as well as tropical Atlantic and Pacific information. This tropical information is particularly interesting during hurricane season.
- Concentration - A good way to learn about animals is to track them from space. Scientists pick individual animals and fit them with lightweight, comfortable radio transmitters. Signals from the transmitters are received by special instruments on certain satellites as they pass overhead. These satellites are operated by NOAA. The polar orbits of the satellites let them see nearly every part of Earth as it rotates below and receive signals from thousands of migrating animals. After the satellite gets the signal from the animal's transmitter, it relays the information to a ground station. The ground station then sends the information about the animal to the scientists, wherever they may be. Tracking migrating animals using satellites may help us figure out how to make their journeys as safe as possible and help them survive.
- Geostationary Satellites - GOES satellites provide the kind of continuous monitoring necessary for intensive data analysis. They circle the Earth in a geosynchronous orbit, which means they orbit the equatorial plane of the Earth at a speed matching the Earth's rotation. This allows them to hover continuously over one position on the surface. Because they stay above a fixed spot on the surface, they provide a constant vigil for the atmospheric "triggers" for severe weather conditions such as tornadoes, flash floods, hail storms, and hurricanes.
- NOAA-N Information - NOAA satellites are launched by NASA and maintained by NOAA after they are in place. Information about the newest NOAA satellite, NOAA-N, is available on this site. NOAA-N will collect information about Earth's atmosphere and environment to improve weather prediction and climate research across the globe. NOAA-N is the 15th in a series of polar-orbiting satellites dating back to 1978. NOAA uses two satellites, a morning and afternoon satellite, to ensure every part of the Earth is observed at least twice every 12 hours.
| http://www.education.noaa.gov/sspace.html | 13
12 | The determinant of a matrix (written |A|) is a single number that depends on the elements of the matrix A. Determinants exist only for square matrices (i.e., ones where the number of rows equals the number of columns).
Determinants are a basic building block of linear algebra, and are useful for finding areas and volumes of geometric figures, in Cramer's rule, and in many other ways. If the characteristic polynomial splits into linear factors, then the determinant is equal to the product of the eigenvalues of the matrix, counted by their algebraic multiplicities.
A matrix can be used to transform a geometric figure. For example, in the plane, if we have a triangle defined by its vertices (3,3), (5,1), and (1,4), and we wish to transform this triangle into the triangle with vertices (3,-3), (5,-9), and (1,2), we can simply multiply each vertex by a suitable 2x2 matrix (here, the one that maps (x, y) to (x, y - 2x)).
In this transformation, no matter what the shape, position, or area of the initial geometric figure, the final figure will have the same area and orientation as the initial one.
It can be seen that matrix transformations of geometric figures always give resulting figures whose area is proportional to that of the initial figure, and whose orientation is either always the same, or always the reverse.
This ratio is called the determinant of the matrix, and it is positive when the orientation is kept, negative when the orientation is reversed, and zero when the final figure always has zero area.
This two-dimensional concept is easily generalized for any dimensions. In 3D, replace area for volume, and in higher dimensions the analogue concept is called hypervolume.
The determinant of a matrix is the oriented ratio of the hypervolumes of the transformed figure to the source figure.
How to calculate
We need to introduce two notions: the minor and the cofactor of a matrix element. Also, the determinant of a 1x1 matrix equals the sole element of that matrix.
- The minor mij of the element aij of an NxN matrix M is the determinant of the (N-1)x(N-1) matrix formed by removing the ith row and jth column from M.
- The cofactor Cij equals the minor mij multiplied by (-1)^(i+j).
The determinant is then defined to be the sum of the products of the elements of any one row or column with their corresponding cofactors.
For the 2x2 matrix with first row (a, b) and second row (c, d),
the determinant is simply ad-bc (for example, using the above rule on the first row).
For a general 3x3 matrix with entries aij (i, j = 1, 2, 3), we can expand along the first row to find
|A| = a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)
where each of the 2x2 determinants in parentheses is the minor of the corresponding first-row element, as described above.
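As an illustrative sketch (not taken from the original article), the cofactor expansion along the first row translates directly into a short recursive Python function for small matrices:

    def det(m):
        # Determinant by cofactor expansion along the first row.
        # m is a square matrix given as a list of rows.
        n = len(m)
        if n == 1:
            return m[0][0]
        total = 0
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for row in m[1:]]  # delete row 0 and column j
            total += (-1) ** j * m[0][j] * det(minor)
        return total

    print(det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # -3

For large matrices this approach is far too slow; reduction to triangular form, described below, is the practical method.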
Properties of determinants
The following are some useful properties of determinants. Some are useful computational aids for simplifying the algebra needed to calculate a determinant. The first property is that | M | = | MT | where the superscript "T" denotes transposition. Thus, although the following rules refer to the rows of a matrix they apply equally well to the columns.
- The determinant is unchanged by adding a multiple of one row to any other row.
- If two rows are interchanged the sign of the determinant will change
- If a common factor α is factored out from each element of a single row, the determinant is multiplied by that same factor.
- If all the elements of a single row are zero (or can be made to be zero using the above rules) then the determinant is zero.
- | AB | = | A | | B |
In practice, one of the most efficient ways of finding the determinant of a large matrix is to add multiples of rows and/or columns until the matrix is in triangular form, such that all the elements above or below the diagonal are zero.
The determinant of such a matrix is simply the product of the diagonal elements (use the cofactor expansion discussed above and expand down the first column). | http://www.conservapedia.com/Determinant | 13 |
18 | NASA is actively planning to expand human spaceflight and robotic exploration beyond low Earth orbit. To meet this challenge, a capability driven architecture will be developed to transport explorers to multiple destinations that each have their own unique space environments. Future destinations may include the moon, near Earth asteroids, and Mars and its moons.
NASA is preparing to explore these destinations by first conducting analog missions here on Earth. Analog missions are remote field tests in locations that are identified based on their physical similarities to the extreme space environments of a target mission. NASA engineers and scientists work with representatives from other government agencies, academia and industry to gather requirements and develop the technologies necessary to ensure an efficient, effective and sustainable future for human space exploration.
Analog teams test robotic equipment, vehicles, habitats, communications, and power generation and storage. They evaluate mobility, infrastructure, and effectiveness in the harsh environments.
Analogs provide NASA with data about strengths, limitations, and the validity of planned human-robotic exploration operations, and help define ways to combine human and robotic efforts to enhance scientific exploration. Test locations include the Antarctic, oceans, deserts, and arctic and volcanic environments.
Analog missions and field tests include:
Two NEEMO 13 crewmembers participate in an undersea session of extravehicular activity.
NASA's Extreme Environment Mission Operations (NEEMO)
The National Oceanic and Atmospheric Administration's Aquarius Undersea Laboratory is the analog test site for NASA's Extreme Environment Mission Operations, or NEEMO. NASA uses the laboratory's underwater environment to execute a range of analog "spacewalks" or extravehicular activities and to assess equipment for exploration concepts in advanced navigation and communication.
Long-duration NEEMO missions provide astronauts with a realistic approximation of situations they will likely encounter on missions in space and provide an understanding of how to carry out daily operations in a simulated planetary environment.
Inflatable Lunar Habitat
NASA conducted a range of analog tests to evaluate the inflatable lunar habitat. Astronauts may one day live on the moon in something similar to the conceptual housing structure. The tests were conducted at McMurdo Station in the cold, isolated landscape of Antarctica to provide information about the structure’s power consumption and resilience.
The analog test was also used to evaluate how easily a suited astronaut could assemble, pack, and transport the habitat. If selected for future missions, the structure will reduce the amount of hardware and fuel necessary for transportation and logistics on the moon.
In June, 2008, astronauts, engineers and scientists gathered at Moses Lake, Wash., to test spacesuits and rovers.
Moses Lake, WA
Short Distance Mobility Exploration Engineering Evaluation Field Tests—Phase 1
The Short Distance Mobility Exploration Engineering Evaluation analog field tests were designed to measure the benefits of using pressurized vehicles versus unpressurized vehicles and to incorporate the findings into upcoming lunar missions.
The sand dunes of Moses Lake, WA, provide a lunar-like environment of sand dunes, rugged terrain, soil inconsistencies, sandstorms, and temperature swings. Here, NASA tests a newly enhanced extravehicular activity suit and the new line of robotic rovers: ATHLETE Rover, K10, Lunar Truck, Lance Blade and Lunar Manipulator.
Black Point Lava Flow, AZ
Short Distance Mobility Exploration Engineering Evaluation Field Tests—Phase 2
The terrain and size of Black Point Lava Flow provide an environment geologically similar to the lunar surface. It is here that NASA first introduced the Small Pressurized Rover, a conceptual vehicle with an extended range and capability to travel rugged planetary terrain.
The Black Point landscape enables small, pressurized rovers to undertake sorties with ranges that extend greater than 10 kilometers. The sortie tests include a 3-day exploration mission.
Tested in Hawaii in November 2008, ROxygen could produce two thirds of the oxygen needed to sustain a crew of four on the moon.
In-Situ Resource Utilization Demonstrations
The volcanic terrain, rock distribution, and soil composition of Hawaii’s islands provide an ideal simulated environment for testing hardware and operations.
NASA performs analogs to identify a process that uses hardware or employs an operation to harness local resources (in-situ) for use in human and robotic exploration. The demonstrations could help reduce risk to lunar missions by demonstrating technologies for end-to-end oxygen extraction, separation, and storage from the volcanic material and other technologies that could be used to look for water or ice at the lunar poles.
Devon Island, Nunavut, Canada
Haughton Mars Project
The rocky arctic desert setting, geological features, and biological attributes of the Haughton Crater, one of Canada’s uninhabited treasures, provides NASA with an optimal setting to assess requirements for possible future robotic and human missions to Mars.
During the Haughton Mars Project, scientists, engineers, and astronauts perform multiple representative lunar science and exploration surface activities using existing field infrastructure and surface assets. They demonstrate scientific and operational concepts, including extravehicular activity traverses, long-term high-data communication, complex robotic interaction, and onboard rover and suit engineering. | http://www.nasa.gov/exploration/analogs/about.html | 13 |
14 | Climate and Earth's Energy Balance
Part D: Greenhouse Gas Lab
In the previous lab, you read about greenhouse gases and used computer models to investigate the effect of greenhouse gases on temperature. As you recall, greenhouse gases, which include water vapor, carbon dioxide, methane, nitrous oxide, and other man-made gases, are relatively transparent to incoming shortwave (including visible) solar radiation; however, they absorb outgoing long-wave radiation emitted from Earth and the atmosphere, hence their name "greenhouse" gases. In this next hands-on lab activity, you will test the greenhouse potential of two easily acquired greenhouse gas samples: water vapor and carbon dioxide.
Note: this lab takes about 45 minutes to complete.
Materials: Note that the list of materials described below is for one lab team.
- 3-4 (depending on variables to be tested) - Clear plastic water bottles with hole drilled into cap. The bottles should all be the same type and size (approximately 1-liter). The bottles should be transparent plastic (remove any labels), black bases are acceptable. The bottles need to have tightly fitting screw-on tops. Seltzer water bottles are a good type of bottle. (Recommend: 1 bottle for every variable).
- 4 - Thermometers (analogue, digital or digital recording; one for each bottle). Inexpensive digital probes, such as the ones pictured here, can be purchased online. Vernier and Pasco, as well as other companies, sell excellent probes for this type of investigation.
- A clock or watch showing minutes and seconds.
- 150 ml (2/3rd cup) of Vinegar
- 250 ml (1 cup) of Baking Soda
- 3 - Sponge pieces of equal dimensions; saturated with water for one bottle, left dry for the other two. Kitchen sponges can be cut into 3 pieces to fit the bottles.
Indoors: Light source (clamp lamp or goose neck) and bulb (standard incandescent or directed spot; one setup for each bottle).
Outdoors: Sheltered area with direct sunlight for duration of lab. A means for shading the bottles for the second half of the lab. A cardboard piece folded in half and placed tent-like over the bottles works well. Or you can plan to move the bottles into the classroom for the cooling phase.
Note: 1-2 Alka-Seltzer tablets, dissolved in water, can be used instead of the vinegar and baking soda. If using this method, be sure to add an equal volume of water to the bottle with the water vapor and sponge. Another method for acquiring CO2 is to sublimate dry ice, or to use a CO2 canister, such as one for filling a bicycle tire. In both of these cases, let the CO2 warm to the same temperature as the other bottles before beginning the experiment.
- Assemble all materials and select a sheltered site (out of the wind) to work if working out-of-doors.
- Pre-drill holes in the caps of the bottles. Make sure you have enough bottles and caps for the entire class.
- Insert temperature probes and seal the holes with modeling clay, hot glue, or silicone sealant.
- Cut sponge pieces to size (each should cover half of the bottle bottom) and insert in bottles. Leave the lids off the bottles and let the sponge pieces dry for the dry air and CO2 bottles.
- Prepare the gases for the bottles before class using the instructions below. Seal the bottles after preparing them.
- Preview the lab methods with the students so that you are ready to go at the start of the class period.
Preparation of Bottles: For the bottle with air: Be sure that the air is dry; use a warm hair dryer to heat the air for a few minutes, then just tighten the cap. Let the bottle cool before the lab.
For the bottle with saturated air: Pour small amount of water in the bottle to saturate the piece of saturated sponge in the bottom of the bottle. Make certain the sponge is wet but there is no extra water in the bottle. Seal the bottle.
For the bottle with carbon dioxide and for pouring the gas into the bottle: Carbon dioxide can be easily made with baking soda and vinegar. Vinegar (acetic acid, CH3COOH) and baking soda (sodium bicarbonate, NaHCO3) produce an acid-base reaction when they come in contact with one another. The fizzing and bubbling indicates that a gas (CO2) is being produced.
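The overall reaction is NaHCO3 + CH3COOH → CH3COONa + H2O + CO2; the sodium acetate and water remain in the liquid while the CO2 gas bubbles out.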
Pour 30 ml (about 1 ounce) of vinegar into an extra plastic bottle or tall beaker. Spoon in ½ tsp of baking soda. Allow the reaction to bubble and fizz without disturbing it. When the fizzing is over, carefully pour the CO2 into the bottle. [Adding more vinegar and baking soda will just make the reaction bubble excessively, and the CO2 will tend to bubble over the beaker and you won't be able to get it into the bottle.] BE CERTAIN NOT TO POUR ANY LIQUID INTO THE BOTTLE! Repeat this process two more times. Put the cap with the probe on the bottle. Another way to produce CO2 is to dissolve an Alka-Seltzer tablet in the bottle. If using this method, you will need to put an equal amount of water in the bottle with water.
Note: CO2 gas is more dense than air. It will stay in the beaker, forcing out the air. Although you can't see it, you can pour CO2 gas out of the beaker just like you would pour a liquid. By way of teacher demonstration, a match can be lit and placed down into the gas. The match will be extinguished, showing that the oxygen in the air has now been forced out, replaced by the carbon dioxide. Students can also feel the CO2 being poured out of the beaker because it's cold (similar to cold carbon dioxide gas coming out of a fire extinguisher). Because the reaction of baking soda and vinegar is "endothermic," it absorbs energy and leaves the liquid products cold; care should be taken not to introduce any of the liquid into the bottle, as it would keep the temperature in the bottle depressed.
Lab activity instructions:
- If possible, divide up members of your class into several different groups (one for each bottle containing a different gas). Each group of 2-3 students will have one bottle into which one of the gases (regular air, water-saturated air, CO2) has been placed. (For the saturated air and carbon dioxide, see above.)
- For each bottle, prepare a data chart with three columns: time, temperature, and notes. You will need to collect data for 25 minutes, once every minute. Be sure to locate a stopwatch or other clock showing minutes.
- Record the starting "room" temperature by holding the control temperature probe in the air for 1 minute. Record this temperature. NOTE: Do not set the probe on the desk. If you do, you'll be recording the desk temperature. Also, don't hold the tip of the probe because then you'll be taking your temperature.
- Place all the bottles at a designated distance from the light source (Recommend: 10-25 cm (4-10 inches) away from the light source, if using an incandescent light source. If working outdoors, place the bottles all in direct sunlight; be aware of shadows.)
- Either plug in the lamp and turn it on, or move the bottles into the sunlight. Immediately start collecting and recording temperature on a data chart. Continue to do so every minute for 15 minutes. After 15 minutes, turn the light off, or move the bottles into the shade, and continue recording the temperature for an additional 10 minutes. Safety Note: Be careful around the hot lamp.
- Plot the data you collected in step 5. Plot temperature on the (Y) axis and time on the (X) axis. Label your axes. If sharing data with the class, agree on a scale before plotting the data. If you have access to Excel or another graphing program, you can chart the graph electronically. Share the data / graphs so that each team has all of the data sets.
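If Python is available, a minimal plotting sketch along the following lines could be used instead of a spreadsheet; the temperature lists here are placeholders standing in for your own one-minute readings:

    import matplotlib.pyplot as plt

    minutes = list(range(26))  # readings at 0-25 minutes
    dry_air = [22.0 + 0.3 * t if t <= 15 else 26.5 - 0.2 * (t - 15) for t in minutes]  # placeholder data
    co2     = [22.0 + 0.4 * t if t <= 15 else 28.0 - 0.2 * (t - 15) for t in minutes]  # placeholder data

    plt.plot(minutes, dry_air, label="Dry air")
    plt.plot(minutes, co2, label="CO2")
    plt.axvline(15, linestyle="--", color="gray")  # light turned off / bottles shaded at 15 minutes
    plt.xlabel("Time (minutes)")
    plt.ylabel("Temperature (deg C)")
    plt.legend()
    plt.show()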
Stop and Think
Analyze your data. Consider the following questions:
- Describe the general trends that you see in the temperature over time.
- Did one gas warm more quickly than the others? Was the increase in temperature gradual or were there changes in the slope of the line?
- Which gas had the greatest change in temperature while heating?
- How did the cooling of the gases compare to the warming? Which gas appeared to hold the heat the longest of the three that you tested?
- Recall that temperature is a measure of kinetic energy of molecules. Explain, in terms of kinetic energy, why the bottles remained warm after the light source was turned off or the bottles were shaded.
- How does the composition of the gases in the bottles differ from the composition of gases naturally found in the atmosphere?
- If you increased the concentration of CO2 in the bottle, how might this affect the temperature trend in the lab?
- How do greenhouse gases affect the Earth's radiation balance? | http://serc.carleton.edu/eslabs/weather/2d.html | 13
13 | The Trojan asteroid travels with the Earth as they both orbit the sun. (Paul Wiegert / University…)
Turns out the moon's not the Earth's only traveling companion. Space scientists have discovered an asteroid that's been following our fair planet for thousands of years, at least — and there may be many more where it came from, according to a recent study.
If other so-called Trojan asteroids are found, they could turn out to be ideal candidates for a visit from astronauts, something NASA hopes will be possible within the next 15 years.
Most of the asteroids in the solar system populate the belt of rocky debris between Mars and Jupiter. But planets can pull asteroids into their orbits, too. More than 4,000 Trojan asteroids have been discovered around the gas giant Jupiter, along with a few around Neptune and Mars.
But no such asteroid had ever been found near Earth. That led some scientists to believe that our planet lacked an entourage.
But others proposed a different explanation: Perhaps there were Trojan asteroids in Earth's orbit around the sun, but they were simply hidden from view.
The problem was this: In order for an asteroid to attain a stable position in a planet's orbit, it must find the spot where the gravitational pull of the planet and that of the sun cancel each other out. Two of these spots, called Lagrangian points, lie along a planet's orbit — one ahead of the planet and one behind it. Drawing straight lines between the Earth, the sun and a Lagrangian point produces a triangle whose sides are equal in length. An asteroid there would hover in the sky at a 60-degree angle from the sun.
Any object that close to the sun would be difficult to see from Earth because it would be overhead mostly during broad daylight, as invisible as the stars.
But Martin Connors, a space scientist at Athabasca University in Alberta, Canada, had an idea. Maybe NASA's Wide-Field Infrared Survey Explorer, which aims its lens 90 degrees away from the sun, would be able to pick up an oddball Trojan with an eccentric orbit.
Indeed it did. Connors found one candidate whose strange path over six days in late 2010 seemed to match the unevenly elongated orbit typical of Trojans. His team confirmed the Trojan's identity by spotting it a few months later with another telescope in Hawaii.
"This is pretty cool," said Amy Mainzer, a scientist at the Jet Propulsion Laboratory who wasn't involved in the study, which was published online Wednesday by the journal Nature. "It's a new class of near-Earth object that's been hypothesized to exist."
And if more Trojan asteroids can be found, researchers said, they could be ideal for astronaut visits and the mining of precious resources. (This particular asteroid is too tilted with respect to the solar system to make a good candidate, Mainzer said.)
Stuffed into a forgotten closet in the sky, such relics could also give scientists a fresh glimpse into the early formation of the solar system. | http://articles.latimes.com/2011/jul/28/science/la-sci-trojan-asteroid-20110728 | 13 |
10 | Number the Sides
Why do this problem?
This problem is a good introduction to the numerical aspects of similar triangles. It also brings in ratio, and uses multiplication and division.
You might suggest that children have a go at the simpler linked problem before they try this one, which would offer a good basic introduction to similar triangles and might provoke some interesting discussion amongst the class.
This problem would be best introduced to the whole group at first. You could simply show them the first three triangles and ask them what they think the missing length is. Invite children to explain to everyone how they worked out their response. Listening to different ways of articulating the thought processes will help those who are not so sure find an explanation which they can make their own. The next step might be to show the group the same set of triangles but with the third triangle in a different orientation as in the second image. This will challenge them a little at first but makes a good lead into the main activity.
You could print off copies of the accompanying sheet, which has all the sets of triangles on it, for the children to use.
Would it help to write out the lengths of the sides of each triangle in a set?
Why don't you compare the shortest side of the first triangle with the shortest side of the third triangle?
How about comparing the two "middle length" sides?
Can you use this to work out what the longest side length is in the third triangle?
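The scale-factor reasoning these questions point toward can also be sketched in code. This is an illustrative aside, not part of the original teachers' notes, and the 3-4-5 side lengths below are made up rather than taken from the problem sheet.

```python
def similar_triangle_sides(known_sides, partial_sides):
    """Complete the side lengths of a triangle similar to a known one.

    known_sides:   the three sides of the first triangle, shortest to longest
    partial_sides: the corresponding sides of the similar triangle,
                   with None for each unknown length
    """
    # The scale factor comes from any pair of corresponding known sides.
    scale = next(p / k for k, p in zip(known_sides, partial_sides)
                 if p is not None)
    return [k * scale for k in known_sides]

# Made-up example: first triangle 3, 4, 5; the similar triangle's shortest
# side is 9, so the scale factor is 3 and its longest side must be 15.
print(similar_triangle_sides([3, 4, 5], [9, None, None]))  # [9.0, 12.0, 15.0]
```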
Learners could draw their own sets of similar triangles.
Suggest using the interactivity in this simpler problem, which introduces similar triangles.
Addition & subtraction
Factors and multiples
Trial and improvement
Multiplication & division | http://nrich.maths.org/5639/note | 13
11 | Laboratory experiences are essential for students in many science courses. Students with disabilities will need to have access to the physical facility, equipment, materials, safety devices and other services. Access issues for students with disabilities vary considerably depending on the subject, the physical facility, and the needs of each student. For example, a student who is blind will be unable to use standard measurement equipment used in a chemistry or physics laboratory. A student with limited use of her hands may have difficulty manipulating lab tools and materials. A student who uses a wheelchair may be unable to access lab tables and computers, or maneuver in a crowded laboratory. Solutions to access barriers will vary considerably among individual students and the laboratory activities. Each student is the best source of information about his needs.
Working closely with a lab partner or assistant can facilitate involvement in the lab activity for some students with disabilities. For example, a student who is blind could enter observation data into the computer while his partner describes the lab findings. Or, a student with limited dexterity in her hands and fingers could dictate instructions and procedures to her partner who manipulates equipment and materials and carries out the measurement process.
Allowing the student extra time to set up a lab or complete the work can also provide an effective accommodation for some students with disabilities. This may allow more time to focus on procedures and results and eliminate the stress that may result from time constraints.
To assure safety, provide a thorough lab orientation and make any necessary adjustments to procedures, depending on the specific disability. Have a plan established that may involve moving equipment, placing the student in a specific location in the room, or involving another student as a backup in case of emergency.
The following paragraphs describe typical accommodation strategies for specific disabilities - blindness, low vision, mobility impairments, hearing impairments, learning disabilities, health impairments, and mental health or psychiatric impairments.
Following are examples of accommodations in science labs that can be used to maximize the participation of students who are blind:
- Include tactile drawings or graphs, three-dimensional models, and a lot of hands-on learning.
- Use a glue gun to make raised line drawings.
- Make a tactile syringe by cutting notches in the plunger at 5 ml. increments.
- Make a tactile triple beam balance by filing deep notches for each gram increment. Add glue drops on either side of the balance line so that the student will know when the weights are balanced.
- Create Braille labels with Dymo Labelers.
- Identify increments of temperature on stove using fabric paint.
- Use different textures such as sandpaper or yarn to identify drawers, cabinets, and equipment areas.
- Place staples on a meter stick to label centimeters.
- Use 3-D triangles or spheres to describe geometric shapes.
- Use Styrofoam and toothpicks or molecular kits to exemplify atoms or molecules.
- When measuring liquids, have glassware with specific measurements or make a tactile graduated cylinder.
- Use talking thermometers and calculators, light probes, and tactile timers.
- Implement auditory lab warning signals.
- Use clear verbal descriptions of demonstrations or visual aids.
For more information about students with blindness, consult the Blindness section of this website.
Following are typical accommodations in science labs that can be used to maximize the participation of students who have low vision:
- Create large-print instructions.
- Use large-print reading materials that include laboratory signs and equipment labels.
- Enlarge images by connecting TV monitors to microscopes.
- Use raised line drawings or tactile models for illustrations or maps.
- Verbally describe visual aids.
For more information about students with low vision, consult the Low Vision section of this website.
The following are typical accommodations in science labs that can be used to maximize the participation of students who have mobility impairments.
Basic requirements for a laboratory work station for a student in a wheelchair include:
- Work surfaces 30 inches from the floor.
- 29-inch clearance beneath the top to a depth of at least 20 inches, and a minimum width of 36 inches to allow leg space for the seated individual.
- Utility and equipment controls within easy reach for a wheelchair user.
- Clear aisle width of 42 to 48 inches sufficient to maneuver a wheelchair.
Additional accommodations and guidelines to enhance lab accessibility for students with mobility impairments include:
- Keep the lab layout uncluttered.
- Provide at least one adjustable laboratory workstation.
- Provide preferential seating to avoid obstacles and physical classroom barriers and that provides visual access to demonstrations.
- Use mirrors above the instructor or enlarged screen demonstrations.
- Provide c-clamps for holding objects.
- Provide surgical gloves for handling wet or slippery items.
- Provide beakers and other equipment with handles.
- Create alternative workspaces such as pullout or drop leaf shelves and counter tops, or lap-desks.
- Provide extended eyepieces so students who use wheelchairs can use microscopes.
- Use single-action lever controls or blade type handles in place of knobs.
- Provide flexible connections to electrical, water and gas lines.
- Create alternate lab storage methods (e.g., a portable Lazy Susan, or a storage cabinet on casters).
For more information about students with mobility impairments, consult the Mobility Impairments section of this website.
Following are typical accommodations in science labs that can be used to maximize the participation of students who have hearing impairments:
- Provide access to videotaped demonstrations or software with captioning.
- Provide written instructions or captioned video instructions prior to class.
- Use visual lab warning signals.
- Provide preferential seating to view demonstrations and watch the instructor.
For more information about students with hearing impairments, consult the Hearing Impairments section of this website.
Following are typical accommodations in science labs that can be used to maximize the participation of students who have learning disabilities:
- Use a combination of written, verbal, and pictorial instructions.
- Create opportunities to work with lab partners rather than alone.
- Extend the time allotted for set-up and process.
- Provide role-modeling/demonstration and allow practice.
For more information about students with learning disabilities, consult the Learning Disabilities section of this website.
Some students may not be able to manage certain chemicals or materials. Alternative experiences will need to be considered in these cases. For more information about students with health impairments, consult the Health Impairments section of this website.
Mental Health or Psychiatric Impairments
Following are examples of accommodations that are often appropriate for students with mental health or psychiatric impairments.
- Allow for extended set-up, process, and practice time.
- Use a combination of written, oral, and pictorial instructions.
- Demonstrate and role model procedures.
- Allow for frequent brief breaks.
- Provide preferential seating - particularly near the door.
- Decrease extraneous distracting stimuli.
- Allow student to bring a water bottle to lab.
For more information about students with psychiatric impairments, consult the Psychiatric Impairments section of this website.
Check Your Understanding
Suppose you have a student with a spinal cord injury who uses a wheelchair and has limited use of his hands. What accommodations would help him access your introductory chemistry lab? Choose a response.
- Assure that the physical facility of the lab is wheelchair accessible.
- Provide an adjustable workstation.
- Provide adaptive lab devices and tools.
- Ask a lab partner to provide assistance.
Additional content on this topic can be found in the DO-IT publications with accompanying videos, The Winning Equation: Access + Attitude = Success in Math and Science and Working Together: Science Teachers and Students with Disabilities. Consult The Faculty Room Knowledge Base for questions & answers, case studies, and promising practices. | http://www.washington.edu/doit/Faculty/Strategies/Academic/Science/ | 13 |
18 | Astronomer: Most black holes form when a star roughly ten times more massive than our Sun runs out of fuel for fusion. This causes the star to collapse, explode as a supernova, and, if enough material is left over after the explosion, become what is called a stellar black hole. A black hole is an object so dense that even light doesn't travel fast enough to escape its gravity. Something that falls into a black hole can never escape, because nothing can travel faster than the speed of light.
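The claim that light cannot escape corresponds to a definite size, the Schwarzschild radius, at which the escape speed reaches the speed of light. The short sketch below is an illustration added here, not part of the original answer; the ten-solar-mass figure is taken from the paragraph above.

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Radius at which the escape speed equals the speed of light: 2GM/c^2."""
    return 2.0 * G * mass_kg / C**2

# A stellar black hole of about ten solar masses, as described above.
r_s = schwarzschild_radius(10 * M_SUN)
print(f"Schwarzschild radius: {r_s / 1000:.1f} km")  # roughly 30 km
```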
What would happen if one of these stellar black holes wandered into our solar system? Very Bad Things. The first indication we might get that something unusual was happening would be subtle changes in the orbits of the outer planets. These changes would be detectable by the time the black hole was still a few hundred thousand times the Earth-Sun distance away.
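To get a feel for how quickly the disturbance grows as the intruder closes in, the sketch below compares the pull of a hypothetical ten-solar-mass black hole on an outer planet with the Sun's own pull on that planet. Neptune is used as a stand-in and the separations are assumed round numbers; neither choice comes from the answer above.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # meters

M_BH = 10 * M_SUN    # the stellar black hole described in the text
R_NEPTUNE = 30 * AU  # Neptune's orbital radius, roughly

# The Sun's gravitational acceleration on Neptune (inverse-square law).
a_sun = G * M_SUN / R_NEPTUNE**2

# The black hole's pull on Neptune at a few assumed separations from the planet.
for d_au in (300_000, 10_000, 100, 30):
    a_bh = G * M_BH / (d_au * AU) ** 2
    print(f"black hole {d_au:>7,} AU away: {a_bh / a_sun:.1e} "
          f"times the Sun's pull on Neptune")
```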
By then the black hole would be near the outer reaches of the solar system, in a region filled with icy comet-like objects called the Oort cloud. It's possible that the disruption caused by the black hole traveling through the Oort cloud could gravitationally catapult a large number of additional comets into the inner solar system, some of which might strike Earth or other planets. If the black hole passed through only this outer part of the solar system, for example if it were moving too fast to be strongly deflected by the Sun's gravity, an increase in comets in the inner solar system might be the only effect we would observe.
At this point we likely wouldn't see anything at the black hole's position, even if we looked with the best available telescopes. The black hole itself doesn't give off light, and the only way we might detect it is through the energy released when it consumes some gas. Even the black hole's effect on the light from stars behind it – which causes the light to be bent into an apparent ring around the black hole – would be too small for us to see. Not until the black hole reached the inner edge of the asteroid belt would we be able to directly observe its light-bending effects. By that point, the effects on the Earth's orbit would be extreme and it's likely the black hole would have become visible through its interaction with one of the outer planets.
If the black hole continued to move toward the inner solar system, the orbits of the planets would continue to be disrupted in dramatic ways. Jupiter, the most massive planet, might be snared by the black hole due to their strong mutual gravitational attraction. The black hole would pull gas from Jupiter, forming a bright disk of swirling, hot gas that gives off x-rays. Although Jupiter is thousands of times larger than the black hole in size, the black hole is thousands of times more massive and easily wins: Jupiter is entirely consumed by the relatively tiny black hole.
By this time, the Earth would already be in grave trouble. The gravitational effects of the black hole would have caused earthquakes and volcanic eruptions more extreme than any ever seen by humans. The Earth would be pulled out of its usual orbit, possibly experiencing abrupt changes in direction or being pulled away from or towards the Sun. By the time the black hole crossed Earth's orbit, the geologic effects from tidal forces would have effectively repaved the Earth's surface with magma and wiped out all life. Since the Sun contains 99.9% of the mass of the solar system, the Sun and the black hole would experience a strong gravitational pull towards each other. The black hole would approach the Sun, stripping away its gas and pulling it in. The Earth, its inhabitants already dead, would approach the Sun/black-hole pair, heat up, be torn apart by gravitational forces, and then be pulled into the black hole itself.
Now that we’ve set this morbid scene, you might wonder how likely is it that a black hole will wander into our solar system, causing widespread death and destruction. Here, at least, we have some good news. With what we know today, it seems exceedingly unlikely to happen anywhere in the galaxy (except at the very center), much less our own solar system. Distances between black holes are huge, and the density of black holes is less because we are in the outer third of our galaxy. In addition, most black holes aren’t zipping around the galaxy at high speed, which makes them far less likely to encounter a solar system.
(picture credit: University of Warwick/ Mark A. Garlick) | http://www.askamathematician.com/2012/02/q-what-would-happen-if-a-black-hole-passed-through-our-solar-system-2/ | 13 |
17 | November 21, 2003
It takes only a few hundred to a thousand years for a dying Sun-like star, many billions of years old, to transform into a dazzling, glowing cloud called a planetary nebula. This relative blink in a long lifetime means that a Sun-like star's final moments - the crucial phase when its planetary nebula takes shape - have, until now, gone undetected.
In research reported in the Nov. 20 issue of Nature, astronomers led by Dr. Raghvendra Sahai of NASA's Jet Propulsion Laboratory, Pasadena, Calif., have caught one such dying star in the act. This nearby star, called V Hydrae, has been captured by the Space Telescope Imaging Spectrograph onboard NASA's Hubble Space Telescope in the last stages of its demise, just as material has begun to shoot away from it in a high-speed jet outflow.
While previous studies have indicated the role of jet outflows in shaping planetary nebulae, the new findings represent the first time these jets have been directly detected.
"The discovery of a newly launched jet outflow is likely to have a significant impact on our understanding of this short-lived stage of stellar evolution and will open a window onto the ultimate fate of our Sun," said Sahai.
Other institutions contributing to this paper include: University of California, Los Angeles; Princeton University, Princeton, New Jersey; Harvard-Smithsonian Center for Astrophysics, Cambridge, Massachusetts; and Valdosta State University, Valdosta, Georgia.
Low-mass stars like the Sun typically survive around ten billion years before their hydrogen fuel begins to run out and they start to die. Over the next ten thousand to a hundred thousand years, the stars slowly eject nearly half of their mass in expanding, spherical winds. Then - in a poorly understood phase lasting just 100 to 1,000 years - the stars evolve into a stunning array of geometrically shaped glowing clouds called planetary nebulae.
Just how these extraordinary "star-clouds" are shaped has remained unclear, though Sahai, in several previous papers, put forth a new hypothesis. Based on results from a recent Hubble Space Telescope imaging survey of young planetary nebulae, he proposed that two-sided, or bipolar, high-speed jet-like outflows are the primary means of shaping these objects. The latest study will allow Sahai and his colleagues to test this hypothesis with direct data for the first time.
"Now, in the case of V Hydrae, we can observe the evolution of the jet outflow in real-time," said Sahai, who together with his colleagues will study the star with the Hubble Space Telescope for three more years.
The new findings also suggest what may be driving the jet outflows. Past models of dying stars predict that accretion discs - swirling rings of matter encircling stars - may trigger jet outflows. The V Hydrae data support the presence of an accretion disc surrounding, not V Hydrae itself, but a companion object around the star. This companion is likely to be another star or even a giant planet too dim to be detected. The authors have also found evidence for an outlying large dense disc in V Hydrae, which could enable the formation of the accretion disc around the companion.
Further support in favor of a companion-driven jet outflow comes from the scientists' observation that the jet fires in bursts: because the companion orbits the star in a periodic fashion, the accretion disc around it is expected to produce regular spurts of material rather than a steady stream.
The Space Telescope Imaging Spectrograph is managed by NASA's Goddard Space Flight Center in Greenbelt, Maryland. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. The California Institute of Technology, Pasadena manages JPL for NASA.
Contact: Whitney Clavin (626) 395-1877 | http://www.jpl.nasa.gov/news/news.php?release=154 | 13 |
14 | Fourteenth Amendment to the United States Constitution
The Fourteenth Amendment (Amendment XIV) to the United States Constitution was adopted on July 9, 1868, as one of the Reconstruction Amendments. Its first section, which has frequently been the subject of lawsuits, includes several clauses: the Citizenship Clause, Privileges or Immunities Clause, Due Process Clause, and Equal Protection Clause.
The Citizenship Clause provides a broad definition of citizenship, overruling the Supreme Court's decision in Dred Scott v. Sandford (1857), which had held that Americans descended from African slaves could not be citizens of the United States. The Citizenship Clause is followed by the Privileges or Immunities Clause, which has been interpreted in such a way that it does very little.
The Due Process Clause prohibits state and local government officials from depriving persons of life, liberty, or property without legislative authorization. This clause has also been used by the federal judiciary to make most of the Bill of Rights applicable to the states, as well as to recognize substantive and procedural requirements that state laws must satisfy.
The Equal Protection Clause requires each state to provide equal protection under the law to all people within its jurisdiction. This clause was the basis for Brown v. Board of Education (1954), the Supreme Court decision that precipitated the dismantling of racial segregation in the United States, and for many other decisions rejecting irrational or unnecessary discrimination against people belonging to various groups. The second, third, and fourth sections of the amendment are seldom, if ever, litigated; the fifth section gives Congress enforcement power.
Section 1. All persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside. No State shall make or enforce any law which shall abridge the privileges or immunities of citizens of the United States; nor shall any State deprive any person of life, liberty, or property, without due process of law; nor deny to any person within its jurisdiction the equal protection of the laws.
Section 2. Representatives shall be apportioned among the several States according to their respective numbers, counting the whole number of persons in each State, excluding Indians not taxed. But when the right to vote at any election for the choice of electors for President and Vice President of the United States, Representatives in Congress, the Executive and Judicial officers of a State, or the members of the Legislature thereof, is denied to any of the male inhabitants of such State, being twenty-one years of age, and citizens of the United States, or in any way abridged, except for participation in rebellion, or other crime, the basis of representation therein shall be reduced in the proportion which the number of such male citizens shall bear to the whole number of male citizens twenty-one years of age in such State.
Section 3. No person shall be a Senator or Representative in Congress, or elector of President and Vice President, or hold any office, civil or military, under the United States, or under any State, who, having previously taken an oath, as a member of Congress, or as an officer of the United States, or as a member of any State legislature, or as an executive or judicial officer of any State, to support the Constitution of the United States, shall have engaged in insurrection or rebellion against the same, or given aid or comfort to the enemies thereof. But Congress may, by a vote of two-thirds of each House, remove such disability.
Section 4. The validity of the public debt of the United States, authorized by law, including debts incurred for payment of pensions and bounties for services in suppressing insurrection or rebellion, shall not be questioned. But neither the United States nor any State shall assume or pay any debt or obligation incurred in aid of insurrection or rebellion against the United States, or any claim for the loss or emancipation of any slave; but all such debts, obligations and claims shall be held illegal and void.
Section 5. The Congress shall have power to enforce, by appropriate legislation, the provisions of this article.
Citizenship and civil rights
Section 1 formally defines United States citizenship and protects individual civil and political rights from being abridged or denied by any state. In effect, it overruled the Supreme Court's Dred Scott decision that black people were not citizens and could not become citizens, nor enjoy the benefits of citizenship. The Civil Rights Act of 1866 had granted citizenship to all persons born in the United States if they were not subject to a foreign power. The framers of the Fourteenth Amendment wanted this principle enshrined into the Constitution to protect the new Civil Rights Act from being declared unconstitutional by the Supreme Court and to prevent a future Congress from altering it by a mere majority vote.
This section was also in response to the Black Codes that southern states had passed in the wake of the Thirteenth Amendment, which abolished slavery in the United States. The Black Codes attempted to return former slaves to something like their former condition by, among other things, restricting their movement, forcing them to enter into year-long labor contracts, prohibiting them from owning firearms, and by preventing them from suing or testifying in court.
Finally, this section was in response to violence against black people within the southern states. A Joint Committee on Reconstruction found that only a Constitutional amendment could protect black people's rights and welfare within those states. This section has been the most frequently litigated part of the amendment, and this amendment has been the most frequently litigated part of the Constitution.
There are varying interpretations of the original intent of Congress, based on statements made during the congressional debate over the amendment. During the original debate over the amendment Senator Jacob M. Howard of Michigan—the author of the Citizenship Clause—described the clause as having the same content, despite different wording, as the earlier Civil Rights Act of 1866, namely, that it excludes Native Americans who maintain their tribal ties and "persons born in the United States who are foreigners, aliens, who belong to the families of ambassadors or foreign ministers." According to historian Glenn W. LaFantasie of Western Kentucky University, "A good number of his fellow senators supported his view of the citizenship clause." Others also agreed that the children of ambassadors and foreign ministers were to be excluded. However, concerning children born in the United States to parents who are not citizens of the United States (and not foreign diplomats), three Senators, including Senate Judiciary Committee Chairman Lyman Trumbull, the author of the Civil Rights Act, as well as President Andrew Johnson, asserted that both the Civil Rights Act and the Fourteenth Amendment would confer citizenship on them at birth, and no Senator offered a contrary opinion.
Senator James Rood Doolittle of Wisconsin asserted that all Native Americans were subject to United States jurisdiction, so that the phrase "Indians not taxed" would be preferable, but Trumbull and Howard disputed this, arguing that the federal government did not have full jurisdiction over Native American tribes, which govern themselves and make treaties with the United States. In Elk v. Wilkins (1884), the clause's meaning was tested regarding whether birth in the United States automatically extended national citizenship. The Supreme Court held that Native Americans who voluntarily quit their tribes did not automatically gain national citizenship.
The clause's meaning was tested again in United States v. Wong Kim Ark (1898). The Supreme Court held that under the Fourteenth Amendment, a man born within the United States to Chinese citizens who have a permanent domicile and residence in the United States and are carrying on business in the United States—and whose parents were not employed in a diplomatic or other official capacity by a foreign power—was a citizen of the United States. Subsequent decisions have applied the principle to the children of foreign nationals of non-Chinese descent. In 2010, Republican Senators discussed revising the amendment's grant of birthright citizenship to reduce the practice of "birth tourism", in which a pregnant foreign national gives birth in the United States for purposes of the child's citizenship.
Loss of citizenship
Loss of national citizenship is possible only under the following circumstances:
- Fraud in the naturalization process. Technically, this is not loss of citizenship but rather a voiding of the purported naturalization and a declaration that the immigrant never was a citizen of the United States.
- Voluntary relinquishment of citizenship. This may be accomplished either through renunciation procedures specially established by the State Department or through other actions that demonstrate desire to give up national citizenship.
For much of the country's history, voluntary acquisition or exercise of a foreign citizenship was considered sufficient cause for revocation of national citizenship. This concept was enshrined in a series of treaties between the United States and other countries (the Bancroft Treaties). However, the Supreme Court repudiated this concept in Afroyim v. Rusk (1967), as well as Vance v. Terrazas (1980), holding that the Citizenship Clause of the Fourteenth Amendment barred the Congress from revoking citizenship.
Privileges or Immunities Clause
In the Slaughter-House Cases (1873), the Supreme Court ruled that the amendment's Privileges or Immunities Clause was limited to "privileges or immunities" granted to citizens by the federal government by virtue of national citizenship. The Court further held in the Civil Rights Cases (1883) that the amendment was limited to "state action" and, therefore, did not authorize the Congress to outlaw racial discrimination on the part of private individuals or organizations. Neither of these decisions has been overruled and they have been specifically reaffirmed several times.
Despite fundamentally differing views concerning the coverage of the Privileges or Immunities Clause of the Fourteenth Amendment, most notably expressed in the majority and dissenting opinions in the Slaughter-House Cases (1873), it has always been common ground that this Clause protects the third component of the right to travel. Writing for the majority in the Slaughter-House Cases, Justice Miller explained that one of the privileges conferred by this Clause "is that a citizen of the United States can, of his own volition, become a citizen of any State of the Union by a bona fide residence therein, with the same rights as other citizens of that State."
Due Process Clause
The Due Process Clause of the amendment protects both procedural due process—the guarantee of a fair legal process—and substantive due process—the guarantee that the fundamental rights of citizens will not be encroached on by government.
Beginning with Allgeyer v. Louisiana (1897), the Court interpreted the Due Process Clause as providing substantive protection to private contracts and thus prohibiting a variety of social and economic regulation, under what was referred to as "freedom of contract". Thus, the Court struck down a law decreeing maximum hours for workers in a bakery in Lochner v. New York (1905) and struck down a minimum wage law in Adkins v. Children's Hospital (1923). In Meyer v. Nebraska (1923), the Court stated that the "liberty" protected by the Due Process Clause
"[w]ithout doubt...denotes not merely freedom from bodily restraint but also the right of the individual to contract, to engage in any of the common occupations of life, to acquire useful knowledge, to marry, establish a home and bring up children, to worship God according to the dictates of his own conscience, and generally to enjoy those privileges long recognized at common law as essential to the orderly pursuit of happiness by free men."
However, the Court did uphold some economic regulation such as state prohibition laws (Mugler v. Kansas, 1887), laws declaring maximum hours for mine workers (Holden v. Hardy, 1898), laws declaring maximum hours for female workers (Muller v. Oregon, 1908), President Wilson's intervention in a railroad strike (Wilson v. New, 1917), as well as federal laws regulating narcotics (United States v. Doremus, 1919). The Court repudiated the "freedom of contract" line of cases in West Coast Hotel v. Parrish (1937).
By the 1960s, the Court had extended its interpretation of substantive due process to include rights and freedoms that are not specifically mentioned in the Constitution but that, according to the Court, extend or derive from existing rights. The Court has also significantly expanded the reach of procedural due process, requiring some sort of hearing before the government may terminate civil service employees, expel a student from public school, or cut off a welfare recipient's benefits.
The Due Process Clause is also the foundation of a constitutional right to privacy. The Court first ruled that privacy was protected by the Constitution in Griswold v. Connecticut (1965), which overturned a Connecticut law criminalizing birth control. While Justice William O. Douglas wrote for the majority that the right to privacy was found in the "penumbras" of the Bill of Rights, Justices Arthur Goldberg and John Marshall Harlan II wrote in concurring opinions that the "liberty" protected by the Due Process Clause included individual privacy.
The right to privacy became the basis for Roe v. Wade (1973), in which the Court invalidated a Texas law forbidding abortion except to save the mother's life. Like Goldberg and Harlan's dissents in Griswold, the majority opinion authored by Justice Harry A. Blackmun located the right to privacy in the Due Process Clause's protection of liberty. The decision disallowed many state and federal abortion restrictions, and became one of the most controversial in the Court's history. In Planned Parenthood v. Casey (1992), the Court decided that "the essential holding of Roe v. Wade should be retained and once again reaffirmed." In Lawrence v. Texas (2003), the Court found that a Texas law against same-sex sexual intercourse violated the right to privacy.
The Court has ruled that, in certain circumstances, the Due Process Clause requires a judge to recuse himself on account of concern of there being a conflict of interest. For example, in Caperton v. A.T. Massey Coal Co. (2009), the Court ruled that a justice of the Supreme Court of Appeals of West Virginia had to recuse himself from a case involving a major contributor to his campaign for election to that court.
While many state constitutions are modeled after the United States Constitution and federal laws, those state constitutions did not necessarily include provisions comparable to the Bill of Rights. In Barron v. Baltimore (1833), the Supreme Court unanimously ruled that the Bill of Rights restrained only the federal government, not the states. Under the Fourteenth Amendment, most provisions of the Bill of Rights have been held to apply to the states as well as the federal government, a process known as incorporation.
Whether this incorporation was intended by the amendment's framers, such as John Bingham, has been debated by legal historians. According to legal scholar Akhil Reed Amar, the framers and early supporters of the Fourteenth Amendment believed that it would ensure that the states would be required to recognize the same individual rights as the federal government; all of these rights were likely understood as falling within the "privileges or immunities" safeguarded by the amendment.
By the latter half of the 20th century, nearly all of the rights in the Bill of Rights had been applied to the states. The Supreme Court has held that the amendment's Due Process Clause incorporates all of the substantive protections of the First, Second, Fourth, Fifth (except for its Grand Jury Clause) and Sixth Amendments and the Cruel and Unusual Punishment Clause of the Eighth Amendment. While the Third Amendment has not been applied to the states by the Supreme Court, the Second Circuit ruled that it did apply to the states within that circuit's jurisdiction in Engblom v. Carey. The Seventh Amendment has been held not to be applicable to the states.
Equal Protection Clause
The Equal Protection Clause was added to deal with the lack of equal protection provided by law in states with Black Codes. Under Black Codes, blacks could not sue, give evidence, or be witnesses, and they received harsher degrees of punishment than whites. The clause mandates that individuals in similar situations be treated equally by state and federal laws.
"The court does not wish to hear argument on the question whether the provision in the Fourteenth Amendment to the Constitution, which forbids a State to deny to any person within its jurisdiction the equal protection of the laws, applies to these corporations. We are all of the opinion that it does."
This dictum, which established that corporations enjoyed personhood under the Equal Protection Clause, was repeatedly reaffirmed by later courts. It remained the predominant view throughout the twentieth century, though it was challenged in dissents by justices such as Hugo Black and William O. Douglas.
In the decades following the adoption of the Fourteenth Amendment, the Supreme Court overturned laws barring blacks from juries (Strauder v. West Virginia, 1880) or discriminating against Chinese Americans in the regulation of laundry businesses (Yick Wo v. Hopkins, 1886), as violations of the Equal Protection Clause. However, in Plessy v. Ferguson (1896), the Supreme Court held that the states could impose segregation so long as they provided similar facilities—the formation of the "separate but equal" doctrine.
The Court went even further in restricting the Equal Protection Clause in Berea College v. Kentucky (1908), holding that the states could force private actors to discriminate by prohibiting colleges from having both black and white students. By the early 20th century, the Equal Protection Clause had been eclipsed to the point that Justice Oliver Wendell Holmes, Jr. dismissed it as "the usual last resort of constitutional arguments."
The Court held to the "separate but equal" doctrine for more than fifty years, despite numerous cases in which the Court itself had found that the segregated facilities provided by the states were almost never equal, until Brown v. Board of Education (1954) reached the Court. In Brown the Court ruled that even if segregated black and white schools were of equal quality in facilities and teachers, segregation by itself was harmful to black students and so was unconstitutional. Brown met with a campaign of resistance from white Southerners, and for decades the federal courts attempted to enforce Brown's mandate against repeated attempts at circumvention. This resulted in the controversial desegregation busing decrees handed down by federal courts in various parts of the nation (see Milliken v. Bradley, 1974).
In Hernandez v. Texas (1954), the Court held that the Fourteenth Amendment protects those beyond the racial classes of white or "Negro" and extends to other racial and ethnic groups, such as Mexican Americans in this case. In the half-century following Brown, the Court extended the reach of the Equal Protection Clause to other historically disadvantaged groups, such as women and illegitimate children, although it has applied a somewhat less stringent standard than it has applied to governmental discrimination on the basis of race (United States v. Virginia, 1996; Levy v. Louisiana, 1968).
Reed v. Reed (1971), which struck down an Idaho probate law favoring men, was the first decision in which the Court ruled that arbitrary gender discrimination violated the Equal Protection Clause. In Craig v. Boren (1976), the Court ruled that statutory or administrative sex classifications had to be subjected to an intermediate standard of judicial review. Reed and Craig later served as precedents to strike down a number of state laws discriminating by gender.
Since Wesberry v. Sanders (1964) and Reynolds v. Sims (1964), the Supreme Court has interpreted the Equal Protection Clause as requiring the states to apportion their congressional districts and state legislative seats according to "one man, one vote". The Court has also struck down redistricting plans in which race was a key consideration. In Shaw v. Reno (1993), the Court prohibited a North Carolina plan aimed at creating majority-black districts to balance historic underrepresentation in the state's congressional delegations.
The Equal Protection Clause served as the basis for the decision in Bush v. Gore (2000), in which the Court ruled that no constitutionally valid recount of Florida's votes in the 2000 presidential election could be held within the needed deadline; the decision effectively secured Bush's victory in the disputed election. In League of United Latin American Citizens v. Perry (2006), the Court ruled that House Majority Leader Tom DeLay's Texas redistricting plan intentionally diluted the votes of Latinos and thus violated the Equal Protection Clause.
Apportionment of representation in House of Representatives
Section 2 altered the way each state's representation in the House of Representatives is determined. It counts all residents for apportionment, overriding Article I, Section 2, Clause 3 of the Constitution, which counted only three-fifths of each state's slave population.
Section 2 also reduces a state's apportionment if it wrongfully denies any adult male's right to vote, while explicitly permitting felony disenfranchisement. However, this provision was never enforced, and southern states continued to use pretexts to prevent many blacks from voting until the passage of the Voting Rights Act in 1965. Because it protects the right to vote only of adult males, not adult females, this clause is the only provision of the US Constitution to discriminate explicitly on the basis of sex.
Some have argued that Section 2 was implicitly repealed by the Fifteenth Amendment, but the Supreme Court acknowledged the provisions of Section 2 in some later decisions. For example, in Richardson v. Ramirez (1974), the Court cited Section 2 as justification for the states disenfranchising felons.
Participants in rebellion
Section 3 prohibits the election or appointment to any federal or state office of any person who had held any of certain offices and then engaged in insurrection, rebellion or treason. However, a two-thirds vote by each House of the Congress can override this limitation. In 1898, the Congress enacted a general removal of Section 3's limitation. In 1975, the citizenship of Confederate general Robert E. Lee was restored by a joint congressional resolution, retroactive to June 13, 1865. In 1978, pursuant to Section 3, the Congress posthumously removed the service ban from Confederate president Jefferson Davis.
Section 3 was used to prevent Socialist Party of America member Victor L. Berger, convicted of violating the Espionage Act for his anti-militarist views, from taking his seat in the House of Representatives in 1919 and 1920.
Validity of public debt
Section 4 confirmed the legitimacy of all U.S. public debt appropriated by the Congress. It also confirmed that neither the United States nor any state would pay for the loss of slaves or debts that had been incurred by the Confederacy. For example, during the Civil War several British and French banks had lent large sums of money to the Confederacy to support its war against the Union. In Perry v. United States (1935), the Supreme Court ruled that under Section 4 voiding a United States bond "went beyond the congressional power."
The debt-ceiling crisis in 2011 raised the question of what powers Section 4 gives to the President, an issue that remains unsettled. Some, such as legal scholar Garrett Epps, fiscal expert Bruce Bartlett and Treasury Secretary Timothy Geithner, have argued that a debt ceiling may be unconstitutional and therefore void as long as it interferes with the duty of the government to pay interest on outstanding bonds and to make payments owed to pensioners (that is, Social Security recipients). Legal analyst Jeffrey Rosen has argued that Section 4 gives the President unilateral authority to raise or ignore the national debt ceiling, and that if challenged the Supreme Court would likely rule in favor of expanded executive power or dismiss the case altogether for lack of standing. Erwin Chemerinsky, professor and dean at University of California, Irvine School of Law, has argued that not even in a "dire financial emergency" could the President raise the debt ceiling as "there is no reasonable way to interpret the Constitution that [allows him to do so]".
Power of enforcement
Section 5 enables Congress to pass laws enforcing the Amendment's provisions. In the Civil Rights Cases (1883), the Supreme Court interpreted Section 5 narrowly, stating that "the legislation which Congress is authorized to adopt in this behalf is not general legislation upon the rights of the citizen, but corrective legislation"; in other words, Congress could only pass laws intended to combat violations of the rights enumerated in other sections. In a 1966 decision, Katzenbach v. Morgan, the Court upheld a section of the Voting Rights Act of 1965, ruling that Section 5 enabled Congress to act both remedially and prophylactically to protect rights enumerated in the amendment. In City of Boerne v. Flores (1997), the Court rejected Congress' ability to define or interpret constitutional rights via Section 5.
Proposal and ratification
The 39th United States Congress proposed the Fourteenth Amendment on June 13, 1866. Ratification of the Fourteenth Amendment was bitterly contested: all the Southern state legislatures, with the exception of Tennessee, refused to ratify the amendment. This refusal led to the passage of the Reconstruction Acts, under which the existing state governments were set aside and military government was imposed until new civil governments were established and the Fourteenth Amendment was ratified.
On March 2, 1867, the Congress passed a law that required any formerly Confederate state to ratify the Fourteenth Amendment before "said State shall be declared entitled to representation in Congress".
By July 9, 1868, three-fourths of the states (28 of 37) had ratified the amendment:
- Connecticut (June 25, 1866)
- New Hampshire (July 6, 1866)
- Tennessee (July 19, 1866)
- New Jersey (September 11, 1866)*
- Oregon (September 19, 1866)
- Vermont (October 30, 1866)
- Ohio (January 4, 1867)*
- New York (January 10, 1867)
- Kansas (January 11, 1867)
- Illinois (January 15, 1867)
- West Virginia (January 16, 1867)
- Michigan (January 16, 1867)
- Minnesota (January 16, 1867)
- Maine (January 19, 1867)
- Nevada (January 22, 1867)
- Indiana (January 23, 1867)
- Missouri (January 25, 1867)
- Rhode Island (February 7, 1867)
- Wisconsin (February 7, 1867)
- Pennsylvania (February 12, 1867)
- Massachusetts (March 20, 1867)
- Nebraska (June 15, 1867)
- Iowa (March 16, 1868)
- Arkansas (April 6, 1868, after having rejected it on December 17, 1866)
- Florida (June 9, 1868, after having rejected it on December 6, 1866)
- North Carolina (July 4, 1868, after having rejected it on December 14, 1866)
- Louisiana (July 9, 1868, after having rejected it on February 6, 1867)
- South Carolina (July 9, 1868, after having rejected it on December 20, 1866)
*Ohio passed a resolution that purported to withdraw its ratification on January 15, 1868. The New Jersey legislature also tried to rescind its ratification on February 20, 1868, citing procedural problems with the amendment's congressional passage, including that specific states were unlawfully denied representation in the House and the Senate at the time. The New Jersey governor had vetoed his state's withdrawal on March 5, and the legislature overrode the veto on March 24.
On July 20, 1868, Secretary of State William H. Seward certified that the amendment had become part of the Constitution if the rescissions were ineffective, and presuming also that the later ratifications by states whose governments had been reconstituted superseded the initial rejection of the prior state legislatures. The Congress responded on the following day, declaring that the amendment was part of the Constitution and ordering Seward to promulgate the amendment.
Meanwhile, two additional states had ratified the amendment:
- Alabama (July 13, 1868, the date the ratification was approved by the governor)
- Georgia (July 21, 1868, after having rejected it on November 9, 1866)
Thus, on July 28, Seward was able to certify unconditionally that the amendment was part of the Constitution without having to endorse the Congress's assertion that the withdrawals were ineffective.
After the Democrats won the legislative election in Oregon, they passed a rescission of the Unionist Party's previous adoption of the amendment. The rescission was ignored as too late, as it came on October 15, 1868. The amendment has since been ratified by all of the 37 states that were in the Union in 1868, including Ohio, New Jersey, and Oregon re-ratifying after their rescissions:
- Virginia (October 8, 1869, after having rejected it on January 9, 1867)
- Mississippi (January 17, 1870, after having rejected it on January 31, 1868)
- Texas (February 18, 1870, after having rejected it on October 27, 1866)
- Delaware (February 12, 1901, after having rejected it on February 7, 1867)
- Maryland (April 4, 1959, after having rejected it on March 23, 1867)
- California (March 18, 1959)
- Oregon (1973, after withdrawing it on October 15, 1868)
- Kentucky (May 6, 1976, after having rejected it on January 8, 1867)
- New Jersey (2003, after having rescinded on February 20, 1868)
- Ohio (2003, after having rescinded on January 15, 1868)
Selected Supreme Court cases
Privileges or immunities
Procedural due process/Incorporation
Substantive due process
Apportionment of Representatives
- 1974: Richardson v. Ramirez
Power of enforcement
- "Constitution of the United States: Amendments 11–27". National Archives and Records Administration. Archived from the original on June 11, 2013. Retrieved June 11, 2013.
- "Tsesis, Alexander, The Inalienable Core of Citizenship: From Dred Scott to the Rehnquist Court". Arizona State Law Journal, Vol. 39, 2008 (Ssrn.com). SSRN 1023809.
- McDonald v. Chicago, 130 S. Ct. 3020, 3060 (2010) ("This [clause] unambiguously overruled this Court's contrary holding in Dred Scott.")
- Goldstone 2011, pp. 23–24.
- Eric Foner, "The Second American Revolution", In These Times, September 1987; reprinted in Civil Rights Since 1787, ed. Jonathan Birnbaum & Clarence Taylor, NYU Press, 2000. ISBN 0814782493
- Duhaime, Lloyd. "Legal Definition of Black Code". duhaime.org. Retrieved March 25, 2009.
- Foner, Eric. Reconstruction. pp. 199–200. ISBN 0-8071-2234-3.
- "Finkelman, Paul, John Bingham and the Background to the Fourteenth Amendment". Akron Law Review, Vol. 36, No. 671, 2003 (Ssrn.com). April 2, 2009. SSRN 1120308.
- Harrell, David and Gaustad, Edwin. Unto A Good Land: A History Of The American People, Volume 1, p. 520 (Eerdmans Publishing, 2005): "The most important, and the one that has occasioned the most litigation over time as to its meaning and application, was Section One."
- Stephenson, D. The Waite Court: Justices, Rulings, and Legacy, p. 147 (ABC-CLIO, 2003).
- Messner, Emily. “Born in the U.S.A. (Part I)”, The Debate, The Washington Post (March 30, 2006).
- Robert Pear (August 7, 1996). "Citizenship Proposal Faces Obstacle in the Constitution". The New York Times.
- LaFantasie, Glenn (March 20, 2011) The erosion of the Civil War consensus, Salon
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893 Senator Reverdy Johnson said in the debate: "Now, all this amendment provides is, that all persons born in the United States and not subject to some foreign Power--for that, no doubt, is the meaning of the committee who have brought the matter before us--shall be considered as citizens of the United States...If there are to be citizens of the United States entitled everywhere to the character of citizens of the United States, there should be some certain definition of what citizenship is, what has created the character of citizen as between himself and the United States, and the amendment says citizenship may depend upon birth, and I know of no better way to give rise to citizenship than the fact of birth within the territory of the United States, born of parents who at the time were subject to the authority of the United States."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2897.
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 572.
- Congressional Globe, 1st Session, 39th Congress, pt. 1, p. 498. The debate on the Civil Rights Act contained the following exchange:
Mr. Cowan: "I will ask whether it will not have the effect of naturalizing the children of Chinese and Gypsies born in this country?"
Mr. Trumbull: "Undoubtedly."
Mr. Trumbull: "I understand that under the naturalization laws the children who are born here of parents who have not been naturalized are citizens. This is the law, as I understand it, at the present time. Is not the child born in this country of German parents a citizen? I am afraid we have got very few citizens in some of the counties of good old Pennsylvania if the children born of German parents are not citizens."
Mr. Cowan: "The honorable Senator assumes that which is not the fact. The children of German parents are citizens; but Germans are not Chinese; Germans are not Australians, nor Hottentots, nor anything of the kind. That is the fallacy of his argument."
Mr. Trumbull: "If the Senator from Pennsylvania will show me in the law any distinction made between the children of German parents and the children of Asiatic parents, I may be able to appreciate the point which he makes; but the law makes no such distinction; and the child of an Asiatic is just as much of a citizen as the child of a European."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2891-2 During the debate on the Amendment, Senator John Conness of California declared, "The proposition before us, I will say, Mr. President, relates simply in that respect to the children begotten of Chinese parents in California, and it is proposed to declare that they shall be citizens. We have declared that by law [the Civil Rights Act]; now it is proposed to incorporate that same provision in the fundamental instrument of the nation. I am in favor of doing so. I voted for the proposition to declare that the children of all parentage, whatever, born in California, should be regarded and treated as citizens of the United States, entitled to equal Civil Rights with other citizens.".
- See veto message by President Andrew Johnson.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, pp. 2890,2892-4,2896.
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2893. Trumbull, during the debate, said, "What do we [the committee reporting the clause] mean by 'subject to the jurisdiction of the United States'? Not owing allegiance to anybody else. That is what it means." He then proceeded to expound upon what he meant by "complete jurisdiction": "Can you sue a Navajoe Indian in court?...We make treaties with them, and therefore they are not subject to our jurisdiction.... If we want to control the Navajoes, or any other Indians of which the Senator from Wisconsin has spoken, how do we do it? Do we pass a law to control them? Are they subject to our jurisdiction in that sense?.... Would he [Sen. Doolittle] think of punishing them for instituting among themselves their own tribal regulations? Does the Government of the United States pretend to take jurisdiction of murders and robberies and other crimes committed by one Indian upon another?... It is only those persons who come completely within our jurisdiction, who are subject to our laws, that we think of making citizens."
- Congressional Globe, 1st Session, 39th Congress, pt. 4, p. 2895. Howard additionally stated the word jurisdiction meant "the same jurisdiction in extent and quality as applies to every citizen of the United States now" and that the U.S. possessed a "full and complete jurisdiction" over the person described in the amendment.
- Elk v. Wilkins, 112 U.S. 94 (1884)
- Urofsky, Melvin I.; Finkelman, Paul (2002). A March of Liberty: A Constitutional History of the United States 1 (2nd ed.). New York, NY: Oxford University Press. ISBN 0-19-512635-1.
- United States v. Wong Kim Ark 169 U.S. 649 (1898)
- Rodriguez, C.M. (2009). "The Second Founding: The Citizenship Clause, Original Meaning, and the Egalitarian Unity of the Fourteenth Amendment" [PDF]. U. Pa. J. Const. L. 11: 1363–1475. Retrieved January 20, 2011.
- "14th Amendment: why birthright citizenship change 'can't be done'". Christian Science Monitor. August 10, 2010. Archived from the original on June 12, 2013. Retrieved June 12, 2013.
- U.S. Department of State (February 1, 2008). "Advice about Possible Loss of U.S. Citizenship and Dual Nationality". Retrieved April 17, 2009.
- For example, see Perez v. Brownell, 356 U.S. 44 (1958), overruled by Afroyim v. Rusk, 387 U.S. 253 (1967)
- Afroyim v. Rusk, 387 U.S. 253 (1967)
- Vance v. Terrazas, 444 U.S. 252 (1980)
- Slaughter-House Cases, 83 U.S. 36 (1873)
- Civil Rights Cases, 109 U.S. 3 (1883)
- e.g., United States v. Morrison, 529 U.S. 598 (2000)
- Saenz v. Roe, 526 U.S. 489 (1999)
- Gupta, Gayatri (2009). "Due process". In Folsom, W. Davis; Boulware, Rick. Encyclopedia of American Business. Infobase. p. 134.
- Allgeyer v. Louisiana, 169 U.S. 649 (1897)
- "Due Process of Law – Substantive Due Process". West's Encyclopedia of American Law. Thomson Gale. 1998.
- Lochner v. New York, 198 U.S. 45 (1905)
- Adkins v. Children's Hospital, 261 U.S. 525 (1923)
- Meyer v. Nebraska, 262 U.S. 390 (1923)
- "CRS Annotated Constitution". Cornell University Law School Legal Information Institute. Archived from the original on June 12, 2013. Retrieved June 12, 2013.
- Mugler v. Kansas, 123 U.S. 623 (1887)
- Holden v. Hardy, 169 U.S. 366 (1898)
- Muller v. Oregon, 208 U.S. 412 (1908)
- Wilson v. New, 243 U.S. 332 (1917)
- United States v. Doremus, 249 U.S. 86 (1919)
- West Coast Hotel v. Parrish, 300 U.S. 379 (1937)
- White, Bradford (2008). Procedural Due Process in Plain English. National Trust for Historic Preservation. ISBN 0-89133-573-0.
- See also Mathews v. Eldridge (1976).
- Griswold v. Connecticut, 381 U.S. 479 (1965)
- "Griswold v. Connecticut". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). January 1, 2000. Retrieved June 16, 2013.
- Roe v. Wade, 410 U.S. 113 (1973)
- "Roe v. Wade 410 U.S. 113 (1973) Doe v. Bolton 410 U.S. 179 (1973)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). January 1, 2000. Retrieved June 16, 2013.
- Planned Parenthood v. Casey, 505 U.S. 833 (1992)
- Casey, 505 U.S. at 845-846.
- Lawrence v. Texas, 539 U.S. 558 (2003)
- Marc Spindelman (June 1, 2004). "Surviving Lawrence v. Texas". Michigan Law Review. – via HighBeam Research (subscription required). Retrieved June 16, 2013.
- Caperton v. A.T. Massey Coal Co., 556 U.S. ___ (2009)
- Jess Bravin and Kris Maher (June 8, 2009). "Justices Set New Standard for Recusals". The Wall Street Journal. Retrieved June 9, 2009.
- Barron v. Baltimore, 32 U.S. 243 (1833)
- Leonard W. Levy. "Barron v. City of Baltimore 7 Peters 243 (1833)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 13, 2013.
- Foster, James C. (2006). "Bingham, John Armor". In Finkleman, Paul. Encyclopedia of American Civil Liberties. CRC Press. p. 145.
- Amar, Akhil Reed (1992). "The Bill of Rights and the Fourteenth Amendment". Yale Law Journal (The Yale Law Journal, Vol. 101, No. 6) 101 (6): 1193–1284. doi:10.2307/796923. JSTOR 796923.
- "Duncan v. Louisiana (Mr. Justice Black, joined by Mr. Justice Douglas, concurring)". Cornell Law School – Legal Information Institute. May 20, 1968. Retrieved April 26, 2009.
- Levy, Leonard (1970). Fourteenth Amendment and the Bill of Rights: The Incorporation Theory (American Constitutional and Legal History Series). Da Capo Press. ISBN 0-306-70029-8.
- 677 F.2d 957 (1982)
- "Minneapolis & St. Louis R. Co. v. Bombolis (1916)". Supreme.justia.com. May 22, 1916. Retrieved August 1, 2010.
- Goldstone 2011, pp. 20, 23–24.
- Failinger, Marie (2009). "Equal protection of the laws". In Schultz, David Andrew. The Encyclopedia of American Law. Infobase. pp. 152–53.
- Johnson, John W. (January 1, 2001). Historic U.S. Court Cases: An Encyclopedia. Routledge. pp. 446–47. ISBN 978-0-415-93755-9. Retrieved June 13, 2013.
- Vile, John R., ed. (2003). "Corporations". Encyclopedia of Constitutional Amendments, Proposed Amendments, and Amending Issues: 1789 - 2002. ABC-CLIO. p. 116.
- Strauder v. West Virginia, 100 U.S. 303 (1880)
- Yick Wo v. Hopkins, 118 U.S. 356 (1886)
- Plessy v. Ferguson, 163 U.S. 537 (1896)
- Abrams, Eve (February 12, 2009). "Plessy/Ferguson plaque dedicated". WWNO (University New Orleans Public Radio). Retrieved April 17, 2009.
- Berea College v. Kentucky, 211 U.S. 45 (1908)
- Oliver Wendell Holmes, Jr. "274 U.S. 200: Buck v. Bell". Cornell University Law School Legal Information Institute. Archived from the original on June 12, 2013. Retrieved June 12, 2013.
- Brown v. Board of Education, 347 U.S. 483 (1954)
- Patterson, James (2002). Brown v. Board of Education: A Civil Rights Milestone and Its Troubled Legacy (Pivotal Moments in American History). Oxford University Press. ISBN 0-19-515632-3.
- "Forced Busing and White Flight". Time. September 25, 1978. Retrieved June 17, 2009.
- Hernandez v. Texas, 347 U.S. 475 (1954)
- United States v. Virginia, 518 U.S. 515 (1996)
- Levy v. Louisiana, 361 U.S. 68 (1968)
- Gerstmann, Evan (1999). The Constitutional Underclass: Gays, Lesbians, and the Failure of Class-Based Equal Protection. University Of Chicago Press. ISBN 0-226-28860-9.
- Reed v. Reed, 404 U.S. 71 (1971)
- "Reed v. Reed 1971". Supreme Court Drama: Cases that Changed America. – via HighBeam Research (subscription required). January 1, 2001. Retrieved June 12, 2013.
- Craig v. Boren, 429 U.S. 190 (1976)
- Kenneth L. Karst (January 1, 2000). "Craig v. Boren 429 U.S. 190 (1976)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 16, 2013.
- Wesberry v. Sanders, 376 U.S. 1 (1964).
- Reynolds v. Sims, 377 U.S. 533 (1964).
- Epstein, Lee; Walker, Thomas G. (2007). Constitutional Law for a Changing America: Rights, Liberties, and Justice (6th ed.). Washington, D.C.: CQ Press. p. 775. ISBN 0-87187-613-2. "Wesberry and Reynolds made it clear that the Constitution demanded population-based representational units for the U.S. House of Representatives and both houses of state legislatures...."
- Shaw v. Reno, 509 U.S. 630 (1993)
- Aleinikoff, T. Alexander; Samuel Issacharoff (1993). "Race and Redistricting: Drawing Constitutional Lines after Shaw v. Reno". Michigan Law Review (Michigan Law Review, Vol. 92, No. 3) 92 (3): 588–651. doi:10.2307/1289796. JSTOR 1289796.
- Bush v. Gore, 531 U.S. 98 (2000)
- "Bush v. Gore". Encyclopaedia Britannica. Retrieved June 12, 2013.
- League of United Latin American Citizens v. Perry, 548 U.S. 399 (2006)
- Gilda R. Daniels (March 22, 2012). "Fred Gray: life, legacy, lessons". Faulkner Law Review. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- Walter Friedman (January 1, 2006). "Fourteenth Amendment". Encyclopedia of African-American Culture and History. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- Chin, Gabriel J. (2004). "Reconstruction, Felon Disenfranchisement, and the Right to Vote: Did the Fifteenth Amendment Repeal Section 2 of the Fourteenth?". Georgetown Law Journal 92: 259.
- Richardson v. Ramirez, 418 U.S. 24 (1974)
- "Sections 3 and 4: Disqualification and Public Debt". Caselaw.lp.findlaw.com. June 5, 1933. Retrieved August 1, 2010.
- "Pieces of History: General Robert E. Lee's Parole and Citizenship". Prologue Magazine (The National Archives) 37 (1). 2005.
- Goodman, Bonnie K. (2006). "History Buzz: October 16, 2006: This Week in History". History News Network. Retrieved June 18, 2009.
- "Chapter 157: The Oath As Related To Qualifications", Cannon's Precedents of the U.S. House of Representatives 6, January 1, 1936
- For more on Section 4 go to Findlaw.com
- "294 U.S. 330 at 354". Findlaw.com. Retrieved August 1, 2010.
- Liptak, Adam (July 24, 2011). "The 14th Amendment, the Debt Ceiling and a Way Out". The New York Times. Retrieved July 30, 2011. "In recent weeks, law professors have been trying to puzzle out the meaning and relevance of the provision. Some have joined Mr. Clinton in saying it allows Mr. Obama to ignore the debt ceiling. Others say it applies only to Congress and only to outright default on existing debts. Still others say the president may do what he wants in an emergency, with or without the authority of the 14th Amendment."
- "Our National Debt 'Shall Not Be Questioned,' the Constitution Says". The Atlantic. May 4, 2011.
- Sahadi, Jeanne. "Is the debt ceiling unconstitutional?". CNN Money. Retrieved January 2, 2013.
- Rosen, Jeffrey. "How Would the Supreme Court Rule on Obama Raising the Debt Ceiling Himself?". The New Republic. Retrieved July 29, 2011.
- Chemerinsky, Erwin (July 29, 2011). "The Constitution, Obama and raising the debt ceiling". Los Angeles Times. Retrieved July 30, 2011.
- "FindLaw: U.S. Constitution: Fourteenth Amendment, p. 40". Caselaw.lp.findlaw.com. Retrieved August 1, 2010.
- Katzenbach v. Morgan, 384 U.S. 641 (1966)
- Theodore Eisenberg (January 1, 2000). "Katzenbach v. Morgan 384 U.S. 641 (1966)". Encyclopedia of the American Constitution. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- City of Boerne v. Flores, 521 U.S. 507 (1997)
- Steven A. Engel (October 1, 1999). "The McCulloch theory of the Fourteenth Amendment: City of Boerne v. Flores and the original understanding of section 5". Yale Law Journal. – via HighBeam Research (subscription required). Retrieved June 12, 2013.
- "The Civil War And Reconstruction". Retrieved October 21, 2010.
- "Library of Congress, Thirty-Ninth Congress Session II". Retrieved May 11, 2013.
- Mount, Steve (January 2007). "Ratification of Constitutional Amendments". Retrieved February 24, 2007.
- Documentary History of the Constitution of the United States, Vol. 5. Department of State. pp. 533–543. ISBN 0-8377-2045-1.
- A Century of Lawmaking for a New Nation: U.S. Congressional Documents and Debates, 1774-1875. Library of Congress. p. 707.
- Chin, Gabriel J.; Abraham, Anjali (2008). "Beyond the Supermajority: Post-Adoption Ratification of the Equality Amendments". Arizona Law Review 50: 25.
- P.L. 2003, Joint Resolution No. 2; 4/23/03
- Goldstone, Lawrence (2011). Inherently Unequal: The Betrayal of Equal Rights by the Supreme Court, 1865-1903. Walker & Company. ISBN 978-0-8027-1792-4.
- Halbrook, Stephen P. (1998). Freedmen, the 14th Amendment, and the Right to Bear Arms, 1866-1876. Greenwood Publishing Group. ISBN 9780275963316. Retrieved March 29, 2013. at Questia
- Nelson, William E. The Fourteenth Amendment: from political principle to judicial doctrine (Harvard University Press, 1988) online edition
- Bogen, David S. (April 30, 2003). Privileges and Immunities: A Reference Guide to the United States Constitution. Greenwood Publishing Group. ISBN 9780313313479. Retrieved March 19, 2013.
- "Amendments to the Constitution of the United States" (PDF). GPO Access. Retrieved September 11, 2005. (PDF, providing text of amendment and dates of ratification)
- CRS Annotated Constitution: Fourteenth Amendment
- Fourteenth Amendment and related resources at the Library of Congress
- National Archives: Fourteenth Amendment | http://en.wikipedia.org/wiki/Fourteenth_Amendment_to_the_United_States_Constitution | 13 |
16 | Lines and angles constitute the foundation of GRE geometry; though you might have learned your triangles and squares first, all polygons are essentially made up of lines and angles. Here are the basics of lines and angles so you can nail those basic geometry questions.
An angle is formed by the union of two lines that share an endpoint (the vertex of an angle). The angle measurement corresponds to how far you have to rotate one of the lines to reach the other line.
Angles are measured in degrees, symbolized by the ° sign. No, that's not an exponent of 0, but it looks pretty close. A complete rotation has 360 degrees, so it makes sense that a circle has 360 degrees, and the four angles produced by two intersecting lines (seen below) add up to 360 degrees.
While a full revolution is 360 degrees, a half revolution (a.k.a. a straight angle) has 180 degrees, and a quarter revolution (a.k.a. right angle) has 90 degrees.
Obtuse and Acute
You may remember that “acute” angles are less than 90 degrees while obtuse angles are more than 90 degrees but less than 180.
Complementary and Supplementary
The terms complementary and supplementary refer to special pairs of angles. Complementary angles add up to 90 degrees and supplementary angles add up to 180 degrees.
When two lines intersect, we have two pairs of equal angles that are opposite each other.
In the diagram, angles 1 and 3 are equal and angles 4 and 2 are equal.
Lines that never intersect are called parallel lines. You may see this symbolized on the test as | |. Think of train tracks as parallel lines–they always run along each other and never converge.
You will almost always run into at least one problem that presents two parallel lines intersected by a third straight line known as a transversal. When this happens, eight angles are formed with special relationships to each other. Essentially, you can figure out all eight angles when given only one angle.
In the diagram above, angles A, D, E, and H are equal to each other while angles B, C, F and G are equal to each other. The sum of any two adjacent angles, like A and B or F and H, is always 180 degrees (since they are supplementary). For example, if angle A was 110 degrees, and I asked you to find the rest of the angles, you would immediately know that D, E, and H are 110 degrees while B, C, F, and G each has 70 degrees (180 – 110 = 70).
There are special names for these related angles in the diagram. In the diagram, angle pairs like A and H are alternate exterior angles, angle pairs like C and F are alternate interior angles, and angle pairs like A and E are corresponding angles.
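If it helps to see that 110/70 arithmetic spelled out, here is a tiny illustrative C++ snippet (not something you would ever need on test day) that derives both angle values from a single given angle:

// Given one angle formed by a transversal crossing two parallel lines,
// every other angle is either equal to it or supplementary to it.
#include <iostream>

int main()
{
    int angleA = 110;                // the given angle, in degrees
    int supplement = 180 - angleA;   // 70 degrees

    std::cout << "Angles A, D, E and H measure " << angleA << " degrees\n";
    std::cout << "Angles B, C, F and G measure " << supplement << " degrees\n";
    return 0;
}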
Two intersecting lines that form 90 degrees (a.k.a. a right angle) are called perpendicular lines. Simply put, when two lines form a cross or a “T,” they are perpendicular.
Angles and lines are used in diagrams throughout the test. Knowing these basics will help you immensely with even the most complicated geometric diagrams. | http://grockit.com/blog/gre/2011/02/28/gre-quantitative-lines-and-angles/ | 13 |
12 | |What is it?||What are the Myths?||How does it work?|
What is it?
The Central Processing Unit is the part of the computer that processes the instructions and data that it retrieves from RAM. The computer then sends the results back to RAM to be stored or delivered to output.
One major myth about the CPU is that it is the same thing as the motherboard. The motherboard is, in fact, the main board that sets the structure for the whole computer system. The CPU is thought of as the "brain" of the computer. The CPU takes care of the information and dumps it onto the bus (circuits that provide the electronic roadway for the information processed by the CPU). The bus and the CPU are located on this main board.
The CPU is made up of one or more integrated circuits. In a microcomputer the CPU is a single integrated circuit, called a microprocessor.
The microprocessor plugs into the motherboard and connects to the data bus.
Two Parts of the CPU
1. Arithmetic Logic Unit (ALU) performs arithmetic operations, such as addition and subtraction, and logical operations, such as comparing two numbers.
2. Control Unit directs & coordinates processing.
A computer accomplishes complex tasks by following many simple single steps called instructions. Each instruction tells the computer to perform one specific task.
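To picture how the control unit and the ALU work through instructions one at a time, here is a tiny made-up fetch-decode-execute loop in C++ (purely an illustration, not how any real processor or program is built):

// A toy instruction cycle: the "control unit" fetches and decodes each
// instruction, and the "ALU" carries out the arithmetic.
#include <iostream>

enum Opcode { ADD, SUB, HALT };

struct Instruction { Opcode op; int a; int b; };

int main()
{
    Instruction program[] = { {ADD, 2, 3}, {SUB, 9, 4}, {HALT, 0, 0} };

    for (int pc = 0; program[pc].op != HALT; ++pc)   // fetch the next instruction
    {
        int result = 0;
        switch (program[pc].op)                      // decode it
        {
            case ADD: result = program[pc].a + program[pc].b; break;  // execute in the ALU
            case SUB: result = program[pc].a - program[pc].b; break;
            default: break;
        }
        std::cout << "Instruction " << pc << " result: " << result << "\n";
    }
    return 0;
}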
The CPU speed is influenced by clock rate, word size, cache and instruction set size. A computer can have a highly rated processor, but if it has a slow hard drive or a small amount of RAM it may still be slow.
|1.||Power Up. Turn on the power switch. The power light comes on and power is sent to the main board & internal fan.|
|2.||Start boot program. Instructions stored in ROM are executed by the microprocessor.|
|3.||Power-on self test. Diagnostic tests are run on the system components.|
|4.||Load operating system. The operating system is copied from disk into RAM.|
|5.||Configuration check and customization. The microprocessor reads the configuration data and executes any customized start-up routines.|
|6.||The computer is then ready to accept commands and data entry.|
For the more advanced user click here to learn how to install CPUs.
|Mail comments to Josh and Patrick| | http://library.thinkquest.org/11309/data/process.htm | 13 |
42 | If you have progressed through the tutorial this far, you are now ready to program in 3D. However, 3D programming is not like modeling clay, where you simply move the clay with your hands and everything looks perfect.
3D programming is strictly mathematical, and you must understand the concepts of 3D mathematics before you can effectively program with them. Don't worry, though. It's nothing complex. You won't need any more math than it takes to program in C++, so you should already be far enough along to be able to understand this.
This lesson is a theoretical lesson. We will cover the practice involved in the next lesson. In this lesson we will cover coordinate systems and how they apply to Direct3D and creating a 3D scene.
Without understanding of the basic math of 3D, 3D programming would be impossible. And I don't mean doing college algebra all over again, but just understanding the concepts of 3D coordinates, how they work and the various things which might get in your way.
Of course, before you understand 3D coordinate systems, you need to understand Cartesian Coordinates.
The Cartesian Coordinate System might be better recognized if called a 2D coordinate system. In other words, it is a system of locating an exact point on a flat surface.
A point is defined as an exact position along an axis. If we wanted to know how far something has gone, we usually give an exact number, as in "Bob walked 12 meters". 12 meters is a distance along a single axis. We say that 0 is our starting point, and as Bob progresses, he moves farther and farther along this axis. This is a 1D coordinate system.
1D Coordinate System
When we look at this scenario from the side, as in the picture, we can see that as Bob continues walking toward the right of the screen, his distance travelled increases away from 0. We will call this '0' the origin, as it is where he started from. On the other side of the origin, we would have negative values instead of positive values.
However, what if he were then to turn 90 degrees and walk in a different direction? Truthfully, Bob would then be walking along a second axis, and we would diagram his path like this:
The Cartesian Coordinate System
Now that we have more than one axis, we give ourselves a way to identify them. The horizontal axis, along which Bob walked 12 meters, we will call the x-axis. The vertical axis we will call the y-axis.
Of course, this new axis, like the horizontal axis, also has an origin. It is the point where Bob stopped walking sideways and started walking up. Notice that the y-axis origin is also given the value of 0, and increases the farther Bob walks. (go Bob go...)
So now we have two axes (the x-axis and the y-axis), and each have their origins. Well, this is what forms our Cartesian Coordinate System. We can now locate any point along this surface (probably the ground in Bob's case). We can state Bob's exact position by saying how far he is off of each axis' origin, so we could say he is at (x, y) or (12, 4), 12 being his position on the x-axis and 4 being his position on the y-axis.
These two numbers are called coordinates, and are used to show how far an exact point is from the origin (or the '0' point on both axes).
Actually, the 3D Coordinate System is merely an extention to what we have been discussing. If we took Cartesian Coordinates and added a third axis (a z-axis) running perpendicular to both the x and y axes, we would have 3D coordinates. This is illustrated here.
The 3D Coordinate System
Like Cartesian Coordinates, 3D coordinates can be both positive and negative, depending on which direction the point is. However, instead of being written like Cartesian Coordinates, 3D coordinates are written with three numbers, like this: (x, y, z) or (12, 4, 15). This would indicate that Bob was somehow fifteen meters in the air. It could also be written (12, 4, -15). Perhaps this means he's lost in a dungeon somewhere.
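To make the idea concrete in code, a 3D coordinate is just three numbers stored together. Here is a minimal, hypothetical C++ struct (not Direct3D's own vertex type, purely an illustration):

// A bare 3D coordinate, for illustration only.
struct Point3D
{
    float x, y, z;
};

// Bob's position from the example: 12 along x, 4 along y, 15 up (or -15 for the dungeon).
Point3D bob = { 12.0f, 4.0f, 15.0f };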
Now let's cover how 3D coordinates are applied to games and game programming. If a point in a 3D coordinate system represents a position in space, then we can form an array of exact positions which will eventually become a 3D model. Of course, setting so many points would take up a lot of space in memory, so an easier and faster way has been employed. This method is set up using triangles.
Triangles, of course, are a very useful shape in just about any mathematical area. They can be formed to measure circles, they can be used to strengthen buildings, and they can be used to create 3D images. The reason we would want to use triangles is because triangles can be positioned to form just about any shape imaginable, as shown in these images:
Models Made From Triangles
Because of the useful nature of triangles when creating 3D models, Direct3D is designed solely around triangles and combining triangles to make shapes. To build a triangle, we use something called vertices.
Vertices is plural for vertex. A vertex is defined as an exact point in 3D space. It is defined by three values, x, y and z. In Direct3D, we add to that a little. We also include various properties of this point. And so we extend the definition to mean "the location and properties of an exact point in 3D space".
A triangle is made up of three vertices, each defined in your program in clockwise order. When coded, these three vertices form a flat surface, which can then be rotated, textured, positioned and modified as needed.
A Triangle Built From Vertices
The triangle shown in the above image is created by three points:
x = 0, y = 5, z = 1
x = 5, y = -5, z = 1
x = -5, y = -5, z = 1
You will notice that all the above vertices have a z-value of 1. This is because we aren't talking about a 3D object; we are talking about a triangle, which is a 2D object. We could change the z-values, but it would make no essential difference.
To make actual 3D objects, we will need to combine triangles. You can see how triangles are combined in the above diagram. To take a simple example, the cube is simply two triangles placed together to create one side. Each side is made up of identical triangles combined the same way.
However, defining the 3D coordinates of every triangle in your game multiple times is more than just tedious. It's ridiculously complex! There's just no need to get that involved (and you'll see what I mean in the next lesson).
Instead of defining each and every corner of every triangle in the game, all you need to do is create a list of vertices, which contain the coordinates and information of each vertex, as well as what order they go in.
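As a rough sketch of what such a list can look like (using a hypothetical Vertex struct rather than an actual Direct3D flexible vertex format), the triangle from earlier could be written as:

// Hypothetical vertex type: a position plus a 32-bit colour.
struct Vertex
{
    float x, y, z;
    unsigned int color;
};

Vertex triangle[] =
{
    {  0.0f,  5.0f, 1.0f, 0xffff0000 },   // top vertex, red
    {  5.0f, -5.0f, 1.0f, 0xff00ff00 },   // bottom-right vertex, green
    { -5.0f, -5.0f, 1.0f, 0xff0000ff },   // bottom-left vertex, blue
};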
A primitive is a single element in a 3D environment, be it a triangle, a line, a dot, or whatever. Following is a list of ways primitives can be combined to create 3D objects.
1. Point Lists
2. Line Lists
3. Line Strips
4. Triangle Lists
5. Triangle Strips
6. Triangle Fans
A Point List is a list of vertices that are shown as individual points on the screen. These can be useful for rendering 3D starfields, creating dotted lines, displaying locations on minimaps and so on. This diagram illustrates how a Point List is shown on the screen (without the labels, of course).
A Point List (6 Primitives)
A Line List is a list of vertices that create separate line segments between each odd-numbered vertex and the next vertex. These can be used for a variety of effects, including 3D grids, heavy rain, waypoint lines, and so on. This diagram illustrates how a Line List is shown on the screen (this is the same set of vertices as before).
A Line List (3 Primitives)
A Line Strip is similar to a line list, but differs in that all vertices in such a list are connected by line segments. This is useful for creating many wire-frame images such as wire-frame terrain, blades of grass, and other non-model-based objects. It is also very useful in debugging programs. This diagram illustrates how a Line Strip is shown on the screen.
A Line Strip (5 Primitives)
A Triangle List is a list of vertices where every group of three vertices is used to make a single, separate triangle. This can be used in a variety of effects, such as force-fields, explosions, objects being pieced together, etc. This diagram illustrates how a Triangle List is shown on the screen.
A Triangle List (2 Primitives)
A Triangle Strip is a list of vertices that creates a series of triangles connected to one another. This is the most-used method when dealing with 3D graphics. These are mostly used to create the 3D models for your game. This diagram illustrates how a Triangle Strip is shown on the screen. Notice that the first three vertices create a single triangle, and each vertex thereafter creates an additional triangle based on the previous two.
A Triangle Strip (4 Primitives)
A Triangle Fan is similar to a triangle strip, with the exception that all the triangles share a single vertex. This is illustrated in this diagram:
A Triangle Fan (4 Primitives)
There is a slight quirk in drawing primitives where only one side of the primitive is shown. It is possible to show both sides, but usually a model is completely enclosed, and you cannot see the inside of it. If the model is completely enclosed, only one side of each triangle need be drawn. After all, drawing both sides of a primitive would take twice as much time. You will see an example of this in the next couple of lessons.
A triangle primitive is only drawn when its vertices are given in a clockwise order. If you flip it around, it becomes counter-clockwise, and is therefore not shown.
Primitive Only Visible When Drawn Clockwise
There is an easy way (though tedious when you get into larger games) to show both sides of a primitive, which is to show the primitive twice, giving one primitive clockwise and the other counter-clockwise.
Primitive Visible When Drawn Either Way
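If you ever want to check the winding of a triangle yourself, one common trick (shown here as a plain C++ sketch, not a Direct3D API call) is to look at the sign of the signed area of its three points as seen on the screen:

// Returns true if three 2D points (in a y-up coordinate system) are ordered clockwise.
// A negative signed area means the winding is clockwise.
bool IsClockwise(float x1, float y1, float x2, float y2, float x3, float y3)
{
    float signedArea = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1);
    return signedArea < 0.0f;
}

Running this on the triangle from earlier — (0, 5), (5, -5), (-5, -5) — gives a negative signed area, which is why that vertex order counts as clockwise.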
Color is a rather simple part of 3D programming. However, even if you are very familiar with color spectrums and the physics of light, it would be good to know that Direct3D does not follow the laws of this universe exactly. To do so would be a nightmare on graphics hardware and the CPU. It's just too much, and so we'll just leave graphics like that to the Matrix and make our own laws that we can cope with.
Light, of course, is what allows you to see and differentiate between various objects around you. Direct3D mimics it with various mathematical algorithms performed by the graphics hardware. The image is then displayed on the screen appearing well lit. In this section we'll cover the mechanics of how Direct3D mimics the light we see in nature.
In the younger years of your education you may have learned the primary colors to be red, blue and yellow. This isn't actually the case. The colors are actually magenta, cyan and yellow. And why this useless technical detail? To understand this, you must understand the concept of subtractive and additive color.
The difference between these two types of color have to do with whether or not the color refers to the color of light or the color of an object. Subtractive color is the color of an object, and has the primary colors magenta, cyan and yellow. Additive color is the color of light, and has the primary colors red, green and blue.
In a beam of light, the more primary colors you add the closer you get to white. The colors add together to make white, and thus it is called additive color.
Additive Colors Add Up to White
Above you can see the primary colors of light combine to make white. However, if you look, you will also see that when you combine two of the colors, you get one of the primary subtractive colors (magenta, cyan or yellow). If we take a look at these subtractive colors, we'll see why this is.
Subtractive colors are essentially the opposite of additive colors. They consist of the light that is not reflected off the surface of an object. For example, a red object illuminated by a white light only reflects red light and absorbs green and blue light. If you look at the above image, you will see that green and blue combined make cyan, and so cyan was subtracted from the white light, resulting in red.
Subtractive Colors Subtract Out to Black
In graphics programming, you will always use the additive colors (red, green and blue), because monitors consist of light. However, when building a 3D engine, it is good to understand what makes objects look the colors they do.
By the way, this is why you find magenta, cyan and yellow in printers, and red, green and blue on screens.
If you want to really get into color, then following is an article which gives a thorough rundown of color and the physics of light. If you're thinking of the future and DirectX 10's nextgen games, I'd seriously recommend knowing your color well. There's much more to it than you'd think at first, and it makes a big difference in making a great game engine.
Anyway, here's the article.
Alpha coloring is an additional element to the red-green-blue color of light. When you include some Alpha into your color, the graphic appears semi-transparent, allowing you to see through the object somewhat. This is useful for creating a semi-transparent display for your game, having units cloak (but still be seen somewhat by allies), and numerous other things. I'm sure your imagination can run rampant for some time on this one.
Color in Direct3D comes in the form of a 32-bit variable which stores all the information about the color. This includes the primary colors (referred to as RGB for Red, Green and Blue) and the amount of Alpha in the color. Each of these is referred to as a channel, and each takes up 8 bits, as shown here:
Bit Layout of Color
Following is the code that defines the above colors:
DWORD Color_A = 0xff00ff00;
DWORD Color_B = 0x88ff00cc;
There are also two functions we can use to build these colors for us, in case we need to plug variables into these values.
DWORD Color_A = D3DCOLOR_XRGB(0, 255, 0);
DWORD Color_B = D3DCOLOR_ARGB(136, 255, 0, 204);
The function D3DCOLOR_ARGB() returns a DWORD filled with the proper values for the color you are building. If you don't want to bother with Alpha, then you can use the D3DCOLOR_XRGB() which does the exact same thing, but automatically fills the Alpha channel with 255.
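Under the hood, packing four 8-bit channels into one 32-bit value only takes a few bit shifts. A hand-rolled equivalent might look like the sketch below (an illustration of the idea, not the SDK's exact macro):

// Packs four 8-bit channels into one 32-bit colour laid out as 0xAARRGGBB.
unsigned int MakeARGB(unsigned int a, unsigned int r, unsigned int g, unsigned int b)
{
    return ((a & 0xff) << 24) | ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
}

// MakeARGB(136, 255, 0, 204) yields 0x88ff00cc, the same value as Color_B above.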
If you want to see an example of this, check out the example from Lesson 1 and 2, which clear the screen using the D3DCOLOR_XRGB() function.
I'm not going to cover everything about light here. I'll save that for a later lesson. For now, I just want to cover the basic light equation, as you will have to understand parts of it before you actually add lighting into your program.
Light in nature is a very complicated subject mathematically speaking. When the sun shines, almost everything is lit by it, even though the sun is not shining on a lot of what can be seen. This is because light bounces around an area thousands of times, hitting just about everything, whether the sun shines there or not. To further add to this equation, as the sunlight travels through space, some of it is reflected off dust particles, which scatter the light in a pattern far too complex to calculate. Even if a computer could calculate all this, it could not do so in real time.
Direct3D uses a system to mimic the light of a real-life environment. To do this, it breaks light down into three types of light that, when combined, closely approximate actual light. These three types of light are Diffuse Light, Ambient Light and Specular Light.
Diffuse Light is light that shines upon an object indirectly. This sphere is lit by diffuse lighting alone.
Later, you will learn about sources of light. This sphere is lit by one source, coming off from the left somewhere. The further the sphere curves away from the light, the less that portion is lit by the source.
Ambient Light is light that is considered to be everywhere. Unlike diffuse light, it has no source, and if used alone the sphere appears as a flat circle (because all parts are lit equally under this lighting). This sphere is the same sphere as last time, but this time has ambient lighting included to fill in the dark, unlit parts.
Diffuse and Ambient Lighting
This is sometimes referred to as Specular Highlight, because it highlights an object with a reflective color. This sphere is lit with Diffuse and Ambient Light, and has a Specular Highlight added to make it look more real.
Diffuse, Ambient and Specular Lighting
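Putting the three together, a heavily simplified combination for a single colour channel could be sketched like this (an illustration only — Direct3D's actual lighting equation has more terms and parameters):

// Very simplified lighting sum for one colour channel, for illustration only.
// nDotL is the cosine of the angle between the surface normal and the light direction.
float LightChannel(float ambient, float diffuse, float nDotL, float specular)
{
    float directional = nDotL > 0.0f ? nDotL : 0.0f;         // surfaces facing away get no diffuse light
    float lit = ambient + diffuse * directional + specular;  // add the three contributions
    return lit > 1.0f ? 1.0f : lit;                          // clamp at full brightness
}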
By now you should understand the basic underlying concepts of the third dimension, and how it is applied to game programming. Now let's go on and put all this theory into practice. In the next lesson, you will take what you know from this lesson and build a basic triangle.
Next Lesson: Drawing a Triangle
GO! GO! GO!
© 2006-2013 DirectXTutorial.com. All Rights Reserved.
14 | Unlike range and quartiles, the variance combines all the values in a data set to produce a measure of spread. The variance (symbolized by S²) and standard deviation (the square root of the variance, symbolized by S) are the most commonly used measures of spread.
We know that variance is a measure of how spread out a data set is. It is calculated as the average squared deviation of each number from the mean of a data set. For example, for the numbers 1, 2, and 3 the mean is 2 and the variance is 0.667.
[(1 - 2)² + (2 - 2)² + (3 - 2)²] ÷ 3 = 0.667
[sum of squared deviations from the mean] ÷ number of observations = variance
Variance (S²) = average squared deviation of values from mean
Calculating variance involves squaring deviations, so it does not have the same unit of measurement as the original observations. For example, lengths measured in metres (m) have a variance measured in metres squared (m²).
Taking the square root of the variance gives us the units used in the original scale and this is the standard deviation.
Standard deviation (S) = square root of the variance
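As a quick cross-check of the 1, 2, 3 example above, here is a small hypothetical C++ snippet (not part of the original text) that computes the population variance and standard deviation:

#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    std::vector<double> data = {1.0, 2.0, 3.0};

    double sum = 0.0;
    for (double x : data) sum += x;
    double mean = sum / data.size();                      // mean = 2

    double sumSq = 0.0;
    for (double x : data) sumSq += (x - mean) * (x - mean);
    double variance = sumSq / data.size();                // 0.667 (dividing by n, as in the text)
    double stdDev = std::sqrt(variance);                  // about 0.816

    std::cout << "variance = " << variance << ", standard deviation = " << stdDev << "\n";
    return 0;
}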
Standard deviation is the measure of spread most commonly used in statistical practice when the mean is used to calculate central tendency. Thus, it measures spread around the mean. Because of its close links with the mean, standard deviation can be greatly affected if the mean gives a poor measure of central tendency.
Standard deviation is also influenced by outliers; a single value can contribute substantially to the result. In that sense, the standard deviation is a good indicator of the presence of outliers. This makes standard deviation a very useful measure of spread for symmetrical distributions with no outliers.
Standard deviation is also useful when comparing the spread of two separate data sets that have approximately the same mean. The data set with the smaller standard deviation has a narrower spread of measurements around the mean and therefore usually has comparatively fewer high or low values. An item selected at random from a data set whose standard deviation is low has a better chance of being close to the mean than an item from a data set whose standard deviation is higher.
Generally, the more widely spread the values are, the larger the standard deviation is. For example, imagine that we have to compare two different sets of exam results from a class of 30 students: the first exam has marks ranging from 31% to 98%, the other ranges from 82% to 93%. Given these ranges, the standard deviation would be larger for the results of the first exam.
Standard deviation might be difficult to interpret in terms of how big it has to be in order to consider the data widely spread. How large the standard deviation needs to be before the data are considered widely spread depends on the size of the mean value of the data set. When you are measuring something that is in the millions, having measures that are "close" to the mean value does not have the same meaning as when you are measuring the weight of two individuals. For example, a measure of two large companies with a difference of $10,000 in annual revenues is considered pretty close, while the measure of two individuals with a weight difference of 30 kilograms is considered far apart. This is why, in most situations, it is useful to assess the size of the standard deviation relative to the mean of the data set.
Although standard deviation is less susceptible to extreme values than the range, standard deviation is still more sensitive than the semi-quartile range. If the possibility of high values (outliers) presents itself, then the standard deviation should be supplemented by the semi-quartile range.
When using standard deviation keep in mind the following properties.
When analysing normally distributed data, standard deviation can be used in conjunction with the mean in order to calculate data intervals.
If x̄ = mean, S = standard deviation and x = a value in the data set, then approximately 68% of the values lie within one standard deviation of the mean (x̄ - S < x < x̄ + S) and approximately 95% lie within two standard deviations of the mean (x̄ - 2S < x < x̄ + 2S).
The variance for a discrete variable made up of n observations is defined as: S² = [Σ(x - x̄)²] ÷ n
The standard deviation for a discrete variable made up of n observations is the positive square root of the variance and is defined as: S = √([Σ(x - x̄)²] ÷ n)
Use this step-by-step approach to find the standard deviation for a discrete variable: calculate the mean, subtract the mean from each observation, square each of these deviations, total the squared deviations, divide this total by the number of observations (this gives the variance), and take the positive square root of the variance (this gives the standard deviation).
A hen lays eight eggs. Each egg was weighed and recorded as follows:
60 g, 56 g, 61 g, 68 g, 51 g, 53 g, 69 g, 54 g.
|Weight (x)||(x - x̄)||(x - x̄)²|
The formulas for variance and standard deviation change slightly if observations are grouped into a frequency table. Squared deviations are multiplied by each frequency's value, and then the total of these results is calculated.
Thirty farmers were asked how many farm workers they hire during a typical harvest season. Their responses were:
4, 5, 6, 5, 3, 2, 8, 0, 4, 6, 7, 8, 4, 5, 7, 9, 8, 6, 7, 5, 5, 4, 2, 1, 9, 3, 3, 4, 6, 4
|Workers (x)||Tally||Frequency (f)||(xf)||(x - x̄)||(x - x̄)²||(x - x̄)²f|
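As a rough cross-check (a hypothetical C++ sketch, not part of the original example), the frequency-table method can be applied to the farm-worker data above, where each distinct number of workers is paired with how often it occurs:

#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

int main()
{
    // Each pair is (number of workers hired, frequency among the 30 farmers).
    std::vector<std::pair<double, double>> grouped = {
        {0, 1}, {1, 1}, {2, 2}, {3, 3}, {4, 6}, {5, 5}, {6, 4}, {7, 3}, {8, 3}, {9, 2}};

    double n = 0.0, sum = 0.0;
    for (const auto& g : grouped) { n += g.second; sum += g.first * g.second; }
    double mean = sum / n;                                          // 150 / 30 = 5 workers

    double sumSq = 0.0;
    for (const auto& g : grouped) sumSq += (g.first - mean) * (g.first - mean) * g.second;
    double stdDev = std::sqrt(sumSq / n);                           // about 2.25 workers

    std::cout << "mean = " << mean << ", standard deviation = " << stdDev << "\n";
    return 0;
}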
220 students were asked the number of hours per week they spent watching television. With this information, calculate the mean and standard deviation of hours spent watching television by the 220 students.
|Hours||Number of students|
|10 to 14||2|
|15 to 19||12|
|20 to 24||23|
|25 to 29||60|
|30 to 34||77|
|35 to 39||38|
|40 to 44||8|
Note: In this example, you are using a continuous variable that has been rounded to the nearest integer. The group of 10 to 14 is actually 9.5 to 14.499 (as the 9.5 would be rounded up to 10 and the 14.499 would be rounded down to 14). The interval has a length of 5 but the midpoint is 12 (9.5 + 2.5 = 12).
6,560 = (2 X 12 + 12 X 17 + 23 X 22 + 60 X 27 + 77 X 32 + 38 X 37 + 8 X 42)
Then, calculate the numbers for the xf, (x - x̄), (x - x̄)² and (x - x̄)²f formulas.
Add them to the frequency table below.
|Hours||Midpoint (x)||Frequency (f)||xf||(x - x̄)||(x - x̄)²||(x - x̄)²f|
|10 to 14||12||2||24||-17.82||317.6||635.2|
|15 to 19||17||12||204||-12.82||164.4||1,972.8|
|20 to 24||22||23||506||-7.82||61.2||1,407.6|
|25 to 29||27||60||1,620||-2.82||8.0||480.0|
|30 to 34||32||77||2,464||2.18||4.8||369.6|
|35 to 39||37||38||1,406||7.18||51.6||1,960.8|
|40 to 44||42||8||336||12.18||148.4||1,187.2|
Use the information found in the table above to find the standard deviation.
Note: During calculations, when a variable is grouped by class intervals, the midpoint of the interval is used in place of every other value in the interval. Thus, the spread of observations within each interval is ignored. This makes the standard deviation always less than the true value. It should, therefore, be regarded as an approximation.
Assuming the frequency distribution is approximately normal, calculate the interval within which 95% of the previous example's observations would be expected to occur.
x̄ = 29.82, S = 6.03
Calculate the interval using the following formula: x̄ - 2S < x < x̄ + 2S
29.82 - (2 X 6.03) < x < 29.82 + (2 X 6.03)
29.82 - 12.06 < x < 29.82 + 12.06
17.76 < x < 41.88
This means that there is about a 95% certainty that a student will spend between 18 hours and 42 hours per week watching television. | http://www.statcan.gc.ca/edu/power-pouvoir/ch12/5214891-eng.htm | 13 |
Although celestial bodies of the inner solar system are predominantly composed of silicates and metals, surprisingly there are places where temperatures become sufficiently low that ice can exist on the surface.
Perhaps the most unlikely place to find ice is on the surface of Mercury, where the side that faces the sun can reach temperatures of up to 700K (400 degrees C). The temperatures on Mercury change from day to night. Before sunrise the temperature is as low as 100 K (-170 degrees C) and by noon it will rise to about 700 K (400 degrees C). These wide variations are due to Mercury's rotation and lack of atmosphere. During the day the temperature is so high that it could melt some metals, but during the night the temperature drops well below freezing.
This being said, the areas that are the coldest on Mercury are near the poles at the bottoms of craters. It is here that Earth-based radar imaging of Mercury has found about 20 circular areas of high radar reflectivity. The strength and polarization of these radar echoes-very different from the rest of Mercury's rocky surface-are similar to the radar characteristics of the south polar cap of Mars and the icy Galilean satellites, prompting researchers to suggest that Mercury's radar-reflective areas may be deposits of water ice or other volatile material.
So how, one might ask, given its proximity to the sun, low gravity and high surface temperatures, might ice survive on a planet with temperatures as wide-ranging as Mercury?
The answer lies in Mercury's lack of atmosphere. Water ice on the surface is directly exposed to a vacuum, leading to rapid escape unless it is extremely cold at all times and never exposed to sunlight. The only places on Mercury where such conditions might exist are within craters near the poles. Unlike the earth, where the 23.5 degree tilt of our spin axis gives us the seasons, Mercury's spin axis is barely tilted at all - only 0.1 degree. Therefore, the strength of solar illumination on the surface does not vary, regardless of where Mercury is in its orbit.
Theoretical studies predict that typical craters at Mercury's poles may contain areas that never get warmer than 100 K and that water ice in the polar craters could have remained stable since the creation of the solar system. If this is the case, then ice may have originated from the insides of falling comets and meteorites that became trapped at the poles over billions of years.
An alternative theory to water ice is that the polar deposits may not be water ice at all, but rather some other material such as sulfur, which could have sublimated from minerals in surface rocks over the millennia to become trapped at the poles.
These hypotheses, among many others will be tested as part of the Messenger (MErcury Surface, Space ENvironment, GEochemistry, and Ranging) mission. The Messenger spacecraft, designed and built by the Applied Physics Laboratory (APL) at Johns Hopkins University, in conjunction with NASA, is en route to Mercury and will rely on several flybys of Venus before reaching the inner solar system and Mercury's orbit. Messenger's payload includes a Gamma Ray and Neutron Spectrometer (GRNS), which will help determine if hydrogen exists in the polar deposits, which would indicate the presence of water ice. Another instrument on the spacecraft, called the Energetic Particle and Plasma Spectrometer (EPPS), will help detect if there is sulfur ice in these polar deposits.
We are just beginning to understand how ices contribute to the formation and evolution of planets, moons and small bodies; however, we still have much to learn about the unique and fascinating role of ice in the solar system.
Understanding if and what types of ice may exist on Mercury is the beginning of a profound source of study about the possible origins of the inner solar system, and what it might mean for scientific discovery in the future.
Source: Prockter, Louise. Ice in the Solar System. Johns Hopkins APL Technical Digest, Volume 26, number 2 (2005).
Last Updated: 7 February 2011 | http://solarsystem.nasa.gov/scitech/display.cfm?ST_ID=1249 | 13 |
14 | The Convention on Biological Diversity is probably the most all-encompassing international agreement ever adopted. It seeks to conserve the diversity of life on Earth at all levels - genetic, population, species, habitat, and ecosystem - and to ensure that this diversity continues to maintain the life support systems of the biosphere overall. It recognizes that setting social and economic goals for the use of biological resources and the benefits derived from genetic resources is central to the process of sustainable development, and that this in turn will support conservation.
Achieving the goals of the Convention will require progress on many fronts. Existing knowledge must be used more effectively; a deeper understanding of human ecology and environmental effects must be gained and communicated to those who can stimulate and shape policy change; environmentally more benign practices and technologies must be applied; and unprecedented technical and financial cooperation at international level is needed.
International environmental agreements
Throughout history human societies have established rules and customs to keep the use of natural resources within limits in order to avoid long-term damage to the resource. Aspects of biodiversity management have been on the international agenda for many years, although early international environmental treaties were primarily concerned with controlling the excess exploitation of particular species.
The origins of modern attempts to manage global biological diversity can be traced to the United Nations Conference on Human Environment held in Stockholm in 1972, which explicitly identified biodiversity conservation as a priority. The Action Plan in Programme Development and Priorities adopted in 1973 at the first session of the Governing Council of UNEP identified the “conservation of nature, wildlife and genetic resources” as a priority area. The international importance of conservation was confirmed by the adoption, in the same decade, of the Convention on Wetlands (1971), the World Heritage Convention (1972), the Convention on International Trade in Endangered Species (1973), and the Convention on Migratory Species (1979) as well as various regional conventions.
Making the connections
By the 1980s, however, it was becoming apparent that traditional conservation alone would not arrest the decline of biological diversity, and new approaches would be needed to address collective failure to manage the human environment and to achieve equitable human development. Important declarations throughout the 1980s, such as the World Conservation Strategy (1980) and the resolution of the General Assembly of the United Nations on the World Charter for Nature (1982), stressed the new challenges facing the global community. In 1983 the General Assembly of the United Nations approved the establishment of a special independent commission to report on environment and development issues, including proposed strategies for sustainable development. The 1987 report of this World Commission on Environment and Development, entitled Our Common Future (also known as the `Brundtland Report'), argued that “the challenge of finding sustainable development paths ought to provide the impetus - indeed the imperative - for a renewed search for multilateral solutions and a restructured system of cooperation. These challenges cut across the divides of national sovereignty, of limited strategies for economic gain, and of separated disciplines of science”.
A growing consensus was emerging among scientists, policy-makers and the public, that the biosphere had to be seen as a single system, and that its conservation required multilateral action, since global environmental problems cannot by definition be addressed in isolation by individual States, or even by regional groupings.
By the end of the 1980s, international negotiations were underway that would lead to the United Nations Conference on Environment and Development (the `Earth Summit', or UNCED), held in Rio de Janeiro in June 1992. At this pivotal meeting, Agenda 21 (the `Programme of Action for Sustainable Development'), the Rio Declaration on Environment and Development, and the Statement of Forest Principles, were adopted, and both the United Nations Framework Convention on Climate Change and the Convention on Biological Diversity were opened for signature.
Financial resources for global environmental protection
During the same period there was an increasing interest in international mechanisms for environmental funding. With the debt crisis, commercial flows for development had become scarce, and the role of multilateral assistance had assumed greater importance in discussions on financial flows and debt rescheduling. Simultaneously, concern with new funding for environmental issues was growing - the Brundtland Report argued for a significant increase in financial support from international sources; the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer established a financial mechanism to provide financial and technical assistance to eligible Parties for the phasing out of chlorofluorocarbons (CFCs); and the concept of debt-for-nature swaps, that would promote `win-win' situations allowing developing countries to ease their debt burdens and finance environmental protection, was being examined.
A number of proposals for funds and mechanisms were made. Donor country readiness to increase the supply of funds was low and their willingness to support new international agencies even lower, but nevertheless recognition of the principle that additional environment-related funding would have to be provided to developing countries was emerging. During 1989 and 1990 discussions took place within the framework of the World Bank's Development Committee on a new funding mechanism for the environment. At the end of 1990 agreement was reached on the establishment of the Global Environment Facility under a tripartite agreement between the World Bank, UNDP and UNEP. The GEF would be a pilot initiative for a three-year period (1991-1994) to promote international cooperation and to foster action to protect the global environment. The grants and concessional funds disbursed would complement traditional development assistance by covering the additional costs (also known as `agreed incremental costs') incurred when a national, regional or global development project also targets global environmental objectives.
The GEF was given four focal areas, one of which was to be biological diversity.1
One of the first initiatives taken under the pilot phase was to support preparation of Biodiversity Country Studies in twenty-four developing countries and countries in transition. The primary objective of the Biodiversity Country Studies was to gather and analyse the data required to drive forward the process of developing national strategies, plans, or programmes for the conservation and sustainable use of biological diversity and to integrate these activities with other relevant sectoral or cross-sectoral plans, programs, or policies. This anticipated the provisions of key articles of the Convention on Biological Diversity, in particular the requirements in Article 6 for each country to have a national biodiversity strategy and to integrate the conservation and sustainable use of biodiversity into all sectors of national planning and in Article 7 to identify components of biological diversity important for its conservation and sustainable use.
The negotiation of the Convention on Biological Diversity
The World Conservation Union (IUCN) had been exploring the possibilities for a treaty on the conservation of natural resources, and between 1984 and 1989 had prepared successive drafts of articles for inclusion in a treaty. The IUCN draft articles concentrated on the global action needed to conserve biodiversity at the genetic, species and ecosystem levels, and focused on in-situ conservation within and outside protected areas. It also included the provision of a funding mechanism to share the conservation burden between the North and the South.
In 1987 the Governing Council of UNEP established an Ad Hoc Working Group of Experts on Biological Diversity to investigate “the desirability and possible form of an umbrella convention to rationalise current activities in this field, and to address other areas which might fall under such a convention”.
The Group of Experts concluded that while existing global and regional conventions addressed different aspects of biological diversity, the specific focus and mandates of these conventions did not constitute a regime that could ensure global conservation of biological diversity. On the other hand, it also concluded that the development of an umbrella agreement to absorb or consolidate existing conventions was legally and technically impossible. By 1990 the Group had reached a consensus on the need for a new global treaty on biological diversity, in the form of a framework treaty building on existing conventions.
The scope of such a convention was broadened to include all aspects of biological diversity, including in-situ conservation of wild and domesticated species, sustainable use of biological resources, access to genetic resources and to relevant technology, including biotechnology, access to benefits derived from such technology, safety of activities related to living modified organisms, and provision of new and additional financial support.
In February 1991 the Group of Experts became the Intergovernmental Negotiating Committee for a Convention on Biological Diversity (INC). The INC held seven negotiating sessions, aiming to have the Convention adopted in time for it to be signed by States at the Earth Summit in June 1992.
The relationship between the objectives of the Convention and issues relating to trade, to agriculture and to the emerging biotechnology sector were key issues in the minds of the negotiators. Part of the novelty of the Convention on Biological Diversity lies in the recognition that, to meet its objectives, the Convention would need to make sure that these objectives were acknowledged and taken account of by other key legal regimes. These included the trade regime that would enter into force in 1994 under the World Trade Organization; the FAO Global System on Plant Genetic Resources, in particular the International Undertaking on Plant Genetic Resources adopted in 1983; and the United Nations Convention on the Law of the Sea which was concluded in 1982 and would enter into force in 1994.
Those involved in negotiating the Convention on Biological Diversity, as well as those involved in the parallel negotiations on the United Nations Framework Convention on Climate Change, were consciously developing a new generation of environmental conventions. These conventions recognized that the problems they sought to remedy arose from the collective impacts of the activities of many major economic sectors and from trends in global production and consumption. They also recognized that, to be effective, they would need to make sure that the biodiversity and climate change objectives were taken into account in national policies and planning in all sectors, national legislation and relevant international legal regimes, the operations of relevant economic sectors, and by citizens of all countries through enhanced understanding and behavioural changes.
The text of the Convention was adopted in Nairobi on 22 May 1992, and between 5 and 14 June 1992 the Convention was signed in Rio de Janeiro by the unprecedented number of 156 States and one regional economic integration organization (the European Community). The early entry into force of the Convention only 18 months later, on 29 December 1993, was equally unprecedented, and by August 2001 the Convention had 181 Contracting Parties (Annex 2 and Map 18).
THE OBJECTIVES AND APPROACH OF THE CONVENTION
Objectives of the Convention
- Conservation of biological diversity
- Sustainable use of components of biological diversity
- Fair and equitable sharing of the benefits arising out of the use of genetic resources
The objectives of the Convention on Biological Diversity are “the conservation of biological diversity, the sustainable use of its components, and the fair and equitable sharing of the benefits arising out of the utilisation of genetic resources” (Article 1). These are translated into binding commitments in its normative provisions, contained in Articles 6 to 20.
A central purpose of the Convention on Biological Diversity, as with Agenda 21 and the Convention on Climate Change, is to promote sustainable development, and the underlying principles of the Convention are consistent with those of the other `Rio Agreements'. The Convention stresses that the conservation of biological diversity is a common concern of humankind, but recognizes that nations have sovereign rights over their own biological resources, and will need to address the overriding priorities of economic and social development and the eradication of poverty.
The Convention recognizes that the causes of the loss of biodiversity are diffuse in nature, and mostly arise as a secondary consequence of activities in economic sectors such as agriculture, forestry, fisheries, water supply, transportation, urban development, or energy, particularly activities that focus on deriving short-term benefits rather than long-term sustainability. Dealing with economic and institutional factors is therefore key to achieving the objectives of the Convention. Management objectives for biodiversity must incorporate the needs and concerns of the many stakeholders involved, from local communities upward.
A major innovation of the Convention is its recognition that all types of knowledge systems are relevant to its objectives. For the first time in an international legal instrument, the Convention recognises the importance of traditional knowledge - the wealth of knowledge, innovations and practices of indigenous and local communities that are relevant for the conservation and sustainable use of biological diversity. It calls for the wider application of such knowledge, with the approval and involvement of the holders, and establishes a framework to ensure that the holders share in any benefits that arise from the use of such traditional knowledge.
The Convention therefore places less emphasis on a traditional regulatory approach. Its provisions are expressed as overall goals and policies, with specific action for implementation to be developed in accordance with the circumstances and capabilities of each Party, rather than as hard and precise obligations. The Convention does not set any concrete targets, and there are no lists or annexes relating to sites or protected species; the responsibility for determining how most of its provisions are to be implemented at the national level thus falls to the individual Parties themselves.
INSTITUTIONAL STRUCTURE OF THE CONVENTION
The Convention establishes the standard institutional elements of a modern environmental treaty: a governing body, the Conference of the Parties; a Secretariat; a scientific advisory body; a clearing-house mechanism and a financial mechanism. Collectively, these translate the general commitments of the Convention into binding norms or guidelines, and assist Parties with implementation. The rôles of these institutions are summarised here and discussed in more detail in chapter 3.
Because the Convention is more than a framework treaty, many of its provisions require further collective elaboration in order to provide a clear set of norms to guide States and stakeholders in their management of biodiversity. Development of this normative basis centres around decisions of the Conference of the Parties (COP), as the governing body of the Convention process. The principal function of the COP is to regularly review implementation of the Convention and to steer its development, including establishing such subsidiary bodies as may be required. The COP meets on a regular basis and held five meetings in the period 1994 to 2000. At its fifth meeting (2000) the COP decided that it would henceforth meet every two years.
The Subsidiary Body on Scientific, Technical and Technological Advice (SBSTTA) is the principal subsidiary body of the COP. Its mandate is to provide assessments of the status of biological diversity, assessments of the types of measures taken in accordance with the provisions of the Convention, and advice on any questions that the COP may put to it. SBSTTA met five times in the period 1995 to 2000 and, in the future, will meet twice in each two-year period between meetings of the COP.
The principal functions of the Secretariat are to prepare for and service meetings of the COP and other subsidiary bodies of the Convention, and to coordinate with other relevant international bodies. The Secretariat is provided by UNEP and is located in Montreal, Canada.
The Convention provides for the establishment of a clearing-house mechanism to promote and facilitate technical and scientific cooperation (Article 18). A pilot phase of the clearing-house mechanism took place from 1996 to 1998 and, following evaluation of this, the COP has approved a clearing-house mechanism strategic plan and a programme of work until 2004.
The Convention establishes a financial mechanism for the provision of resources to developing countries for the purposes of the Convention. The financial mechanism is operated by the Global Environment Facility (GEF) and functions under the authority and guidance of, and is accountable to, the COP. GEF activities are implemented by the United Nations Development Programme (UNDP), UNEP and the World Bank. Under the provisions of the Convention, developed country Parties undertake to provide `new and additional financial resources to enable developing country Parties to meet the agreed full incremental cost of implementing the obligations of the Convention' (Article 20) and, in addition to the provision of resources through the GEF, these Parties may also provide financial resources through bilateral and multilateral channels.
The COP is able, if it deems it necessary, to establish inter-sessional bodies and meetings to carry out work and provide advice between ordinary meetings of the COP. Those open-ended meetings that have been constituted so far include:
- Open-ended Ad Hoc Working Group on Biosafety (met six times from 1996 to 1999 - see below)
- Workshop on Traditional Knowledge and Biological Diversity (met in 1997)
- Intersessional Meeting on the Operations of the Convention (ISOC) (met in 1999)
- Ad Hoc Working Group on Article 8(j) and Related Provisions (met in 2000, will meet again in 2002)
- Ad Hoc Open-ended Working Group on Access and Benefit Sharing (will meet in 2001)
- Meeting on the Strategic Plan, National Reports and Implementation of the Convention (MSP) (will meet in 2001)
Figure 2.1 Institutions of the Convention
Cartagena Protocol on Biosafety
The Convention requires the Parties to “consider the need for and modalities of a protocol setting out appropriate procedures, including, in particular, advance informed agreement, in the field of the safe transfer, handling and use of any living modified organism resulting from biotechnology that may have adverse effect on the conservation and sustainable use of biological diversity” (Article 19(3)).
At its second meeting, the COP established a negotiating process and an Ad Hoc Working Group on Biosafety that met six times between 1996 and 1999 to develop a draft protocol. The draft submitted by the Working Group was considered by an Extraordinary Meeting of the COP held in Cartagena, Colombia in February 1999 and in Montreal, Canada in January 2000, and on 29 January 2000 the text of the Cartagena Protocol on Biosafety to the Convention on Biological Diversity was adopted. The Protocol was opened for signature during the fifth meeting of the COP in May 2000 where it was signed by 68 States. The number of signatures had risen to 103 by 1 August 2001, and five States had ratified the Protocol. It will enter into force after the fiftieth ratification.
The COP will serve as the meeting of the Parties to the Protocol. The meetings will however be distinct, and only Parties to the Convention who are also Parties to the Protocol may take decisions under the Protocol (States that are not a Party to the Convention cannot become Party to the Protocol). Pending the entry into force of the Protocol, an Intergovernmental Committee for the Cartagena Protocol (ICCP) has been established to undertake the preparations necessary for the first meeting of the Parties. The first meeting of the Intergovernmental Committee was held in Montpellier, France in December 2000 and the second in Nairobi, Kenya in September-October 2001.
THE DECISION-MAKING PROCESS
The activities of the COP have been organized through programmes of work that identify the priorities for future periods. The first medium-term programme of work (1995 to 1997) saw a focus on developing the procedures and modus operandi of the institutions, determining priorities, supporting national biodiversity strategies, and developing guidance to the financial mechanism. At its fourth meeting, the COP adopted a programme of work for its fifth, sixth and seventh meetings (1999-2004), and, at its fifth meeting, approved a longer-term programme of work for SBSTTA, and began the development of a strategic plan for the Convention.
The following are the key steps in the decision-making process.
The programme of work establishes a timetable indicating when the COP will consider in detail biological themes or ecosystems, or specific provisions of the Convention contained in the operative Articles. In addition to such ecosystem based programmes, the COP has addressed a number of key substantive issues in a broadly comprehensive manner. Such issues are collectively known as `cross-cutting issues', and these have an important rôle to play in bringing cohesion to the work of the Convention by linking the thematic programmes.
Submissions and Compilation of Information
The procedures by which the COP comes to adopt its decisions are broadly similar in each case. Firstly, current activities are reviewed to identify synergies and gaps within the existing institutional framework, or an overview of the state of knowledge on the issue under examination is developed. At the same time, Parties, international organizations, specialist scientific and non-governmental organizations are invited to provide information, such as reports or case studies. This review mechanism is coordinated by the Secretariat, supported in some cases by informal inter-agency task forces or liaison groups of experts.
Preparation of synthesis
Current ecosystem themes
Current cross-cutting issues
- Marine and coastal biological diversity
- Forest biological diversity
- Biological diversity of inland water ecosystems
- Agricultural biological diversity
- Biological diversity of dry and sub-humid lands
- Mountain ecosystems (to be considered at COP-7 in 2004)
- Identification, monitoring and assessment of biological diversity, and development of indicators
- Access to genetic resources
- Knowledge, innovations and practices of indigenous and local communities
- Sharing of benefits arising from the utilisation of genetic resources
- Intellectual property rights
- The need to address a general lack of taxonomic capacity worldwide
- Alien species that threaten ecosystems, habitats or species
- Sustainable use, including tourism
- Protected areas (to be considered at COP-7 in 2004)
- Transfer of technology and technology cooperation (to be considered at COP-7 in 2004).
The Secretariat then prepares a preliminary synthesis of these submissions for consideration by SBSTTA. Where appropriate the Secretariat may use a liaison group to assist with this. In other cases SBSTTA may have established an ad hoc technical expert group, with members drawn from rosters of experts nominated by Parties, to assist with the preparation of the synthesis. Where appropriate, the Secretariat may also identify relevant networks of experts and institutions, and coordinate their input to the preparation of the synthesis.
Scientific, Technical or Technological Advice
On the basis of the work of the Secretariat, of any ad hoc technical expert group, and the findings of specialist meetings such as the Global Biodiversity Forum, SBSTTA will assess the status and trends of the biodiversity of the ecosystem in question or the relationship of the cross-cutting issue to the implementation of the Convention and develop its recommendation to the COP accordingly.
Supplementary Preparations for the COP
The advice of SBSTTA may be complemented by the work of the Secretariat in the inter-sessional period between the meeting of the SBSTTA and that of the COP. Such work may comprise issues not within the mandate of the SBSTTA, such as financial and legal matters, development of guidance to the financial mechanism, or relations with other institutions and processes that could contribute to implementation of the future decision of the COP.
The COP considers the recommendations of the SBSTTA and any other advice put before it. It will then advise Parties on the steps they should take to address the issue, in light of their obligations under the Convention. It may also establish a process or programme to develop the issue further. Such a programme would establish goals and identify the expected outcomes, including a timetable for these and the means to achieve them. The types of output to be developed could include: guidelines, codes of conduct, manuals of best practice, guidance for the institutions of the Convention, criteria, and so forth. The programme would proceed to develop these products, under the guidance of SBSTTA, and report results to the COP for review.
OBLIGATIONS ON PARTIES TO THE CONVENTION
The Convention constitutes a framework for action that will take place mainly at the national level. It places few precise binding obligations upon Parties, but rather provides goals and guidelines, and these are further elaborated by decisions of the COP. Most of the commitments of Parties under the Convention are qualified, and their implementation will depend upon the particular national circumstances and priorities of individual Parties, and the resources available to them. Nevertheless, Parties are obliged to address the issues covered by the Convention, the chief of which are outlined in the following sections.
Article 6: National strategies and plans
National biodiversity strategies and action plans
For most Parties, developing a national biodiversity strategy will typically involve:
- establishing the institutional framework for developing the strategy, including designating leadership and ensuring a participative approach
- allocating or obtaining financial resources for the strategy process
- assessing the status of biological diversity within its jurisdiction
- articulating and debating the vision and goals for the strategy through a national dialogue with relevant stakeholders
- comparing the actual situation to the objectives and targets
- formulating options for action that cover key issues identified
- establishing criteria and priorities to help choose from among options
- matching actions and objectives
Developing and implementing national biodiversity action plans
- assigning roles and responsibilities
- agreeing the tools and approaches to be used
- establishing timeframes and deadlines for completion of tasks
- obtaining the budget
- agreeing indicators and measurable targets against which progress can be assessed
- determining reporting responsibilities, intervals and formats
- establishing procedures for incorporating lessons learned into the revision and updating of the strategy
The implementation of the Convention requires the mobilisation of both information and resources at the national level. As a first step, the Convention requires Parties to develop national strategies, plans or programmes for the conservation and sustainable use of biodiversity, or to adapt existing plans or programmes for this purpose (Article 6(a)). This may require a new planning process, or a review of existing environmental management or other national plans.
The Convention also requires Parties to integrate conservation and sustainable use of biodiversity into relevant sectoral or cross-sectoral plans, programmes and policies, as well as into national decision-making (Article 6(b)). This is clearly a more complex undertaking, requiring an assessment of the impacts of other sectors on biodiversity management. It will also require coordination among government departments or agencies. A national biodiversity planning process can identify the impacts and opportunities for integration.
Given the importance of stakeholder involvement in the implementation of the Convention, national planning processes should provide plenty of scope for public consultation and participation. The COP has recommended the guidance for the development of national strategies found in: Guidelines for Preparation of Biodiversity Country Studies (UNEP) and National Biodiversity Planning: Guidelines Based on Early Country Experiences (World Resources Institute, UNEP and IUCN). The financial mechanism has supported 125 countries in the preparation of their national biodiversity strategies and action plans (see chapter 3).
Article 7: Identification and monitoring of biodiversity
In contrast to some previous international or regional agreements on conservation, the Convention does not contain an internationally agreed list of species or habitats subject to special measures of protection. This is in line with the country-focused approach of the Convention. Instead, the Convention requires Parties to identify for themselves components of biodiversity important for conservation and sustainable use (Article 7).
Information provides the key for the implementation of the Convention, and Parties will require a minimum set of information in order to be able to identify national priorities. Whilst it contains no lists, the Convention does indicate, in Annex I, the types of species and ecosystems that Parties might consider for particular attention (see Box). Work is also underway within the Convention to elaborate Annex I in order to assist Parties further.
Indicative categories to guide Parties in the identification and monitoring of biodiversity
Ecosystems and habitats
- with high diversity, large numbers of endemic or threatened species, or wilderness;
- required by migratory species
- of social, economic, cultural or scientific importance
- representative, unique or associated with key evolutionary or other biological processes
Species and communities
- wild relatives of domesticated or cultivated species
- of medicinal, agricultural or other economic value
- of social, scientific or cultural importance
- of importance for research into the conservation and sustainable use of biological diversity, such as indicator species
Described genomes or genes of social, scientific or economic importance
Parties are also required to monitor important components of biodiversity, and to identify processes or activities likely to have adverse effects on biodiversity. The development of indicators may assist Parties in monitoring the status of biological diversity and the effects of measures taken for its conservation and sustainable use.
Article 8: Conservation of biodiversity in-situ
The Convention addresses both in-situ and ex-situ conservation, but the emphasis is on in-situ measures, i.e. within ecosystems and natural habitats or, in the case of domesticated or cultivated species, in the surroundings where they have developed their distinctive properties. Article 8 sets out a comprehensive framework for in-situ conservation and a Party's national biodiversity planning process should include consideration of the extent to which it currently addresses the following issues.
Protected areas
Parties should establish a system of protected areas or areas where special measures are required to conserve biological diversity, covering both marine and terrestrial areas. They are expected to develop guidelines for the selection, establishment and management of these areas, and to enhance the protection of such areas by the environmentally sound and sustainable development of adjacent areas.
Regulation and management of biological resources
Parties should regulate or manage important components of biological diversity whether found within protected areas or outside them. Legislation or other regulatory measures should therefore be introduced or maintained to promote the protection of ecosystems, natural and semi-natural habitats and the maintenance of viable populations of species in natural surroundings.
Regulation and management of activities
Under Article 7 Parties should attempt to identify activities that may be detrimental to biological diversity. Where such activities have been identified, Parties should take steps to manage them so as to reduce their impacts.
Rehabilitation and restoration
Parties should develop plans and management strategies for the rehabilitation and restoration of degraded ecosystems and the recovery of threatened species.
Alien species
Parties should prevent the introduction of, and control or eradicate, alien species which threaten ecosystems, habitats or native species.
Living modified organisms
Parties should establish or maintain means to manage the risks associated with the use and release of living modified organisms (LMOs) resulting from biotechnology. Parties are thus required to take action at the national level to ensure that LMOs do not cause adverse effects to biodiversity.
Traditional knowledge and practices
The Convention recognizes that indigenous and local communities embodying traditional lifestyles have a crucial rôle to play in the conservation and sustainable use of biodiversity. It calls on Parties to respect, preserve and maintain the knowledge, innovations and practices of indigenous and local communities and to encourage their customary uses of biological resources compatible with the conservation and sustainable use of these resources. By this, the Convention acknowledges the significance of traditional knowledge and practices, which should be taken into account in the implementation of all aspects of the Convention.
Article 9: Conservation of biodiversity ex-situ
While prioritising in-situ conservation, the Convention recognizes the contribution that ex-situ measures and facilities, such as gene banks, botanic gardens and zoos, can make to the conservation and sustainable use of biological diversity. It specifies that, where possible, facilities for ex-situ conservation should be established and maintained in the country of origin of the genetic resources concerned.
The Convention does not, however, apply its provisions on access and benefit-sharing to ex-situ resources collected prior to the entry into force of the Convention. This is of particular concern to developing countries, from which natural resources have already been removed and stored in ex-situ collections, without a mechanism to ensure the sharing of benefits. The issue of the status of ex-situ resources is currently being reviewed within the context of the work of the Food and Agriculture Organization of the United Nations.
Article 10: Sustainable use
Although the term conservation has sometimes been taken to incorporate sustainable use of resources, in the Convention the two terms appear side by side, and a specific Article of the Convention is devoted to sustainable use. This reflects the view of many countries during the negotiation of the Convention that the importance of sustainable use of resources be accorded explicit recognition. Sustainable use is defined in the Convention as:
“the use of components of biological diversity in a way and at a rate that does not lead to the long-term decline of biological diversity, thereby maintaining its potential to meet the needs and aspirations of present and future generations”.
The practical implications of this definition in terms of management are difficult to assess. Article 10 does not suggest quantitative methods for establishing the sustainability of use, but sets out five general areas of activity: the need to integrate conservation and sustainable use into national decision-making; to avoid or minimize adverse impacts on biological diversity; to protect and encourage customary uses of biodiversity in accordance with traditional cultural practices; to support local populations to develop and implement remedial action in degraded areas; and to encourage cooperation between its governmental authorities and its private sector in developing methods for sustainable use of biological resources.
Articles 11-14: Measures to promote conservation and sustainable use
The Convention makes explicit reference to a number of additional policy and procedural measures to promote conservation and sustainable use. For example, it requires Parties to adopt economically and socially sound incentives for this purpose (Article 11). It also recognizes the importance of public education and awareness to the effective implementation of the Convention (Article 13). Parties are therefore required to promote understanding of the importance of biodiversity conservation, and of the measures needed.
Research and training are critical to the implementation of almost every substantive obligation. Some deficit in human capacity exists in all countries, particularly so in developing countries. The Convention requires Parties to establish relevant scientific and technical training programmes, to promote research contributing to conservation and sustainable use, and to cooperate in using research results to develop and apply methods to achieve these goals (Article 12). Special attention must be given to supporting the research and training needs of developing countries, and this is explicitly linked to the provisions on access to and transfer of technology, technical and scientific cooperation and financial resources.
Parties are required to introduce appropriate environmental impact assessment (EIA) procedures for projects likely to have significant adverse effects on biodiversity (Article 14). Legislation on EIA will generally incorporate a number of elements, including a threshold for determining when an EIA will be required, procedural requirements for carrying it out, and the requirement that the assessment be taken into account when determining whether the project should proceed. In addition, Parties are required to consult with other States on activities under their jurisdiction and control that may adversely affect the biodiversity of other States, or areas beyond national jurisdiction.
Articles 15-21: Benefits
The Convention provides for scientific and technical cooperation to support the conservation and sustainable use of biological diversity, and a clearing-house mechanism is being developed to promote and facilitate this cooperation. The provisions on scientific and technical cooperation provide a basis for capacity-building activities. For example, the COP has requested the financial mechanism to support a Global Taxonomy Initiative designed, among other things, to develop national, regional and sub-regional training programmes, and to strengthen reference collections in countries of origin. In addition to general provisions on cooperation, research and training, the Convention includes articles promoting access to the potential benefits resulting from the use of genetic resources, access to and transfer of relevant technology, and access to increased financial resources.
The potential benefits for developing country Parties under the Convention arise from the new position on conservation negotiated between developed and developing countries. The extent to which these benefits materialise is likely to be crucial to determining the long-term success of the Convention. Global biodiversity increases toward the tropics, and the Convention gives developing countries, in this zone and elsewhere, an opportunity to derive financial and technical benefits from their biological resources, while the world overall benefits from the goods and services that the biodiversity thus conserved will continue to provide.
Access to genetic resources and benefit-sharing
Before the negotiation of the Convention, genetic resources were considered to be freely available, despite their potential monetary value. However, the approach taken in the Convention is radically different. Article 15 reaffirms the sovereignty of Parties over their genetic resources, and recognizes the authority of States to determine access to those resources. While the Convention addresses sovereignty over resources, it does not address their ownership, which remains to be determined at national level in accordance with national legislation or practice.
Although the sovereign rights of States over their genetic resources are emphasised, access to genetic resources for environmentally sound uses by scientific and commercial institutions under the jurisdiction of other Parties is to be facilitated. Since genetic resources are no longer regarded as freely available, the Convention paves the way for new types of regimes governing the relationship between providers and users of genetic resources.
Key elements in genetic resource use agreements
- the need to obtain the prior informed consent of the country of origin before obtaining access to resources
- the need for mutually agreed terms of access with the country of origin (and potentially with direct providers of genetic resources such as individual holders or local communities)
- the importance of benefit-sharing; the obligation to share, in a fair and equitable way, benefits arising from the use of genetic resources with the Party that provides those resources
It is generally agreed that benefit-sharing should extend not only to the government of the country of origin but also to indigenous and local communities directly responsible for the conservation and sustainable use of the genetic resources in question. National legislation might require bio-prospectors to agree terms with such communities for the use of resources, and this may be all the more crucial where bio-prospectors are seeking to draw upon not only the resources themselves, but also upon the knowledge of these communities about those resources and their potential use.
Access to and transfer of technologies
Under Article 16 of the Convention, Parties agree to share technologies relevant to the conservation of biological diversity and the sustainable use of its components, and also technologies that make use of genetic resources. Technology transfer under the Convention therefore incorporates both `traditional' technologies and biotechnology. Biotechnology is defined in the Convention as: any technological application that uses biological systems, living organisms, or derivatives thereof, to make or modify products or processes for specific use.
Technologies which make use of genetic resources are subject to special provisions aimed at allowing the country of origin of the resources to share in the benefits arising out of the development of these technologies. The Convention makes it a specific requirement that all Parties create a legislative, administrative or policy framework with the aim that such technologies are transferred, on mutually agreed terms, to those providing the genetic resources. This obligation extends to technology protected by patents and other intellectual property rights.
More generally, developing country Parties are to have access to technology under terms which are fair and most favourable, including on concessional and preferential terms, where mutually agreed. Article 16 provides that where relevant technology is subject to an intellectual property right such as a patent, the transfer must be on terms which recognize and are consistent with the adequate and effective protection of the property right. However, it also goes on to provide that Parties are to cooperate in ensuring that intellectual property rights are supportive of, and do not run counter to, the objectives of the Convention.
All Parties undertake to provide financial support and incentives for implementation of the Convention at the national level, in accordance with their capabilities. In addition, developed country Parties agree to make available to developing country Parties new and additional financial resources to meet “the agreed full incremental costs” of implementing measures to fulfil their obligations. In addition to the financial mechanism mentioned earlier, developed country Parties may provide resources to improve implementation of the Convention through overseas development agencies and other bilateral channels.
The Convention explicitly recognizes that the extent to which developing country Parties will be able to implement their obligations under the Convention will depend on the developed country Parties fulfilling their obligations to provide resources. The Convention also acknowledges that economic and social development remains the overriding priority of developing countries, and in this regard recognizes the special circumstances and needs of the small island developing states. As a result of both these considerations, developed country Parties are expected to give due consideration to the dependence on, distribution and location of biological diversity within developing countries, in particular small island states and those that are most environmentally vulnerable, such as those with arid and semi-arid zones, coastal and mountainous areas.
ASSESSING IMPLEMENTATION OF THE CONVENTION
The Convention provides for Parties to present reports to the COP on measures taken to implement the provisions of the Convention and their effectiveness in meeting the objectives of the Convention (Article 26). At its second meeting, the COP decided that the first national reports should focus on implementation of Article 6 of the Convention. This article concerns the need to develop a national biodiversity strategy and action plan, and to ensure that the conservation and sustainable use of biological diversity is integrated with the policies and programmes of other sectors. The information in these reports was considered by the fourth meeting of the COP, which asked SBSTTA to give advice on the nature of the information required from Parties in order to assess the state of implementation of the Convention. A review of national implementation based on the information in the first national reports is contained in chapter 4.
At its fifth meeting, the COP adopted a methodology for national reporting that will enable Parties to provide information on the implementation of all their obligations, as derived from the articles of the Convention and from decisions of the COP that call for action by Parties. The reporting guidelines will permit Parties to consider the effectiveness of the measures taken and to identify national priorities, national capacity for implementation and constraints encountered. The COP will be able to identify issues that require further scientific or technical investigation, and to identify successes and constraints faced by Parties. In the latter case it will be better placed to decide what steps are necessary to support Parties, and to give appropriate guidance to the financial mechanism, institutions able to assist with capacity development, the Secretariat and to the Parties themselves.
Given the enormous breadth of the issues that the Convention seeks to address, there is a need not only for cooperation between Parties, but also for institutional links and cooperative relationships with other international bodies. Mechanisms for coordinating these relationships are fundamental to the implementation of the Convention. Each meeting of the COP has reaffirmed the importance it attaches to cooperation and coordination between the Convention and other relevant conventions, institutions and processes, and has invited these to take an active rôle in the implementation of aspects of the Convention.
Equally importantly, the COP has reaffirmed the importance of the role to be played by groups other than States and international bodies. Non-state actors - national and international non-governmental organizations, scientific bodies, industrial and agricultural associations, and indigenous peoples' organizations, amongst others - have all been called upon to cooperate in scientific assessments, policy development, and implementation of the Convention's work programmes. In particular, as traditional knowledge about conserving and sustainably using biodiversity is central to the development and implementation of the work programmes, cooperation with the holders of traditional knowledge has been particularly emphasized.
The institutional structure of the Convention thus extends beyond those institutions established by the process itself. Cooperation is discussed in chapter 5.
1 The other three are climate change, international waters and depletion of the Earth's ozone layer.
Transcript: So, basically the last few weeks, we've been doing derivatives. Now, we're going to integrals. So -- OK, so more precisely, we are going to be talking about double integrals. OK, so just to motivate the notion, let me just remind you that when you have a function of one variable -- -- say, f of x, and you take its integral from, say, a to b of f of x dx, well, that corresponds to the area below the graph of f over the interval from a to b.
OK, so the picture is something like you have a; you have b. You have the graph of f, and then what the integral measures is the area of this region. And, when we say the area of this region, of course, if f is positive, that's what happens. If f is negative, then we count negatively the area below the x axis. OK, so, now, when you have a function of two variables, then you can try to do the same thing. Namely, you can plot its graph. Its graph will be a surface in space. And then, we can try to look for the volume below the graph. And that's what we will call the double integral of the function over a certain region.
OK, so let's say that we have a function of two variables, x and y. Then, we'll look at the volume that's below the graph z equals f of xy. OK, so, let's draw a picture for what this means. I have a function of x and y. I can draw its graph. The graph will be the surface with equation z equals f of x and y. And, well, I have to decide where I will integrate the function. So, for that, I will choose some region in the xy plane.
And, I will integrate the function on that region. So, it's over a region, R, in the xy plane. So, I have this region R and I look at the piece of the graph that is above this region. And, we'll try to compute the volume of this solid here. OK, that's what the double integral will measure. So, we'll call that the double integral of our region, R, of f of xy dA and I will have to explain what the notation means. So, dA here stands for a piece of area. A stands for area. And, well, it's a double integral. So, that's why we have two integral signs.
And, we'll have to indicate somehow the region over which we are integrating. OK, we'll come up with more concrete notations when we see how to actually compute these things. That's the basic definition. OK, so actually, how do we define it, that's not really much of a definition yet. How do we actually define this rigorously? Well, remember, the integral in one variable, you probably saw a definition where you take your integral from a to b, and you cut it into little pieces.
And then, for each little piece, you take the value of a function, and you multiply by the width of a piece. That gives you a rectangular slice, and then you sum all of these rectangular slices together. So, here we'll do the same thing. So, well, let me put a picture up and explain what it does. So, we're going to cut our region into little pieces, say, little rectangles or actually anything we want.
And then, for each piece, with the small area, delta A, we'll take the area delta a times the value of a function in there that will give us the volume of a small box that sits under the graph. And then, we'll add all these boxes together. That gives us an estimate of a volume. And then, to get actually the integral, the integral will be defined as a limit as we subdivide into smaller and smaller boxes, and we sum more and more pieces, OK?
So, actually, what we do, oh, I still have a board here. So, the actual definition involves cutting R into small pieces of area that's called delta A or maybe delta Ai, the area of the i'th piece. And then, OK, so maybe in the xy plane, we have our region, and we'll cut it maybe using some grid. OK, and then we'll have each small piece. Each small piece will have area delta Ai and it will be at some point, let's call it (xi, yi).
And then, we'll consider the sum over all the pieces of f at that point, xi, yi times the area of a small piece. So, what that corresponds to in the three-dimensional picture is just I sum the volumes of all of these little columns that sit under the graph. OK, and then, so what I do is actually I take the limit as the size of the pieces tends to zero. So, I have more and more smaller and smaller pieces.
And, that gives me the double integral. OK, so that's not a very good sentence, but whatever. So, OK, so that's the definition. Of course, we will have to see how to compute it. We don't actually compute it. When you compute an integral in single variable calculus, you don't do that. You don't cut into little pieces and sum the pieces together. You've learned how to integrate functions using various formulas, and similarly here, we'll learn how to actually compute these things without doing that cutting into small pieces.
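For anyone who wants to see the limit definition in action, here is a minimal numerical sketch, not part of the lecture, assuming NumPy is available and borrowing the function 1 - x^2 - y^2 from the example later on: cut the unit square into an n-by-n grid, take the value of the function at the centre of each small rectangle, multiply by its area, and sum. As n grows, the sum approaches 1/3, the value computed later in the lecture.

```python
import numpy as np

def riemann_double_integral(f, x0, x1, y0, y1, n=400):
    """Approximate the double integral of f over [x0,x1] x [y0,y1]
    by summing f(midpoint) * dA over an n-by-n grid of small rectangles."""
    xs = np.linspace(x0, x1, n, endpoint=False) + (x1 - x0) / (2 * n)  # midpoints in x
    ys = np.linspace(y0, y1, n, endpoint=False) + (y1 - y0) / (2 * n)  # midpoints in y
    X, Y = np.meshgrid(xs, ys)
    dA = ((x1 - x0) / n) * ((y1 - y0) / n)  # area of each small rectangle
    return np.sum(f(X, Y)) * dA

f = lambda x, y: 1 - x**2 - y**2
print(riemann_double_integral(f, 0, 1, 0, 1))  # close to 1/3; improves as n grows
```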
OK, any questions first about the concept, or what the definition is? Yes? Well, so we'll have to learn which tricks work, and how exactly. But, so what we'll do actually is we'll reduce the calculation of a double integral to two calculations of single integrals. And so, for V, certainly, all the tricks you've learned in single variable calculus will come in handy. OK, so, yeah that's a strong suggestion that if you've forgotten everything about single variable calculus, now would be a good time to actually brush up on integrals. The usual integrals, and the usual substitution tricks and easy trig in particular, these would be very useful. OK, so, yeah, how do we compute these things?
That's what we would have to come up with. And, well, going back to what we did with derivatives, to understand variations of functions and derivatives, what we did was really we took slices parallel to an axis or another one. So, in fact, here, the key is also the same. So, what we are going to do is instead of cutting into a lot of small boxes like that and summing completely at random, we will actually somehow scan through our region by parallel planes, OK?
So, let me put up, actually, a slightly different picture up here. So, what I'm going to do is I'm going to take planes, say in this picture, parallel to the yz plane. I'll take a moving plane that scans from the back to the front or from the front to the back. So, that means I set the value of x, and I look at the slice, x equals x0, and then I will do that for all values of x0. So, now in each slice, well, I get what looks a lot like a single variable integral. OK, and that integral will tell me, what is the area in this? Well, I guess it's supposed to be green, but it all comes as black, so, let's say the black shaded slice. And then, when I add all of these areas together, as the value of x changes, I will get the volume. OK, let me try to explain that again.
So, to compute this integral, what we do is actually we take slices. So, let's consider, let's call s of x the area of a slice, well, by a plane parallel to the yz plane. OK, so on the picture, s of x is just the area of this thing in the vertical wall. Now, if you sum all of these, well, why does that work? So, if you take the region between two parallel slices that are very close to each other, what's the volume between these two slices?

Well, it's essentially s of x times the thickness of this very thin slice, and the thickness would be delta x, or dx if you take a limit with more and more slices. OK, so the volume will be the integral of s of x dx from, well, what should be the range for x? Well, we would have to start at the very lowest value of x that ever happens in our region, and we'd have to go all the way to the very largest value of x, from the very far back to the very far front. So, in this picture, we probably start over here at the back, and we'd end over here at the front.
So, let me just say from the minimum, x, to the maximum x. And now, how do we find S of x? Well, S of x will be actually again an integral. But now, it's an integral of the variable, y, because when we look at this slice, what changes from left to right is y. So, well let me actually write that down. For a given, x, the area S of x you can compute as an integral of f of x, y dy. OK, well, now x is a constant, and y will be the variable of integration. What's the range for y? Well, it's from the leftmost point here to the rightmost point here on the given slice.
So, there is a big catch here. That's a very important thing to remember. What is the range of integration? The range of integration for y depends actually on x. See, if I take the slice that's pictured on that diagram, then the range for y goes all the way from the very left to the very right. But, if I take a slice that, say, near the very front, then in fact, only a very small segment of it will be in my region.
So, the range of values for y will be much less. Let me actually draw a 2D picture for that. So, remember, we fix x, so, sorry, so we fix a value of x. OK, and for a given value of x, what we will do is we'll slice our graph by this plane parallel to the yz plane. So, now we mention the graph is sitting above that. OK, that's the region R. We have the region, R, and I have the graph of a function above this region, R. And, I'm trying to find the area between this segment and the graph above it in this vertical plane. Well, to do that, I have to integrate from y going from here to here.
I want the area of a piece that sits above this red segment. And, so in particular, the endpoints, the extreme values for y depend on x because, see, if I slice here instead, well, my bounds for y will be smaller. OK, so now, if I put the two things together, what I will get -- -- is actually a formula where I have to integrate -- -- over x -- -- an integral over y. OK, and so this is called an iterated integral because we iterate twice the process of taking an integral.
OK, so again, what's important to realize here, I mean, I'm going to say that several times over the next few days but that's because it's the single most important thing to remember about double integrals, the bounds here are just going to be numbers, OK, because the question I'm asking myself here is, what is the first value of x by which I might want to slice, and what is the last value of x? Which range of x do I want to look at to take my red slices? And, the answer is I would go all the way from here, that's my first slice, to somewhere here.
That's my last slice. For any value in between these, I will have some red segment, and I will want to integrate over that. On the other hand here, the bounds will depend on the outer variable, x, because at a fixed value of x, what the values of y will be depends on x in general. OK, so I think we should do lots of examples to convince ourselves and see how it works. Yeah, it's called an iterated integral because first we integrate over y, and then we integrate again over x, OK? So, we can do that, well, I mean, y depends on x or x depends, no, actually x and y vary independently of each other inside here. What is more complicated is how the bounds on y depend on x.
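Written out in symbols, the iterated integral being described has the following shape; this is a summary added for reference rather than something written on the board. The outer bounds are plain numbers, while the inner bounds are, in general, functions of the outer variable.

```latex
\iint_R f \, dA
  \;=\; \int_{x_{\min}}^{x_{\max}} S(x)\, dx
  \;=\; \int_{x_{\min}}^{x_{\max}} \left( \int_{y_{\min}(x)}^{y_{\max}(x)} f(x,y)\, dy \right) dx .
```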
But actually, you could also do the other way around: first integrate over x, and then over y, and then the bounds for x will depend on y. We'll see that on an example. Yes? So, for y, I'm using the range of values for y that corresponds to the given value of x, OK? Remember, this is just like a plot in the xy plane. Above that, we have the graph. Maybe I should draw a picture here instead. For a given value of x, so that's a given slice, I have a range of values for y, that is, from this picture at the leftmost point on that slice to the rightmost point on that slice. So, where start and where I stop depends on the value of x. Does that make sense? OK.
OK, no more questions? OK, so let's do our first example. So, let's say that we want to integrate the function 1-x^2-y^2 over the region defined by x between 0 and 1, and y between 0 and 1. So, what does that mean geometrically? Well, z = 1-x^2-y^2, and it's a variation on, actually I think we plotted that one, right? That was our first example of a function of two variables possibly. And, so, we saw that the graph is this paraboloid pointing downwards. OK, it's what you get by taking a parabola and rotating it.
And now, what we are asking is, what is the volume between the paraboloid and the xy plane over the square of side one in the xy plane, x and y between zero and one. OK, so, what we'll do is we'll, so, see, here I try to represent the square. And, we'll just sum the areas of the slices as, say, x varies from zero to one. And here, of course, setting up the bounds will be easy because no matter what x I take, y still goes from zero to one. See, it's easiest to do double integrals when the region is just a rectangle on the xy plane because then you don't have to worry too much about what are the ranges. OK, so let's do it. Well, that would be the integral from zero to one of the integral from zero to one of 1-x^2-y^2 dy dx.
So, I'm dropping the parentheses. But, if you still want to see them, I'm going to put that in very thin so that you see what it means. But, actually, the convention is we won't put these parentheses in there anymore. OK, so what this means is first I will integrate 1-x^2-y^2 over y, ranging from zero to one with x held fixed. So, what that represents is the area in this slice. So, see here, I've drawn, well, what happens is actually the function takes positive and negative values. So, in fact, I will be counting positively this part of the area. And, I will be counting negatively this part of the area, I mean, as usual when I do an integral.
OK, so what I will do to evaluate this, I will first do what's called the inner integral. So, to do the inner integral, well, it's pretty easy. How do I integrate this? Well, it becomes, so, what's the integral of one? It's y. Just one thing to remember is that we are integrating this with respect to y, not to x. The integral of x^2 is x^2 times y. And, the integral of y^2 is y^3 over 3. OK, and then we plug in the bounds, which are zero and one in this case. And so, when you plug in y equals one, you will get one minus x^2 minus one third minus, well, for y equals zero you get 0, 0, 0, so nothing changes. OK, so you are left with two thirds minus x^2.
OK, and that's a function of x only. Here, you shouldn't see any y's anymore because y was your integration variable. But, you still have x. You still have x because the area of this shaded slice depends, of course, on the value of x. And, so now, the second thing to do is to do the outer integral. So, now we integrate from zero to one what we got, which is two thirds minus x^2 dx. OK, and we know how to compute that because that integrates to two thirds x minus one third x^3 between zero and one.
And, I'll let you do the computation. You will find it's one third. OK, so that's the final answer. So, that's the general pattern. When we have a double integral to compute, first we want to set it up carefully. We want to find, what will be the bounds in x and y? And here, that was actually pretty easy because our equation was very simple. Then, we want to compute the inner integral, and then we compute the outer integral. And, that's it.
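As a quick check on this first example, one can let a computer algebra system do both the inner and the outer integral. This is a short sketch added alongside the lecture, assuming SymPy is installed; it also previews the remark below that the order of integration can be swapped, since the rectangular region gives the same answer either way.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 1 - x**2 - y**2

# Inner integral in y first, then outer integral in x (the order used in the lecture)
inner = sp.integrate(f, (y, 0, 1))           # 2/3 - x**2
print(sp.integrate(inner, (x, 0, 1)))        # 1/3

# Same answer with the order of integration swapped
print(sp.integrate(sp.integrate(f, (x, 0, 1)), (y, 0, 1)))  # 1/3
```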
OK, any questions at this point? No? OK, so, by the way, we started with dA in the notation, right? Here we had dA. And, that somehow became a dy dx. OK, so, dA became dy dx because when we do the iterated integral this way, what we're actually doing is that we are slicing our region into small rectangles. OK, what's the area of this small rectangle here? Well, it's the product of its width times its height. So, that's delta x times delta y. OK, so, delta A equals delta x delta y becomes...

So actually, it's not just that it becomes that; for these small rectangles it's really equal. Now, it became dy dx and not dx dy. Well, that's a question of, in which order we do the iterated integral? It's up to us to decide whether we want to integrate x first, then y, or y first, then x. But, as we'll see very soon, that is an important decision when it comes to setting up the bounds of integration. Here, it doesn't matter, but in general we have to be very careful about in which order we will do things. Yes? Well, in principle it always works both ways.
Sometimes it will be that because the region has a strange shape, you can actually set it up more easily one way or the other. Sometimes it will also be that the function here, you actually know how to integrate in one way, but not the other. So, the theory is that it should work both ways. In practice, one of the two calculations may be much harder. OK. Let's do another example. Let's say that what I wanted to know was not actually what I computed, because what I computed also counted, negatively, the part of the graph that dips below the xy plane, out in the corner towards me. Let's say really what I wanted was just the volume between the paraboloid and the xy plane, so looking only at the part of it that sits above the xy plane.
So, that means, instead of integrating over the entire square of size one, I should just integrate over the quarter disk. I should stop integrating where my paraboloid hits the xy plane. So, let me draw another picture. So, let's say I wanted to integrate, actually -- So, let's call this example two. So, we are going to do the same function but over a different region. And, the region will just be, now, this quarter disk here. OK, so maybe I should draw a picture on the xy plane. That's your region, R.
OK, so in principle, it will be the same integral. But what changes is the bounds. Why do the bounds change? Well, the bounds change because now if I set, if I fixed some value of x, then I want to integrate this part of the slice that's above the xy plane and I don't want to take this part that's actually outside of my disk. So, I should stop integrating over y when y reaches this value here. OK, on that picture here, on this picture, it tells me for a fixed value of x, the range of values for y should go only from here to here. So, that's from here to less than one.
OK, so for a given x, the range of y is, well, so what's the lowest value of y that we want to look at? It's still zero. From y equals zero to, what's the value of y here? Well, I have to solve the equation of a circle, OK? So, if I'm here, this is x^2 plus y^2 equals one. That means y is square root of one minus x^2. OK, so I will integrate from y equals zero to y equals square root of one minus x^2. And, now you see how the bound for y will depend on the value of x. OK, so while I erase, I will let you think about, what is the bound for x now?
It's a trick question. OK, so I claim that what we will do -- We write this as an iterated integral first dy then dx. And, we said for a fixed value of x, the range for y is from zero to square root of one minus x^2. What about the range for x? Well, the range for x should just be numbers. OK, remember, the question I have to ask now is if I look at all of these yellow slices, which one is the first one that I will consider? Which one is the last one that I want to consider? So, the smallest value of x that I want to consider is zero again.
And then, I will have actually a pretty big slice. And I will get smaller, and smaller, and smaller slices. And, it stops. I have to stop when x equals one. Afterwards, there's nothing else to integrate. So, x goes from zero to one. OK, and now, see how in the inner integral, the bounds depend on x. In the outer one, you just get numbers because the questions that you have to ask to set up this one and set up that one are different.
Here, the question is, if I fix a given x, if I look at a given slice, what's the range for y? Here, the question is, what's the first slice? What is the last slice? Does that make sense? Everyone happy with that? OK, very good. So, now, how do we compute that? Well, we do the inner integral. So, that's an integral from zero to square root of one minus x^2 of one minus x^2 minus y^2, dy. And, well, that integrates to y minus x^2 y minus y^3 over three, evaluated from zero to square root of one minus x^2.
And then, that becomes, well, the root of one minus x^2, minus x^2 times the root of one minus x^2, minus one minus x^2 to the three halves over three. And actually, if you look at it for long enough, see, the first part says one minus x^2 times the square root of one minus x^2. So, in fact, that simplifies to two thirds of one minus x^2 to the three halves. OK, let me redo that, maybe, slightly differently. This was one minus x^2 times y. So --
-- one minus x^2 times y, at y equals square root of one minus x^2, becomes one minus x^2 to the three halves, and then I subtract y^3 over three, which is a third of that. And then, when I take y equals zero, I get zero. So, I don't subtract anything. OK, so now you see this is one minus x^2 to the three halves minus a third of it. So, you're left with two thirds. OK, so, that's the inner integral. The outer integral is the integral from zero to one of two thirds of one minus x^2 to the three halves, dx.
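To recap in one line (this display is my summary of the board work just described):

\[ \int_0^{\sqrt{1-x^2}} (1 - x^2 - y^2)\,dy \;=\; \Bigl[(1-x^2)\,y - \tfrac{y^3}{3}\Bigr]_0^{\sqrt{1-x^2}} \;=\; \tfrac{2}{3}\,(1-x^2)^{3/2} , \]

so the remaining outer integral is \( \int_0^1 \tfrac{2}{3}(1-x^2)^{3/2}\,dx \).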
And, well, I'll let you see if you remember single variable integrals by trying to figure out what this actually comes out to be. Is it pi over two, or pi over eight, actually? I think it's pi over eight. OK, well, I guess we have to do it then. I wrote something in my notes, but it's not very clear, OK? So, how do we compute this thing? Well, we have to do a trig substitution. That's the only way I know to compute an integral like that, OK? So, we'll set x equal to sine theta, and then square root of one minus x^2 will be cosine theta.
We are using sine squared plus cosine squared equals one. And, so that will become -- -- so, two thirds remains two thirds. One minus x^2 to the three halves becomes cosine cubed theta. dx, well, if x is sine theta, then dx is cosine theta d theta. So, that's cosine theta d theta. And, well, if you do things with substitution, which is the way I do them, then you should worry about the bounds for theta which will be zero to pi over two. Or, you can also just plug in the bounds at the end.
So, now you have two thirds times the integral from zero to pi over two of cosine to the fourth theta, d theta. And, how do you integrate that? Well, you have to use double angle formulas. OK, so for cosine to the fourth, remember, cosine squared theta is one plus cosine two theta over two. And, we want the square of that. And, so that will give us -- -- well, we'll have, it's actually one quarter plus one half cosine two theta plus one quarter cosine squared two theta, d theta. And, how will you handle this last guy? Well, using, again, the double angle formula. OK, so it's getting slightly nasty. I don't know any simpler solution, except for one simpler solution, which is that you have a table of integrals of this form inside the notes. Yes?
No, I don't think so, because if you take one half times cosine two theta, you will still have the one half, OK? So, if you do the double angle formula again, well, I think I'm not going to bother to do it. I claim you will get, at the end, pi over eight, because I say so. OK, so exercise: continue calculating and get pi over eight. OK, now what does this show us? Well, this shows us, actually, that this is probably not the right way to do this. OK, the right way to do this will be to integrate it in polar coordinates. And, that's what we will learn how to do tomorrow.
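Before moving on, here is one way to finish that exercise (this working is mine, filling in the trig substitution route just described; it is not carried out in the lecture): with x = sin theta and dx = cos theta d theta,

\[ \int_0^1 \tfrac{2}{3}(1-x^2)^{3/2}\,dx \;=\; \tfrac{2}{3}\int_0^{\pi/2} \cos^4\theta\,d\theta \;=\; \tfrac{2}{3}\int_0^{\pi/2} \Bigl(\tfrac{1}{4} + \tfrac{1}{2}\cos 2\theta + \tfrac{1 + \cos 4\theta}{8}\Bigr)\,d\theta \;=\; \tfrac{2}{3}\cdot\tfrac{3\pi}{16} \;=\; \tfrac{\pi}{8} , \]

since both cosine terms integrate to zero over the interval from 0 to pi over two.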
So, we will actually see how to do it with much less trig. So, that will be easier in polar coordinates. So, we will see that tomorrow. OK, so we are almost there. I mean, here you just use a double angle again and then you can get it. And, it's pretty straightforward. OK, so one thing that's kind of interesting to know is we can exchange the order of integration. Say we have an integral given to us in the order dy dx, we can switch it to dx dy. But, we have to be extremely careful with the bounds. So, you certainly cannot just swap the bounds of the inner and outer because there you would end up having this square root of one minus x^2 on the outside, and you would never get a number out of that.
So, that cannot work. It's more complicated than that. OK, so, well, here's a first baby example. Certainly, if I do the integral from zero to one of the integral from zero to two, dx dy, there, I can certainly switch the bounds without thinking too much. What's the reason for that? Well, the reason for that is that this corresponds in both cases to integrating x from zero to two, and y from zero to one. It's a rectangle. So, if I slice it this way, you see that y goes from zero to one for any x between zero and two. It's this guy. If I slice it that way, then x goes from zero to two for any value of y between zero and one. And, it's this one. So, here it works. But in general, I have to draw a picture of my region, and see what the slices look like both ways.
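Just to record the baby case in symbols (my addition, since the lecture only points at the picture): on the rectangle 0 <= x <= 2, 0 <= y <= 1,

\[ \int_0^1 \int_0^2 f(x,y)\,dx\,dy \;=\; \int_0^2 \int_0^1 f(x,y)\,dy\,dx , \]

and swapping is harmless here precisely because all four bounds are constants; neither inner bound depends on the outer variable.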
OK, so let's do a more interesting one. Let's say that I want to compute the integral from zero to one of the integral from x to square root of x of e^y over y, dy dx. So, why did I choose this guy? Well, I chose this guy because, as far as I can tell, there's no way to integrate e^y over y. So, this is an integral that you cannot compute this way. So, it's a good example of why this can be useful. So, if you do it this way, you are stuck immediately. So, instead, we will try to switch the order. But, to switch the order, we have to understand, what do these bounds mean? OK, so let's draw a picture of the region. Well, what I am saying is y goes from y equals x to y equals square root of x.
Well, let's draw y equals x, and y equals square root of x. Well, maybe I should actually put this here: y equals x to y equals square root of x. OK, and so, for each value of x, I will go from y equals x to y equals square root of x. And then, we'll do that for values of x that go from x equals zero to x equals one, which happens to be exactly where these things intersect. So, my region will consist of all this, OK? So now, if I want to do it the other way around, I have to decompose my region.
The other way around, I have to, so my goal, now, is to rewrite this as an integral. Well, it's still the same function. It's still e to the y over y. But now, I want to integrate dx dy. So, how do I integrate over x? Well, I fix a value of y. And, for that value of y, what's the range of x? Well, the range for x is from here to here. OK, what's the value of x here? Let's start with an easy one.
This is x equals y. What about this one? It's x equals y^2. OK, so, x goes from y^2 to y, and then what about y? Well, I have to start at the bottom of my region, that's y equals zero, and go to the top, which is at y equals one. So, y goes from zero to one. So, switching the bounds is not completely obvious. That took a little bit of work. But now that we've done that, well, just to see how it goes, it's actually going to be much easier to integrate, because for the inner integral, well, what's the integral of e^y over y with respect to x? It's just that times x, right, from x equals y^2 to y.
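In other words, the two descriptions of the same region that we just read off the picture are (written out here for reference; the lecture keeps this on the board):

\[ \{(x,y) : 0 \le x \le 1,\; x \le y \le \sqrt{x}\} \;=\; \{(x,y) : 0 \le y \le 1,\; y^2 \le x \le y\} , \]

so that

\[ \int_0^1 \int_x^{\sqrt{x}} \frac{e^y}{y}\,dy\,dx \;=\; \int_0^1 \int_{y^2}^{y} \frac{e^y}{y}\,dx\,dy . \]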
So, that will be, well, if I plug in x equals y, I will get e to the y over y times y, which is e to the y; minus, if I plug in x equals y^2, I will get e to the y over y times y^2, which is y times e to the y, OK? So, now, if I do the outer integral, I will have the integral from zero to one of e to the y minus y e to the y, dy. And, that one actually is a little bit easier. So, we know how to integrate e^y. We don't quite know how to integrate y e^y. But, let's try. So, let's see, what's the derivative of y e^y? Well, by the product rule, that's one times e^y plus y times the derivative of e^y, so it's e^y plus y e^y.
So, if we do, OK, let's put a minus sign in front. Well, that's almost what we want, except we have a minus e^y instead of a plus e^y. So, we need to add 2e^y. And, I claim that's the antiderivative. OK, if you got lost, you can also do it by integrating by parts, taking the derivative of the y and integrating the e^y. But, you know, that works. Just, your first guess would be, maybe, let's try minus y e^y.
Take the derivative of that, compare, see what you need to do to fix. And so, if you take that between zero and one, you'll actually get e minus two. OK, so, tomorrow we are going to see how to do double integrals in polar coordinates, and also applications of double integrals, how to use them for interesting things.
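As an optional cross-check of the two answers quoted in this part of the lecture, pi over eight and e minus two, here is a small symbolic computation. This sketch is my own addition and assumes the sympy library; the lecture itself involves no code.

import sympy as sp

x, y = sp.symbols('x y', real=True)

# Example 2: volume under z = 1 - x^2 - y^2 over the quarter disk
# x >= 0, y >= 0, x^2 + y^2 <= 1, set up dy first and then dx, as in the lecture.
quarter_disk = sp.integrate(1 - x**2 - y**2,
                            (y, 0, sp.sqrt(1 - x**2)),
                            (x, 0, 1))
print(quarter_disk)            # expected: pi/8

# Exchange-of-order example: e^y / y over the region between y = x and
# y = sqrt(x), integrated dx first and then dy (the dy-first order has no
# elementary inner antiderivative).
swapped = sp.integrate(sp.exp(y) / y, (x, y**2, y), (y, 0, 1))
print(sp.simplify(swapped))    # expected: E - 2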
The War of 1812 was fought between the United States of America and the British Empire - particularly Great Britain and the provinces of British North America, the antecedent of Canada. It lasted from 1812 to 1815. It was fought chiefly on the Atlantic Ocean and on the land, coasts and waterways of North America.
There were several immediate stated causes for the U.S. declaration of war. In 1807, Britain introduced a series of trade restrictions to impede American trade with France, a country with which Britain was at war. The United States contested these restrictions as illegal under international law. Both the impressment of American citizens into the Royal Navy, and Britain's military support of American Indians who were resisting the expansion of the American frontier into the Northwest further aggravated the relationship between the two countries. In addition, the United States sought to uphold national honor in the face of what they considered to be British insults, including the Chesapeake affair.
Indian raids hindered the expansion of the United States into potentially valuable farmlands in the Northwest Territory, comprising the modern states of Ohio, Indiana, Illinois, Michigan, and Wisconsin. Some Canadian historians in the early 20th century maintained that Americans had wanted to seize parts of Canada, a view that many Canadians still share. Others argue that inducing the fear of such a seizure had merely been a U.S. tactic designed to obtain a bargaining chip. Some members of the British Parliament and dissident American politicians such as John Randolph of Roanoke claimed then that land hunger rather than maritime disputes was the main motivation for the American declaration. Although the British made some concessions before the war on neutral trade, they insisted on the right to reclaim their deserting sailors. The British also had the long-standing goal of creating a large "neutral" Indian state that would cover much of Ohio, Indiana and Michigan. They made the demand as late as 1814 at the peace conference, but lost battles that would have validated their claims.
The war was fought in four theaters. Warships and privateers of both sides attacked each other's merchant ships. The British blockaded the Atlantic coast of the United States and mounted large-scale raids in the later stages of the war. Battles were also fought on the frontier, which ran along the Great Lakes and Saint Lawrence River and separated the United States from Upper and Lower Canada, and along the coast of the Gulf of Mexico. During the war, the Americans and British invaded each other's territory. These invasions were either unsuccessful or gained only temporary success. At the end of the war, the British held parts of Maine and some outposts in the sparsely populated West while the Americans held Canadian territory near Detroit, but these occupied territories were restored with the peace.
In the United States, battles such as New Orleans and the earlier successful defence of Baltimore (which inspired the lyrics of the U.S. national anthem, The Star-Spangled Banner) produced a sense of euphoria over a "second war of independence" against Britain. It ushered in an "Era of Good Feelings," in which the partisan animosity that had once verged on treason practically vanished. Canada also emerged from the war with a heightened sense of national feeling and solidarity. Britain, which had regarded the war as a sideshow to the Napoleonic Wars raging in Europe, was less affected by the fighting; its government and people subsequently welcomed an era of peaceful relations with the United States.
The war was fought between the United States and the British Empire, particularly Great Britain and her North American colonies of Upper Canada (Ontario), Lower Canada (Québec), New Brunswick, Newfoundland, Nova Scotia, Prince Edward Island, Cape Breton Island (then a separate colony from Nova Scotia), and Bermuda.
In July 1812, William Hull led an invading force of 2,000 soldiers across the Detroit River and occupied the Canadian town of Sandwich (now a neighborhood of Windsor, Ontario). British Major General Isaac Brock attacked the supply lines of the occupying force with a battle group composed of British regulars, local militias, and Native Americans. By August, Hull and his troops (now numbering 2,500 with the addition of 500 Canadians) retreated to Detroit where, on August 16, Hull surrendered without a shot fired. The surrender cost the U.S. not only the city of Detroit, but the Michigan territory as well. Several months later the U.S. launched a second invasion of Canada, this time at the Niagara peninsula. On October 13, U.S. forces were again defeated at the Battle of Queenston Heights, where General Brock was killed.
The American strategy relied in part on state-raised militias, which had the deficiencies of poor training, resisting service or being incompetently led. Financial and logistical problems also plagued the American effort. Military and civilian leadership was lacking and remained a critical American weakness until 1814. New England opposed the war and refused to provide troops or financing. Britain had excellent financing and logistics, but the war with France had a higher priority, so in 1812–13, it adopted a defensive strategy. After the abdication of Napoleon in 1814, the British were able to send veteran armies to the U.S., but by then the Americans had learned how to mobilise and fight.
At sea, the powerful Royal Navy blockaded much of the coastline, though it allowed substantial exports from New England, which traded with Britain and Canada in defiance of American laws. The blockade devastated American agricultural exports, but it helped stimulate local factories that replaced goods previously imported. The American strategy of using small gunboats to defend ports was a fiasco, as the British raided the coast at will. The most famous episode was a series of British raids on the shores of Chesapeake Bay, including an attack on Washington, D.C. that resulted in the British burning of the White House, the Capitol, the Navy Yard, and other public buildings, later called the "Burning of Washington." The British power at sea was sufficient to allow the Royal Navy to levy "contributions" on bayside towns in return for not burning them to the ground. The Americans were more successful in ship-to-ship actions, and built several fast frigates in their shipyard at Sackets Harbor, New York. They sent out several hundred privateers to attack British merchant ships; British commercial interests were damaged, especially in the West Indies.
The decisive use of naval power came on the Great Lakes and depended on a contest of building ships. In 1813, the Americans won control of Lake Erie and cut off British and Native American forces to the west from their supplies. Thus, the Americans gained one of their main objectives by breaking a confederation of tribes. Tecumseh, the leader of the tribal confederation, was killed at the Battle of the Thames. While some Natives continued to fight alongside British troops, they subsequently did so only as individual tribes or groups of warriors, and where they were directly supplied and armed by British agents. Control of Lake Ontario changed hands several times, with neither side able or willing to take advantage of the temporary superiority. The Americans ultimately gained control of Lake Champlain, and naval victory there forced a large invading British army to turn back in 1814.
Once Britain defeated France in 1814, it ended the trade restrictions and impressment of American sailors, thus removing another cause of the war. Great Britain and the United States agreed to a peace that left the prewar boundaries intact.
After two years of warfare, the major causes of the war had disappeared. Neither side had a reason to continue or a chance of gaining a decisive success that would compel their opponents to cede territory or advantageous peace terms. As a result of this stalemate, the two countries signed the Treaty of Ghent on December 24, 1814. News of the peace treaty took two months to reach the U.S., during which fighting continued. In this interim, the Americans defeated a British invasion army in the Battle of New Orleans, with American forces sustaining 71 casualties compared with 2,000 British. The British went on to capture Fort Bowyer only to learn the next day of the war's end.
The war had the effect of uniting the populations within each country. Canadians celebrated the war as a victory because they avoided conquest. Americans celebrated victory personified in Andrew Jackson. He was the hero of the defence of New Orleans, and in 1828, was elected the 7th President of the United States.
On June 18, the United States declared war on Britain. The war had many causes, but at the centre of the conflict was Britain's ongoing war with Napoleon’s France. The British, said Jon Latimer in 2007, had only one goal: "Britain's sole objective throughout the period was the defeat of France." If America helped France, then America had to be damaged until she stopped, or "Britain was prepared to go to any lengths to deny neutral trade with France." Latimer concludes, "All this British activity seriously angered Americans."
The British were engaged in war with the First French Empire and did not wish to allow the Americans to trade with France, regardless of their theoretical neutral rights to do so. As Horsman explains, "If possible, England wished to avoid war with America, but not to the extent of allowing her to hinder the British war effort against France. Moreover… a large section of influential British opinion, both in the government and in the country, thought that America presented a threat to British maritime supremacy."
The United States Merchant Marine had come close to doubling between 1802 and 1810. Britain was the largest trading partner, receiving 80% of all U.S. cotton and 50% of all other U.S. exports. The United States Merchant Marine was the largest neutral fleet in the world by a large margin. The British public and press were resentful of the growing mercantile and commercial competition. The United States' view was that Britain was in violation of a neutral nation's right to trade with others as it saw fit.
During the Napoleonic Wars, the Royal Navy expanded to 175 ships of the line and 600 ships overall, requiring 140,000 sailors. While the Royal Navy could man its ships with volunteers in peacetime, in war, it competed with merchant shipping and privateers for a small pool of experienced sailors and turned to impressment when it was unable to man ships with volunteers alone. A sizeable number of sailors (estimated to be as many as 11,000 in 1805) in the United States merchant navy were Royal Navy veterans or deserters who had left for better pay and conditions. The Royal Navy went after them by intercepting and searching U.S. merchant ships for deserters. Such actions, especially the Chesapeake-Leopard Affair, incensed the Americans.
The United States believed that British deserters had a right to become United States citizens. Britain did not recognise naturalised United States citizenship, so in addition to recovering deserters, it considered United States citizens born British liable for impressment. Exacerbating the situation was the widespread use of forged identity papers by sailors. This made it all the more difficult for the Royal Navy to distinguish Americans from non-Americans and led it to impress some Americans who had never been British. (Some gained freedom on appeal.) American anger at impressment grew when British frigates stationed themselves just outside U.S. harbors in U.S. territorial waters and searched ships for contraband and impressed men in view of U.S. shores. "Free trade and sailors' rights" was a rallying cry for the United States throughout the conflict.
American expansion into the Northwest Territory (the modern states of Ohio, Indiana, Michigan, Illinois and Wisconsin) was being obstructed by indigenous leaders like Tecumseh, supplied and encouraged by the British. Americans on the frontier demanded that interference be stopped. Before 1940, some historians held that United States expansionism into Canada was also a reason for the war. However, one subsequent historian wrote, "Almost all accounts of the 1811–1812 period have stressed the influence of a youthful band, denominated War Hawks, on Madison's policy. According to the standard picture, these men were a rather wild and exuberant group enraged by Britain's maritime practices, certain that the British were encouraging the Indians and convinced that Canada would be an easy conquest and a choice addition to the national domain. Like all stereotypes, there is some truth in this tableau; however, inaccuracies predominate. First, Perkins has shown that those favoring war were older than those opposed. Second, the lure of the Canadas has been played down by most recent investigators." Some Canadian historians propounded the notion in the early 20th century, and it survives in public opinion in Ontario. This view was also shared by a member of the British Parliament at the time.
Madison and his advisers believed that conquest of Canada would be easy and that economic coercion would force the British to come to terms by cutting off the food supply for their West Indies colonies. Furthermore, possession of Canada would be a valuable bargaining chip. Frontiersmen demanded the seizure of Canada not because they wanted the land, but because the British were thought to be arming the Indians and thereby blocking settlement of the West. As Horsman concluded, "The idea of conquering Canada had been present since at least 1807 as a means of forcing England to change her policy at sea. The conquest of Canada was primarily a means of waging war, not a reason for starting it." Hickey flatly stated, "The desire to annex Canada did not bring on the war." Brown (1964) concluded, "The purpose of the Canadian expedition was to serve negotiation, not to annex Canada." Burt, a leading Canadian scholar, agreed completely, noting that Foster—the British minister to Washington—also rejected the argument that annexation of Canada was a war goal.
The majority of the inhabitants of Upper Canada (Ontario) were either exiles from the United States (United Empire Loyalists) or postwar immigrants. The Loyalists were hostile to union with the U.S., while the other settlers seem to have been uninterested. The Canadian colonies were thinly populated and only lightly defended by the British Army. Americans then believed that many in Upper Canada would rise up and greet a United States invading army as liberators, which did not happen. One reason American forces retreated after one successful battle inside Canada was that they could not obtain supplies from the locals. But the possibility of local assistance suggested an easy conquest, as former President Thomas Jefferson seemed to believe in 1812: "The acquisition of Canada this year, as far as the neighborhood of Quebec, will be a mere matter of marching, and will give us the experience for the attack on Halifax, the next and final expulsion of England from the American continent."
The declaration of war was passed by the smallest margin recorded on a war vote in the United States Congress. On May 11, Prime Minister Spencer Perceval was shot and killed by an assassin, resulting in a change of the British government, putting Lord Liverpool in power. Liverpool wanted a more practical relationship with the United States. He issued a repeal of the Orders in Council, but the U.S. was unaware of this, as it took three weeks for the news to cross the Atlantic.
Although the outbreak of the war had been preceded by years of angry diplomatic dispute, neither side was ready for war when it came. Britain was heavily engaged in the Napoleonic Wars, most of the British Army was engaged in the Peninsular War (in Spain), and the Royal Navy was compelled to blockade most of the coast of Europe. The number of British regular troops present in Canada in July 1812 was officially stated to be 6,034, supported by Canadian militia. Throughout the war, the British Secretary of State for War and the Colonies was the Earl of Bathurst. For the first two years of the war, he could spare few troops to reinforce North America and urged the commander in chief in North America (Lieutenant General Sir George Prevost) to maintain a defensive strategy. The naturally cautious Prevost followed these instructions, concentrating on defending Lower Canada at the expense of Upper Canada (which was more vulnerable to American attacks) and allowing few offensive actions. In the final year of the war, large numbers of British soldiers became available after the abdication of Napoleon Bonaparte. Prevost launched an offensive of his own into Upper New York State, but mishandled it and was forced to retreat after the British lost the Battle of Plattsburgh.
The United States was not prepared to prosecute a war, for President Madison assumed that the state militias would easily seize Canada and negotiations would follow. In 1812, the regular army consisted of fewer than 12,000 men. Congress authorised the expansion of the army to 35,000 men, but the service was voluntary and unpopular, it offered poor pay, and there were very few trained and experienced officers, at least initially. The militia called in to aid the regulars objected to serving outside their home states, were not amenable to discipline, and performed poorly in the presence of the enemy when outside of their home state. The U.S. had great difficulty financing its war. It had disbanded its national bank, and private bankers in the Northeast were opposed to the war.
The early disasters brought about chiefly by American unpreparedness and lack of leadership drove United States Secretary of War William Eustis from office. His successor, John Armstrong, Jr., attempted a coordinated strategy late in 1813 aimed at the capture of Montreal, but was thwarted by logistical difficulties, uncooperative and quarrelsome commanders and ill-trained troops. By 1814, the United States Army's morale and leadership had greatly improved, but the embarrassing Burning of Washington led to Armstrong's dismissal from office in turn. The war ended before the new Secretary of War James Monroe could put a new strategy into effect.
American prosecution of the war also suffered from its unpopularity, especially in New England, where antiwar spokesmen were vocal. The failure of New England to provide militia units or financial support was a serious blow. Threats of secession by New England states were loud; Britain immediately exploited these divisions, blockading only southern ports for much of the war and encouraging smuggling.
The war was conducted in three theatres of operations: at sea, chiefly the Atlantic Ocean and the American coast; along the Great Lakes and the Canadian frontier; and in the South, along the coast of the Gulf of Mexico.
In 1812, Britain's Royal Navy was the world's largest, with over 600 cruisers in commission, plus a number of smaller vessels. Although most of these were involved in blockading the French navy and protecting British trade against (usually French) privateers, the Royal Navy nevertheless had 85 vessels in American waters. By contrast, the United States Navy comprised only 8 frigates, 14 smaller sloops and brigs, and no ships of the line whatsoever. However some American frigates were exceptionally large and powerful for their class. Whereas the standard British frigate of the time was rated as a 38 gun ship, with its main battery consisting of 18-pounder guns, the USS Constitution, USS President, and USS United States were rated as 44-gun ships and were capable of carrying 56 guns, with a main battery of 24-pounders.
The British strategy was to protect their own merchant shipping to and from Halifax, Canada and the West Indies, and to enforce a blockade of major American ports to restrict American trade. Because of their numerical inferiority, the Americans aimed to cause disruption through hit-and-run tactics, such as the capture of prizes and engaging Royal Navy vessels only under favorable circumstances. Days after the formal declaration of war, however, two small squadrons sailed, including the frigate USS President and the sloop USS Hornet under Commodore John Rodgers, and the frigates USS United States and USS Congress, with the brig USS Argus under Captain Stephen Decatur. These were initially concentrated as one unit under Rodgers, and it was his intention to force the Royal Navy to concentrate its own ships to prevent isolated units being captured by his powerful force. Large numbers of American merchant ships were still returning to the United States, and if the Royal Navy was concentrated, it could not watch all the ports on the American seaboard. Rodgers' strategy worked, in that the Royal Navy concentrated most of its frigates off New York Harbor under Captain Philip Broke and allowed many American ships to reach home. However, his own cruise captured only five small merchant ships, and the Americans never subsequently concentrated more than two or three ships together as a unit.
Meanwhile, the USS Constitution, commanded by Captain Isaac Hull, sailed from Chesapeake Bay on July 12. On July 17, Broke's British squadron gave chase off New York, but the Constitution evaded her pursuers after two days. After briefly calling at Boston to replenish water, on August 19, the Constitution engaged the British frigate HMS Guerriere. After a 35-minute battle, Guerriere had been dismasted and captured and was later burned. Hull returned to Boston with news of this significant victory. On October 25, the USS United States, commanded by Captain Decatur, captured the British frigate HMS Macedonian, which he then carried back to port. At the close of the month, the Constitution sailed south, now under the command of Captain William Bainbridge. On December 29, off Bahia, Brazil, she met the British frigate HMS Java. After a battle lasting three hours, Java struck her colours and was burned after being judged unsalvageable. The USS Constitution, however, was undamaged in the battle and earned the name "Old Ironsides."
The successes gained by the three big American frigates forced Britain to construct five 40-gun, 24-pounder heavy frigates and two of its own 50-gun "spar-decked" frigates (HMS Leander and HMS Newcastle) and to razee three old 74-gun ships of the line to convert them to heavy frigates. The Royal Navy acknowledged that there were factors other than greater size and heavier guns. The United States Navy's sloops and brigs had also won several victories over Royal Navy vessels of approximately equal strength. While the American ships had experienced and well-drilled volunteer crews, the enormous size of the overstretched Royal Navy meant that many ships were shorthanded and the average quality of crews suffered, and constant sea duties of those serving in North America interfered with their training and exercises.
The capture of the three British frigates stimulated the British to greater exertions. More vessels were deployed on the American seaboard and the blockade tightened. On June 1, 1813, off Boston Harbor, the frigate USS Chesapeake, commanded by Captain James Lawrence, was captured by the British frigate HMS Shannon under Captain Sir Philip Broke. Lawrence was mortally wounded and famously cried out, "Don't give up the ship! Hold on, men!" Although the Chesapeake was only of equal strength to the average British frigate and the crew had mustered together only hours before the battle, the British press reacted with almost hysterical relief that the run of American victories had ended. By ratio, this single action was one of the bloodiest contests recorded in the age of sail, with more dead and wounded than HMS Victory suffered in four hours of combat at Trafalgar. Captain Lawrence died of his wounds, and Captain Broke was so badly wounded that he never again held a sea command.
In January 1813, the American frigate USS Essex, under the command of Captain David Porter, sailed into the Pacific in an attempt to harass British shipping. Many British whaling ships carried letters of marque allowing them to prey on American whalers, and nearly destroyed the industry. The Essex challenged this practice. She inflicted considerable damage on British interests before she was captured off Valparaiso, Chile by the British frigate HMS Phoebe and the sloop HMS Cherub on March 28, 1814.
The British 6th-rate Cruizer class brig-sloops did not fare well against the American ship-rigged sloops of war. The USS Hornet and USS Wasp constructed before the war were notably powerful vessels, and the Frolic class built during the war even more so (although USS Frolic was trapped and captured by a British frigate and a schooner). The British brig-rigged sloops tended to suffer fire to their rigging far worse than the American ship-rigged sloops, while the ship-rigged sloops could back their sails in action, giving them another advantage in manoeuvering.
Following their earlier losses, the British Admiralty instituted a new policy that the three American heavy frigates should not be engaged except by a ship of the line or smaller vessels in squadron strength. An example of this was the capture of the USS President by a squadron of four British frigates in January 1815 (although the action was fought on the British side mainly by HMS Endymion). A month later, however, the USS Constitution managed to engage and capture two smaller British warships, HMS Cyane and HMS Levant, sailing in company.
The blockade of American ports later tightened to the extent that most American merchant ships and naval vessels were confined to port. The American frigates USS United States and USS Macedonian ended the war blockaded and hulked in New London, Connecticut. Some merchant ships were based in Europe or Asia and continued operations. Others, mainly from New England, were issued licenses to trade by Admiral Sir John Borlase Warren, commander in chief on the American station in 1813. This allowed Wellington's army in Spain to receive American goods and to maintain the New Englanders' opposition to the war. The blockade nevertheless resulted in American exports decreasing from $130-million in 1807 to $7-million in 1814.
The operations of American privateers (some of which belonged to the United States Navy, but most of which were private ventures) were extensive. They continued until the close of the war and were only partially affected by the strict enforcement of convoy by the Royal Navy. An example of the audacity of the American cruisers was the depredations in British home waters carried out by the American sloop USS Argus. It was eventually captured off St. David's Head in Wales by the British brig HMS Pelican on August 14, 1813. A total of 1,554 vessels were claimed captured by all American naval and privateering vessels, 1300 of which were captured by privateers. However, insurer Lloyd's of London reported that only 1,175 British ships were taken, 373 of which were recaptured, for a total loss of 802.
As the Royal Navy base that supervised the blockade, Halifax profited greatly during the war. British privateers based there seized many French and American ships and sold their prizes in Halifax.
The war was the last time the British allowed privateering, since the practice was coming to be seen as politically inexpedient and of diminishing value in maintaining its naval supremacy. It was the swan song of Bermuda's privateers, who had vigorously returned to the practice after American lawsuits had put a stop to it two decades earlier. The nimble Bermuda sloops captured 298 enemy ships. British naval and privateering vessels between the Great Lakes and the West Indies captured 1,593.
Preoccupied in their pursuit of American privateers when the war began, the British naval forces had some difficulty in blockading the entire U.S. coast. The British government, having need of American foodstuffs for its army in Spain, benefited from the willingness of the New Englanders to trade with them, so no blockade of New England was at first attempted. The Delaware River and Chesapeake Bay were declared in a state of blockade on December 26, 1812.
This was extended to the coast south of Narragansett by November 1813 and to the entire American coast on May 31, 1814. In the meantime, illicit trade was carried on by collusive captures arranged between American traders and British officers. American ships were fraudulently transferred to neutral flags. Eventually, the U.S. government was driven to issue orders to stop illicit trading; this put only a further strain on the commerce of the country. The overpowering strength of the British fleet enabled it to occupy the Chesapeake and to attack and destroy numerous docks and harbors.
Additionally, commanders of the blockading fleet, based at the Bermuda dockyard, were given instructions to encourage the defection of American slaves by offering freedom, as they did during the Revolutionary War. Thousands of black slaves went over to the Crown with their families and were recruited into the 3rd (Colonial) Battalion of the Royal Marines on occupied Tangier Island, in the Chesapeake. A further company of colonial marines was raised at the Bermuda dockyard, where many freed slaves—men, women, and children—had been given refuge and employment. It was kept as a defensive force in case of an attack. These former slaves fought for Britain throughout the Atlantic campaign, including the attack on Washington, D.C. and the Louisiana Campaign, and most were later re-enlisted into British West India regiments or settled in Trinidad in August 1816, where seven hundred of these ex-marines were granted land (they reportedly organised in villages along the lines of military companies). Many other freed American slaves were recruited directly into West Indian regiments or newly created British Army units. A few thousand freed slaves were later settled at Nova Scotia by the British.
Maine, then part of Massachusetts, was a base for smuggling and illegal trade between the U.S. and the British. From his base in New Brunswick, in September 1814, Sir John Coape Sherbrooke led 500 British troops in the "Penobscot Expedition". In 26 days, he raided and looted Hampden, Bangor, and Machias, destroying or capturing 17 American ships. He won the Battle of Hampden (losing two killed while the Americans lost one killed) and occupied the village of Castine for the rest of the war. The Treaty of Ghent returned this territory to the United States. The British left in April 1815, at which time they took 10,750 pounds obtained from tariff duties at Castine. This money, called the "Castine Fund", was used in the establishment of Dalhousie University, in Halifax, Nova Scotia.
The strategic location of the Chesapeake Bay near America's capital made it a prime target for the British. Starting in March 1813, a squadron under Rear Admiral George Cockburn started a blockade of the bay and raided towns along the bay from Norfolk to Havre de Grace.
On July 4, 1813, Joshua Barney, a Revolutionary War naval hero, convinced the Navy Department to build the Chesapeake Bay Flotilla, a squadron of twenty barges to defend the Chesapeake Bay. Launched in April 1814, the squadron was quickly cornered in the Patuxent River, and while successful in harassing the Royal Navy, they were powerless to stop the British campaign that ultimately led to the "Burning of Washington." This expedition, led by Cockburn and General Robert Ross, was carried out between August 19 and 29, 1814, as the result of the hardened British policy of 1814 (although British and American commissioners had convened peace negotiations at Ghent in June of that year). As part of this, Admiral Warren had been replaced as commander in chief by Admiral Alexander Cochrane, with reinforcements and orders to coerce the Americans into a favourable peace.
Governor-in-chief of British North America Sir George Prevost had written to the Admirals in Bermuda, calling for retaliation for the American sacking of York (now Toronto). A force of 2,500 soldiers under General Ross, aboard a Royal Navy task force composed of HMS Royal Oak, three frigates, three sloops, and ten other vessels, had just arrived in Bermuda. These troops had been released from the Peninsular War by the British victory there, and the British intended to use them for diversionary raids along the coasts of Maryland and Virginia. In response to Prevost's request, they decided to employ this force, together with the naval and military units already on the station, to strike at Washington, D.C.
On August 24, U.S. Secretary of War John Armstrong insisted that the British would attack Baltimore rather than Washington, even when the British army was obviously on its way to the capital. The inexperienced American militia, which had congregated at Bladensburg, Maryland, to protect the capital, was routed in the Battle of Bladensburg, opening the route to Washington. While Dolley Madison saved valuables from the Presidential Mansion, President James Madison was forced to flee to Virginia.
The British commanders ate the supper that had been prepared for the President before they burned the Presidential Mansion; American morale was reduced to an all-time low. The British viewed their actions as retaliation for destructive American raids into Canada, most notably the Americans' burning of York (now Toronto) in 1813. Later that same evening, a furious storm swept into Washington, D.C., sending one or more tornadoes into the city that caused more damage but finally extinguished the fires with torrential rains. The naval yards were set afire at the direction of U.S. officials to prevent the capture of naval ships and supplies. The British left Washington, D.C. as soon as the storm subsided. Having destroyed Washington's public buildings, including the President's Mansion and the Treasury, the British army next moved to capture Baltimore, a busy port and a key base for American privateers. The subsequent Battle of Baltimore began with the British landing at North Point, where they were met by American militia. An exchange of fire began, with casualties on both sides. General Ross was killed by an American sniper as he attempted to rally his troops. The sniper himself was killed moments later, and the British withdrew. The British also attempted to attack Baltimore by sea on September 13 but were unable to reduce Fort McHenry, at the entrance to Baltimore Harbor.
The Battle of Fort McHenry was no battle at all. The British guns outranged the American cannon, so the ships stood off beyond U.S. range and bombarded the fort, which returned no fire. Their plan was to coordinate with a land force, but from that distance coordination proved impossible, so the British called off the attack and left. All the lights were extinguished in Baltimore the night of the attack, and the fort was bombarded for 25 hours. The only light was given off by the exploding shells over Fort McHenry, illuminating the flag that was still flying over the fort. The defence of the fort inspired the American lawyer Francis Scott Key to write a poem that would eventually supply the lyrics to "The Star-Spangled Banner."
American leaders assumed that Canada could be easily overrun. Former President Jefferson optimistically referred to the conquest of Canada as "a matter of marching." Many Loyalist Americans had migrated to Upper Canada after the Revolutionary War, and it was assumed they would favor the American cause, but they did not. In prewar Upper Canada, General Prevost found himself in the unusual position of purchasing many provisions for his troops from the American side. This peculiar trade persisted throughout the war in spite of an abortive attempt by the American government to curtail it. In Lower Canada, which was much more populous, support for Britain came from the English elite, with their strong loyalty to the Empire, and from the French elite, who feared that American conquest would destroy the old order by introducing Protestantism, Anglicization, republican democracy, and commercial capitalism, and by weakening the Catholic Church. The French inhabitants also feared losing a shrinking supply of good land to potential American immigrants.
In 1812–13, British military experience prevailed over inexperienced American commanders. Geography dictated that operations would take place in the west: principally around Lake Erie, near the Niagara River between Lake Erie and Lake Ontario, and near the Saint Lawrence River area and Lake Champlain. This was the focus of the three-pronged attacks by the Americans in 1812. Although cutting the St. Lawrence River through the capture of Montreal and Quebec would have made Britain's hold in North America unsustainable, the United States began operations first in the western frontier because of the general popularity there of a war with the British, who had sold arms to the American natives opposing the settlers.
The British scored an important early success when their detachment at St. Joseph Island, on Lake Huron, learned of the declaration of war before the nearby American garrison at the important trading post at Mackinac Island, in Michigan. A scratch force landed on the island on July 17, 1812, and mounted a gun overlooking Fort Mackinac. After the British fired one shot from their gun, the Americans, taken by surprise, surrendered. This early victory encouraged the natives, and large numbers of them moved to help the British at Amherstburg.
An American army under the command of William Hull invaded Canada on July 12, with forces chiefly composed of militiamen. Once on Canadian soil, Hull issued a proclamation ordering all British subjects to surrender, or "the horrors, and calamities of war will stalk before you." He also threatened to kill any British prisoner caught fighting alongside a native. The proclamation helped stiffen resistance to the American attacks. The senior British officer in Upper Canada, Major General Isaac Brock, decided to oppose Hull's forces, feeling that he should take bold action to calm the settler population in Canada and to convince the aboriginal peoples, whose support was needed to defend the region, that Britain was strong. Hull, worried that his army was too weak to achieve its objectives, engaged only in minor skirmishing, and he felt still more vulnerable after the British captured a vessel on Lake Erie carrying his baggage, medical supplies, and important papers. On July 17, the American fort on Mackinac Island surrendered without a fight after a group of soldiers, fur traders, and native warriors, ordered by Brock to capture the settlement, deployed a piece of artillery overlooking the post before the garrison realised it. This capture secured British fur trade operations in the area and maintained a British connection to the Native American tribes in the Mississippi region, as well as inspiring a sizeable number of Natives of the upper lakes region to fight the United States. After learning of the capture, Hull believed that the tribes along the Detroit border would rise up, oppose him, and perhaps attack Americans on the frontier. On August 8 he withdrew most of his army from Canada back to Detroit, sent a request for reinforcements, and ordered the American garrison at Fort Dearborn to abandon the post for fear of an aboriginal attack.
Brock advanced on Fort Detroit with 1,200 men. He sent fake correspondence, which he allowed to be captured by the Americans, saying that he required no more than 5,000 native warriors to capture Detroit. Hull feared the natives and their threats of torture and scalping. Believing the British had more troops than they did, Hull surrendered at Detroit without a fight on August 16. Fearing British-instigated indigenous attacks on other locations, Hull ordered the evacuation of the inhabitants of Fort Dearborn (Chicago) to Fort Wayne. After initially being granted safe passage, the inhabitants (soldiers and civilians) were attacked by Potawatomis on August 15 after traveling two miles (3 km), in what is known as the Battle of Fort Dearborn. The fort was subsequently burned.
Brock promptly transferred himself to the eastern end of Lake Erie, where American General Stephen Van Rensselaer was attempting a second invasion. An armistice (arranged by Prevost in the hope the British renunciation of the Orders in Council to which the United States objected might lead to peace) prevented Brock from invading American territory. When the armistice ended, the Americans attempted an attack across the Niagara River on October 13, but suffered a crushing defeat at Queenston Heights. Brock was killed during the battle. While the professionalism of the American forces would improve by the war's end, British leadership suffered after Brock's death. A final attempt in 1812 by American General Henry Dearborn to advance north from Lake Champlain failed when his militia refused to advance beyond American territory.
In contrast to the American militia, the Canadian militia performed well. French Canadians, who found the anti-Catholic stance of most of the United States troublesome, and United Empire Loyalists, who had fought for the Crown during the American Revolutionary War, strongly opposed the American invasion. However, many in Upper Canada were recent settlers from the United States who had no obvious loyalties to the Crown. Nevertheless, while there were some who sympathised with the invaders, the American forces found strong opposition from men loyal to the Empire.
After Hull's surrender of Detroit, General William Henry Harrison was given command of the U.S. Army of the Northwest. He set out to retake the city, which was now defended by Colonel Henry Procter in conjunction with Tecumseh. A detachment of Harrison's army was defeated at Frenchtown along the River Raisin on January 22, 1813. Procter left the prisoners with an inadequate guard, who could not prevent some of his North American aboriginal allies from attacking and killing perhaps as many as sixty Americans, many of whom were Kentucky militiamen. The incident became known as the "River Raisin Massacre." The defeat ended Harrison's campaign against Detroit, and the phrase "Remember the River Raisin!" became a rallying cry for the Americans.
In May 1813, Procter and Tecumseh laid siege to Fort Meigs in northern Ohio. American reinforcements arriving during the siege were defeated by the natives, but the fort held out. The Indians eventually began to disperse, forcing Procter and Tecumseh to return to Canada. A second offensive against Fort Meigs also failed in July. In an attempt to improve Indian morale, Procter and Tecumseh attempted to storm Fort Stephenson, a small American post on the Sandusky River, only to be repulsed with serious losses, marking the end of the Ohio campaign.
On Lake Erie, American commander Captain Oliver Hazard Perry fought the Battle of Lake Erie on September 10, 1813. His decisive victory ensured American control of the lake, improved American morale after a series of defeats, and compelled the British to fall back from Detroit. This paved the way for General Harrison to launch another invasion of Upper Canada, which culminated in the U.S. victory at the Battle of the Thames on October 5, 1813, in which Tecumseh was killed. Tecumseh's death effectively ended the North American indigenous alliance with the British in the Detroit region. American control of Lake Erie meant the British could no longer provide essential military supplies to their aboriginal allies, who therefore dropped out of the war. The Americans controlled the area during the war.
Because of the difficulties of land communications, control of the Great Lakes and the St. Lawrence River corridor was crucial. When the war began, the British already had a small squadron of warships on Lake Ontario and had the initial advantage. To redress the situation, the Americans established a Navy yard at Sackett's Harbor, New York. Commodore Isaac Chauncey took charge of the large number of sailors and shipwrights sent there from New York; they completed the second warship built there in a mere 45 days. Ultimately, 3000 men worked at the shipyard, building eleven warships and many smaller boats and transports. Having regained the advantage by their rapid building program, Chauncey and Dearborn attacked York (now called Toronto), the capital of Upper Canada, on April 27, 1813. The Battle of York was an American victory, marred by looting and the burning of the Parliament buildings and a library. However, Kingston was strategically more valuable to British supply and communications along the St. Lawrence. Without control of Kingston, the U.S. navy could not effectively control Lake Ontario or sever the British supply line from Lower Canada.
On May 27, 1813, an American amphibious force from Lake Ontario assaulted Fort George on the northern end of the Niagara River and captured it without serious losses. The retreating British forces were not pursued, however, until they had largely escaped and organised a counteroffensive against the advancing Americans at the Battle of Stoney Creek on June 5. On June 24, with the help of advance warning by Loyalist Laura Secord, another American force was forced to surrender by a much smaller British and native force at the Battle of Beaver Dams, marking the end of the American offensive into Upper Canada. Meanwhile, Commodore James Lucas Yeo had taken charge of the British ships on the lake and mounted a counterattack, which was nevertheless repulsed at the Battle of Sackett's Harbor. Thereafter, Chauncey and Yeo's squadrons fought two indecisive actions, neither commander seeking a fight to the finish.
Late in 1813, the Americans abandoned the Canadian territory they occupied around Fort George. They set fire to the village of Newark (now Niagara-on-the-Lake) on December 15, 1813, incensing the British and Canadians. Many of the inhabitants were left without shelter, freezing to death in the snow. This led to British retaliation following the Capture of Fort Niagara on December 18, 1813, and similar destruction at Buffalo on December 30, 1813.
In 1814, the contest for Lake Ontario turned into a building race. Eventually, by the end of the year, Yeo had constructed the HMS St. Lawrence, a first-rate ship of the line of 112 guns that gave him superiority, but the Engagements on Lake Ontario were an indecisive draw.
The British were potentially most vulnerable over the stretch of the St. Lawrence where it formed the frontier between Upper Canada and the United States. During the early days of the war, there was illicit commerce across the river. Over the winter of 1812 and 1813, the Americans launched a series of raids from Ogdensburg on the American side of the river, which hampered British supply traffic up the river. On February 21, Sir George Prevost passed through Prescott on the opposite bank of the river with reinforcements for Upper Canada. When he left the next day, the reinforcements and local militia attacked. At the Battle of Ogdensburg, the Americans were forced to retire.
For the rest of the year, Ogdensburg had no American garrison, and many residents of Ogdensburg resumed visits and trade with Prescott. This British victory removed the last American regular troops from the Upper St. Lawrence frontier and helped secure British communications with Montreal. Late in 1813, after much argument, the Americans made two thrusts against Montreal. The plan eventually agreed upon was for Major General Wade Hampton to march north from Lake Champlain and join a force under General James Wilkinson that would embark in boats and sail from Sackett's Harbor on Lake Ontario and descend the St. Lawrence. Hampton was delayed by bad roads and supply problems and also had an intense dislike of Wilkinson, which limited his willingness to support Wilkinson's plan. On October 25, his 4,000-strong force was defeated at the Chateauguay River by Charles de Salaberry's smaller force of French-Canadian Voltigeurs and Mohawks. Wilkinson's force of 8,000 set out on October 17, but was also delayed by bad weather. After learning that Hampton had been checked, Wilkinson heard that a British force under Captain William Mulcaster and Lieutenant Colonel Joseph Wanton Morrison was pursuing him, and by November 10, he was forced to land near Morrisburg, about 150 kilometers (90 mi.) from Montreal. On November 11, Wilkinson's rear guard, numbering 2,500, attacked Morrison's force of 800 at Crysler's Farm and was repulsed with heavy losses. After learning that Hampton could not renew his advance, Wilkinson retreated to the U.S. and settled into winter quarters. He resigned his command after a failed attack on a British outpost at Lacolle Mills.
By the middle of 1814, American generals, including Major Generals Jacob Brown and Winfield Scott, had drastically improved the fighting abilities and discipline of the army. Their renewed attack on the Niagara peninsula quickly captured Fort Erie. Winfield Scott then gained a victory over an inferior British force at the Battle of Chippawa on July 5. An attempt to advance further ended with a hard-fought but inconclusive battle at Lundy's Lane on July 25.
The outnumbered Americans withdrew but withstood a prolonged Siege of Fort Erie. The British suffered heavy casualties in a failed assault and were weakened by exposure and shortage of supplies in their siege lines. Eventually the British raised the siege, but American Major General George Izard took over command on the Niagara front and followed up only halfheartedly. The Americans lacked provisions, and eventually destroyed the fort and retreated across the Niagara.
Meanwhile, following the abdication of Napoleon, 15,000 British troops were sent to North America under four of Wellington’s ablest brigade commanders. Fewer than half were veterans of the Peninsula and the rest came from garrisons. Along with the troops came instructions for offensives against the United States. British strategy was changing, and like the Americans, the British were seeking advantages for the peace negotiations. Governor-General Sir George Prevost was instructed to launch an invasion into the New York–Vermont region. The army available to him outnumbered the American defenders of Plattsburgh, but control of this town depended on being able to control Lake Champlain. On the lake, the British squadron under Captain George Downie and the Americans under Master Commandant Thomas Macdonough were more evenly matched.
On reaching Plattsburgh, Prevost delayed the assault until the arrival of Downie in the hastily completed 36-gun frigate HMS Confiance. Prevost forced Downie into a premature attack, but then unaccountably failed to provide the promised military backing. Downie was killed and his naval force defeated at the naval Battle of Plattsburgh in Plattsburgh Bay on September 11, 1814. The Americans now had control of Lake Champlain; Theodore Roosevelt later termed it "the greatest naval battle of the war." The successful land defence was led by Alexander Macomb. To the astonishment of his senior officers, Prevost then turned back, saying it would be too hazardous to remain on enemy territory after the loss of naval supremacy. Prevost's political and military enemies forced his recall. In London, a naval court-martial of the surviving officers of the Plattsburgh Bay debacle decided that defeat had been caused principally by Prevost’s urging the squadron into premature action and then failing to afford the promised support from the land forces. Prevost died suddenly, just before his own court-martial was to convene. Prevost's reputation sank to a new low, as Canadians claimed that their militia under Brock did the job and he failed. Recently, however, historians have been more kindly, measuring him not against Wellington but against his American foes. They judge Prevost’s preparations for defending the Canadas with limited means to be energetic, well-conceived, and comprehensive; and against the odds, he had achieved the primary objective of preventing an American conquest.
Far to the west of where regular British forces were fighting, more than 65 forts were built in the Illinois Territory, mostly by American settlers. Skirmishes between settlers and U.S. soldiers against natives allied to the British occurred throughout the Mississippi River valley during the war. The Sauk were considered the most formidable tribe.
At the beginning of the war, Fort Osage, the westernmost U.S. outpost along the Missouri River, was abandoned. In September 1813, Fort Madison, an American outpost in what is now Iowa, was abandoned after it was attacked and besieged by natives, who had support from the British. This was one of the few battles fought west of the Mississippi. Black Hawk participated in the siege of Fort Madison, which helped to form his reputation as a resourceful Sauk leader.
Little of note took place on Lake Huron in 1813, but the American victory on Lake Erie and the recapture of Detroit isolated the British there. During the ensuing winter, a Canadian party under Lieutenant Colonel Robert McDouall established a new supply line from York to Nottawasaga Bay on Georgian Bay. When he arrived at Fort Mackinac with supplies and reinforcements, he sent an expedition to recapture the trading post of Prairie du Chien in the far west. The Siege of Prairie du Chien ended in a British victory on July 20, 1814.
Earlier in July, the Americans sent a force of five vessels from Detroit to recapture Mackinac. A mixed force of regulars and volunteers from the militia landed on the island on August 4. They did not attempt to achieve surprise, and at the brief Battle of Mackinac Island, they were ambushed by natives and forced to re-embark. The Americans discovered the new base at Nottawasaga Bay, and on August 13, they destroyed its fortifications and a schooner that they found there. They then returned to Detroit, leaving two gunboats to blockade Mackinac. On September 4, these gunboats were taken unawares and captured by enemy boarding parties from canoes and small boats. This Engagement on Lake Huron left Mackinac under British control.
The British garrison at Prairie du Chien also fought off another attack by Major Zachary Taylor. In this distant theatre, the British retained the upper hand until the end of the war, through the allegiance of several indigenous tribes that received British gifts and arms. In 1814 U.S. troops retreating from the Battle of Credit Island on the upper Mississippi attempted to make a stand at Fort Johnson, but the fort was soon abandoned, along with most of the upper Mississippi valley.
After the U.S. was pushed out of the Upper Mississippi region, they held on to eastern Missouri and the St. Louis area. Two notable battles fought against the Sauk were the Battle of Cote Sans Dessein, in April 1815, at the mouth of the Osage River in the Missouri Territory, and the Battle of the Sink Hole, in May 1815, near Fort Cap au Gris.
At the conclusion of peace, Mackinac and other captured territory was returned to the United States. Fighting between Americans, the Sauk, and other indigenous tribes continued through 1817, well after the war ended in the east.
In March 1814, Jackson led a force of Tennessee militia, Choctaw, Cherokee warriors, and U.S. regulars southward to attack the Creek tribes, led by Chief Menawa. On March 26, Jackson and General John Coffee decisively defeated the Creek at Horseshoe Bend, killing 800 of 1,000 Creeks at a cost of 49 killed and 154 wounded out of approximately 2,000 American and Cherokee forces. Jackson pursued the surviving Creek until they surrendered. Most historians consider the Creek War as part of the War of 1812, because the British supported them.
By 1814, both sides, weary of a costly war that seemingly offered nothing but stalemate, were ready to grope their way to a settlement and sent delegates to Ghent, Belgium. The negotiations began in early August and dragged on until Dec. 24, when a final agreement was signed; both sides had to ratify it before it could take effect. Meanwhile both sides planned new invasions.
It is difficult to measure accurately the costs of the American war to Britain, because they are bound up in general expenditure on the Napoleonic War in Europe. But an estimate may be made based on the increased borrowing undertaken during the period, with the American war as a whole adding some £25 million to the national debt. In the U.S., the cost was $105 million, although because the British pound was worth considerably more than the dollar, the costs of the war to both sides were roughly equal. The national debt rose from $45 million in 1812 to $127 million by the end of 1815, although by selling bonds and treasury notes at deep discounts—and often for irredeemable paper money due to the suspension of specie payment in 1814—the government received only $34 million worth of specie. By this time, the British blockade of U.S. ports was having a detrimental effect on the American economy. Licensed flour exports, which had been close to a million barrels in 1812 and 1813, fell to 5,000 in 1814. By this time, insurance rates on Boston shipping had reached 75%, coastal shipping was at a complete standstill, and New England was considering secession. Exports and imports fell dramatically as American shipping engaged in foreign trade dropped from 948,000 tons in 1811 to just 60,000 tons by 1814. But although American privateers found chances of success much reduced, with most British merchantmen now sailing in convoy, privateering continued to prove troublesome to the British. With insurance rates between Liverpool, England and Halifax, Nova Scotia rising to 30%, the Morning Chronicle complained that with American privateers operating around the British Isles, "We have been insulted with impunity." The British could not fully celebrate a great victory in Europe until there was peace in North America, and more pertinently, taxes could not come down until such time. Landowners particularly balked at continued high taxation; both they and the shipping interests urged the government to secure peace.
Britain, which had forces in uninhabited areas near Lake Superior and Lake Michigan and two towns in Maine, demanded the ceding of large areas, plus turning most of the Midwest into a neutral zone for Indians. American public opinion was outraged when Madison published the demands; even the Federalists were now willing to fight on. The British were planning three invasions. One force burned Washington but failed to capture Baltimore, and sailed away when its commander was killed. In New York, 10,000 British veterans were marching south until a decisive defeat at the Battle of Plattsburgh forced them back to Canada. Nothing was known of the fate of the third large invasion force aimed at capturing New Orleans and the southwest. The Prime Minister wanted the Duke of Wellington to command in Canada and finally win the war; Wellington said no, because the war was a military stalemate and should be promptly ended:
I think you have no right, from the state of war, to demand any concession of territory from America ... You have not been able to carry it into the enemy's territory, notwithstanding your military success and now undoubted military superiority, and have not even cleared your own territory on the point of attack. You can not on any principle of equality in negotiation claim a cession of territory except in exchange for other advantages which you have in your power ... Then if this reasoning be true, why stipulate for the uti possidetis? You can get no territory: indeed, the state of your military operations, however creditable, does not entitle you to demand any.
With a rift opening between Britain and Russia at the Congress of Vienna and little chance of improving the military situation in North America, Britain was prepared to end the war promptly. In concluding the war, the Prime Minister, Lord Liverpool, was taking into account domestic opposition to continued taxation, especially among Liverpool and Bristol merchants—keen to get back to doing business with America—and there was nothing to gain from prolonged warfare.
On December 24, 1814, diplomats from the two countries, meeting in Ghent, United Kingdom of the Netherlands (now in Belgium), signed the Treaty of Ghent. This was ratified by the Americans on February 16, 1815. The British government approved the treaty within a few hours of receiving it and the Prince Regent signed it on December 27, 1814.
Unaware of the peace, Andrew Jackson's forces moved to New Orleans, Louisiana in late 1814 to defend against a large-scale British invasion. Jackson defeated the British at the Battle of New Orleans on January 8, 1815. At the end of the day, the British had a little over 2,000 casualties: 278 dead (including Major Generals Pakenham and Gibbs), 1,186 wounded (among them Major General Keane), and 484 captured or missing. The Americans had 71 casualties: 13 dead, 39 wounded, and 19 missing. It was hailed as a great victory for the U.S., making Jackson a national hero and eventually propelling him to the presidency.
The British gave up on New Orleans but moved to attack the Gulf Coast port of Mobile, Alabama, which the Americans had seized from the Spanish in 1813. In one of the last military actions of the war, 1,000 British troops won the Battle of Fort Bowyer on February 12, 1815. When news of peace arrived the next day, they abandoned the fort and sailed home. In May 1815, a band of British-allied Sauk, unaware that the war had ended months earlier, attacked a small band of U.S. soldiers northwest of St. Louis. Intermittent fighting, primarily with the Sauk, continued in the Missouri Territory well into 1817, although it is unknown if the Sauk were acting on their own or on behalf of Great Britain. Several American warships, isolated at sea and still unaware of the peace, continued fighting well into 1815 and were the last American forces to take offensive action against the British.
British losses in the war were about 1,600 killed in action and 3,679 wounded; 3,321 British died from disease. American losses were 2,260 killed in action and 4,505 wounded. While the number of Americans who died from disease is not known, it is estimated that 17,000 perished. These figures do not include deaths among American or Canadian militia forces or losses among native tribes.
In addition, at least 3,000 American slaves escaped to the British because of their offer of freedom, the same offer they had made during the American Revolution. Many other slaves simply escaped in the chaos of war and achieved their freedom on their own. The British settled some of the newly freed slaves in Nova Scotia. Four hundred freedmen were settled in New Brunswick. The Americans protested that Britain's failure to return the slaves violated the Treaty of Ghent. After arbitration by the Czar of Russia the British paid $1,204,960 in damages to Washington, which reimbursed the slaveowners.
The war was ended by the Treaty of Ghent, signed on December 24, 1814 and taking effect February 18, 1815. The terms stated that fighting between the United States and Britain would cease, all conquered territory was to be returned to the prewar claimant, the Americans were to gain fishing rights in the Gulf of Saint Lawrence, and that the United States and Britain agreed to recognise the prewar boundary between Canada and the United States.
The Treaty of Ghent, which was promptly ratified by the Senate in 1815, ignored the grievances that led to war. American complaints of Indian raids, impressment and blockades had ended when Britain's war with France (apparently) ended, and were not mentioned in the treaty. The treaty proved to be merely an expedient to end the fighting. Mobile and parts of western Florida remained permanently in American possession, despite objections by Spain. Thus, the war ended with no significant territorial losses for either side.
Neither side lost territory in the war, nor did the treaty that ended it address the original points of contention—and yet it changed much between the United States of America and Britain.
The Treaty of Ghent established the status quo ante bellum; that is, there were no territorial changes made by either side. The issue of impressment was made moot when the Royal Navy stopped impressment after the defeat of Napoleon. Except for occasional border disputes and the circumstances of the American Civil War, relations between the United States and Britain remained generally peaceful for the rest of the nineteenth century, and the two countries became close allies in the twentieth century.
Border adjustments between the United States and British North America were made in the Treaty of 1818. A border dispute along the Maine-New Brunswick border was settled by the 1842 Webster-Ashburton Treaty after the bloodless Aroostook War, and the border in the Oregon Territory was settled by splitting the disputed area in half by the 1846 Oregon Treaty. Yet, according to Winston Churchill, "The lessons of the war were taken to heart. Anti-American sentiment in Britain ran high for several years, but the United States was never again refused proper treatment as an independent power."
The U.S. ended the aboriginal threat on its western and southern borders. The nation also gained a psychological sense of complete independence as people celebrated their "second war of independence." Nationalism soared after the victory at the Battle of New Orleans. The opposition Federalist Party collapsed, and the Era of Good Feelings ensued. The U.S. did make one minor territorial gain during the war, though not at Britain's expense, when it captured Mobile, Alabama from Spain.
No longer questioning the need for a strong Navy, the United States built three new 74-gun ships of the line and two new 44-gun frigates shortly after the end of the war. (Another frigate had been destroyed to prevent it being captured on the stocks.) In 1816, the U.S. Congress passed into law an "Act for the gradual increase of the Navy" at a cost of $1,000,000 a year for eight years, authorizing nine ships of the line and 12 heavy frigates. The Captains and Commodores of the U.S. Navy became the heroes of their generation in the United States. Decorated plates and pitchers of Decatur, Hull, Bainbridge, Lawrence, Perry, and Macdonough were made in Staffordshire, England, and found a ready market in the United States. Three of the war heroes used their celebrity to win national office: Andrew Jackson (elected President in 1828 and 1832), Richard Mentor Johnson (elected Vice President in 1836), and William Henry Harrison (elected President in 1840).
New England states became increasingly frustrated over how the war was being conducted and how the conflict was affecting them. They complained that the United States government was not investing enough in the states' defences militarily and financially and that the states should have more control over their militia. The increased taxes, the British blockade, and the occupation of some of New England by enemy forces also agitated public opinion in the states. As a result, at the Hartford Convention (December 1814–January 1815) held in Connecticut, New England representatives asked that the powers of their states be fully restored. Nevertheless, a common misconception propagated by newspapers of the time was that the New England representatives wanted to secede from the Union and make a separate peace with the British. This view is not supported by what happened at the Convention.
Slaveholders primarily in the South suffered considerable loss of property as tens of thousands of slaves escaped to British lines or ships for freedom, despite the difficulties. The planters' complacency about slave contentment was shocked by their seeing slaves who would risk so much to be free.
Today, American popular memory includes the British capture and destruction of the U.S. Presidential Mansion in August 1814, which necessitated its extensive renovation. From this event has arisen the tradition that the building's new white paint inspired a popular new nickname, the White House. However, the tale appears apocryphal; the name "White House" is first attested in 1811. Another memory is the successful American defence of Fort McHenry in September 1814, which inspired the lyrics of the U.S. national anthem, The Star-Spangled Banner.
The War of 1812 was seen by Loyalists in British North America (which formed the Dominion of Canada in 1867) as a victory, as they had successfully defended their borders from an American takeover. The outcome gave Empire-oriented Canadians confidence and, together with the postwar "militia myth" that the civilian militia had been primarily responsible rather than the British regulars, was used to stimulate a new sense of Canadian nationalism.
A long-term implication of the militia myth — which was false, but remained popular in the Canadian public at least until World War I — was that Canada did not need a regular professional army. The U.S. Army had done poorly, on the whole, in several attempts to invade Canada, and the Canadians had shown that they would fight bravely to defend their country. But the British did not doubt that the thinly populated territory would be vulnerable in a third war. "We cannot keep Canada if the Americans declare war against us again," Admiral Sir David Milne wrote to a correspondent in 1817.
The Battle of York demonstrated the vulnerability of Upper and Lower Canada. In the 1820s, work began on La Citadelle at Quebec City as a defence against the United States; the fort remains an operational base of the Canadian Forces. Additionally, work began on the Halifax citadel to defend the port against American attacks. This fort remained in operation through World War II.
In the 1830s, the Rideau Canal was built to provide a secure waterway from Montreal to Lake Ontario, avoiding the narrows of the St. Lawrence River, where ships could be vulnerable to American cannon fire. To defend the western end of the canal, the British also built Fort Henry at Kingston, which remained operational until 1891.
The Native Americans allied to Great Britain lost their cause. The British proposal to create a "neutral" Indian zone in the American West was rejected at the Ghent peace conference and never resurfaced. In the decade after 1815, many white Americans assumed that the British continued to conspire with their former native allies in an attempt to forestall U.S. hegemony in the Great Lakes region. Such perceptions were faulty. After the Treaty of Ghent, the natives became an undesirable burden to British policymakers who now looked to the United States for markets and raw materials. British agents in the field continued to meet regularly with their former native partners, but they did not supply arms or encouragement for Indian campaigns to stop U.S. expansionism in the Midwest. Abandoned by their powerful sponsor, Great Lakes-area natives ultimately migrated or reached accommodations with the American authorities and settlers. In the Southeast, Indian resistance had been crushed by General Andrew Jackson; as President (1829–37), Jackson systematically removed the major tribes to reservations west of the Mississippi.
Bermuda had been largely left to the defences of its own militia and privateers prior to U.S. independence, but the Royal Navy had begun buying up land and operating from there in 1795, as its location was a useful substitute for the lost U.S. ports. It originally was intended to be the winter headquarters of the North American Squadron, but the war saw it rise to a new prominence. As construction work progressed through the first half of the century, Bermuda became the permanent naval headquarters in Western waters, housing the Admiralty and serving as a base and dockyard. The military garrison was built up to protect the naval establishment, heavily fortifying the archipelago that came to be described as the "Gibraltar of the West." Defence infrastructure would remain the central leg of Bermuda's economy until after World War II.
The war was scarcely noticed then and is barely remembered in Britain because it was overshadowed by the far-larger conflict against the French Empire under Napoleon. Britain's goals of impressing seamen and blocking trade with France had been achieved and were no longer needed. The Royal Navy was the world's dominant nautical power in the early 19th century (and would remain so for another century). During the War of 1812, it had used its overwhelming strength to cripple American maritime trade and launch raids on the American coast. The United States Navy had only 14 frigates and smaller ships to crew at the start of the war, while Britain maintained 85 ships in North American waters alone. Yet—as the Royal Navy was acutely aware—the U.S. Navy had won most of the single-ship duels during the war. The causes of the losses were many, but among those were the heavier broadside of the American 44-gun frigates and the fact that the large crew on each U.S. Navy ship was hand-picked from among the approximately 55,000 unemployed merchant seamen in American harbors. The crews of the British fleet, which numbered some 140,000 men, were rounded out with impressed ordinary seamen and landsmen. In an order to his ships, Admiral John Borlase Warren ordered that less attention be paid to spit-and-polish and more to gunnery practice. It is notable that the well-trained gunnery of HMS Shannon allowed her victory over the untrained crew of the USS Chesapeake.
The War of 1812 was fought between the British Empire and the United States from 1812 to 1815 on land in North America and at sea. More than half of the British forces were made up of Canadian militia (volunteers) because British soldiers had to fight Napoleon in Europe. The British defeated the attacking American forces. In the end, the war created a greater sense of nationalism in both Canada and the United States.
Some people in the United States wanted to maintain their independence. Some also wanted the United States to take over Canada. The war began when the United States started to attack the Canadian provinces in 1812 and 1813, but the borders were successfully defended by the British. In 1813, British and American ships fought on Lake Erie in a battle known as the Battle of Lake Erie. Americans under Oliver Hazard Perry won.
In 1814, British soldiers landed in the United States. They burned the public buildings of Washington, D.C., and also attacked Baltimore. It was during this battle that a poem was written by an American lawyer, Francis Scott Key. The poem later became the national anthem of the United States: "The Star-Spangled Banner." The final battle of the war took place in January of 1815. The British attacked New Orleans and were beaten by the Americans and General Andrew Jackson. The battle took place after the peace treaty had been signed.
The War of 1812 ended in 1815 even though the signing of the Treaty of Ghent, which was supposed to end the war, happened on Dec 24, 1814, in Belgium. Both sides thought they had won, but no great changes took place. News of the peace treaty did not reach the US until after the battle in New Orleans in January 1815.
An n dimensional pyramid or cone is a geometric figure consisting of an (n-1) dimensional base and a vertical axis such that the cross-section of the figure at any height y is a scaled down version of the base. The cross-section becomes zero at some height H. The point at which the cross-section is zero is called the vertex. The distinction between a pyramid and cone is that the base of a pyramid is a geometric figure with a finite number of sides whereas there are no such restrictions for the base of a cone (and thus a pyramid is a special case of a cone).
A two dimensional pyramid is just a triangle and a three dimensional pyramid is the standard type pyramid with a polygonal base and triangular sides composed of the sides of the base connected to the vertex. The area-volume formulas for these two cases are well known: the area of a triangle is (1/2)BH and the volume of a pyramid is (1/3)BH, where B is the length or area of the base and H is the height.
In order to deal with the general n dimensional case it is necessary to derive the area of the triangle systematically. The area of a triangle can be found as the limit of a sequence of approximations in which the triangle is covered by a set of rectangles as shown in the diagrams below.
In the above construction the vertical axis of the triangle is divided into m equal intervals. The width of a rectangle used in the covering is the width of the triangle at that height. As the subdivision of the vertical axis of the triangle becomes finer and finer the sum of the areas of the rectangles approaches a limit which is called the area of the triangle.
The process can be represented algebraically. For a pyramid/cone of height H the distance from the vertex is H-y where y is the distance from the base. Let s = (1 - y/H) be the scale factor for a cross-section of the cone at a height y above the base. The area of the (n-1)-dimensional cross-section is equal to the area of the base multiplied by a factor of s^(n-1).
The n-dimensional volume of the cone, V_n(B,H), is approximated by the sum of the volumes of the prisms created by the subdivision of the vertical axis. The limit of that sum as the subdivision becomes finer and finer can be expressed as an integral; i.e., V_n(B,H) = A(B) ∫ from 0 to H of (1 - y/H)^(n-1) dy, where A(B) is the (n-1)-dimensional area of the base B.
The general formula is then V_n(B,H) = A(B)·H/n. For n = 2 this reduces to the familiar triangle area (1/2)BH, and for n = 3 to the pyramid volume (1/3)BH.
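Since the figures and displayed equations of the original page are not reproduced here, a quick numerical check may be reassuring. The short Python sketch below is an illustration added to this text, not part of the original; it approximates the cone volume by summing thin prisms, exactly as in the construction above, and compares the result with A(B)·H/n.

# Approximate V_n = A(B) * H / n by summing thin prisms of thickness H/m,
# each with cross-sectional area A(B) * (1 - y/H)**(n - 1).
def cone_volume_riemann(base_area, height, n, m=100000):
    dy = height / m
    total = 0.0
    for i in range(m):
        y = (i + 0.5) * dy          # midpoint of the i-th slab
        s = 1.0 - y / height        # scale factor of the cross-section
        total += base_area * s ** (n - 1) * dy
    return total

for n in (2, 3, 4, 5):
    approx = cone_volume_riemann(base_area=1.0, height=1.0, n=n)
    print(n, approx, 1.0 / n)       # Riemann sum vs. the exact value A(B)*H/n

With the base area and height both set to 1, the printed approximations agree with 1/n to several decimal places.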
The above general formula can be used to establish a relationship between the volume of an n-dimensional ball and the (n-1)-dimensional area which bounds it. Consider the approximation of the area of a disk of radius r by triangles as shown below:
Each of the triangles has a height of r, so the sum of the areas of the triangles is equal to one half the height r times the sum of the bases. In the limit the sum of the bases is equal to the perimeter of the circle, so the area of the disk is equal to (1/2)r(2πr) = πr². Likewise the volume of a ball can be approximated by triangulating the spherical surface and creating pyramids whose vertices are all at the center of the ball and whose bases are the triangles at the surface. The height of all these pyramids is the radius of the ball, r. Thus the volume is equal to one third of the height r times the sum of the base areas. In the limit the sum of the base areas is equal to the area of the sphere, 4πr². Thus the volume of the ball of radius r is equal to (1/3)r(4πr²); i.e., (4/3)πr³.
Generalizing, this means that V_n(r) = (r/n)·S_(n-1)(r), where V_n(r) is the volume of an n-dimensional ball of radius r and S_(n-1)(r) is the (n-1)-dimensional area of the sphere that bounds it.
Unfortunately this relation is of no practical help in finding the formula for the volume of an n-dimensional ball in that the formula for the area of the surface of an n-dimensional ball is more obscure than that of the volume. Nevertheless it is interesting to perceive an n-dimensional ball as being composed of n-dimensional pyramids.
This section describes two methods for checking the primality of an integer n, one with order of growth Θ(√n), and a “probabilistic” algorithm with order of growth Θ(log n). The exercises at the end of this section suggest programming projects based on these algorithms.
Searching for divisors
Since ancient times, mathematicians have been fascinated by problems concerning prime numbers, and many people have worked on the problem of determining ways to test if numbers are prime. One way to test if a number is prime is to find the number’s divisors. The following program finds the smallest integral divisor (greater than 1) of a given number n. It does this in a straightforward way, by testing n for divisibility by successive integers starting with 2.
(define (smallest-divisor n)
  (find-divisor n 2))

(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))

(define (divides? a b)
  (= (remainder b a) 0))
We can test whether a number is prime as follows: n is prime if and only if n is its own smallest divisor.
(define (prime? n) (= n (smallest-divisor n)))
The end test for find-divisor is based on the fact that if n is not prime it must have a divisor less than or equal to √n. This means that the algorithm need only test divisors between 1 and √n. Consequently, the number of steps required to identify n as prime will have order of growth Θ(√n).
The Fermat test
The Θ(log n) primality test is based on a result from number theory known as Fermat’s Little Theorem.
Fermat’s Little Theorem:
If n is a prime number and a is any positive integer less than n, then a raised to the nth power is congruent to a modulo n.
(Two numbers are said to be congruent modulo n if they both have the same remainder when divided by n. The remainder of a number a when divided by n is also referred to as the remainder of a modulo n, or simply as a modulo n.)
If n is not prime, then, in general, most of the numbers a < n will not satisfy the above relation. This leads to the following algorithm for testing primality: Given a number n, pick a random number a < n and compute the remainder of a^n modulo n. If the result is not equal to a, then n is certainly not prime. If it is a, then chances are good that n is prime. Now pick another random number a and test it with the same method. If it also satisfies the equation, then we can be even more confident that n is prime. By trying more and more values of a, we can increase our confidence in the result. This algorithm is known as the Fermat test.
To implement the Fermat test, we need a procedure that computes the exponential of a number modulo another number:
(define (expmod base exp m)
  (cond ((= exp 0) 1)
        ((even? exp)
         (remainder (square (expmod base (/ exp 2) m))
                    m))
        (else
         (remainder (* base (expmod base (- exp 1) m))
                    m))))
The Fermat test is performed by choosing at random a number a between 1 and n - 1 inclusive and checking whether the remainder modulo n of the nth power of a is equal to a. The random number a is chosen using the procedure random, which we assume is included as a primitive in Scheme. Random returns a nonnegative integer less than its integer input. Hence, to obtain a random number between 1 and n - 1, we call random with an input of n - 1 and add 1 to the result:
(define (fermat-test n)
  (define (try-it a)
    (= (expmod a n n) a))
  (try-it (+ 1 (random (- n 1)))))
The following procedure runs the test a given number of times, as specified by a parameter. Its value is true if the test succeeds every time, and false otherwise.
(define (fast-prime? n times)
  (cond ((= times 0) true)
        ((fermat-test n) (fast-prime? n (- times 1)))
        (else false)))
The Fermat test differs in character from most familiar algorithms, in which one computes an answer that is guaranteed to be correct. Here, the answer obtained is only probably correct. More precisely, if n ever fails the Fermat test, we can be certain that n is not prime. But the fact that n passes the test, while an extremely strong indication, is still not a guarantee that n is prime. What we would like to say is that for any number n, if we perform the test enough times and find that n always passes the test, then the probability of error in our primality test can be made as small as we like.
Unfortunately, this assertion is not quite correct. There do exist numbers that fool the Fermat test: numbers n that are not prime and yet have the property that a^n is congruent to a modulo n for all integers a < n. Such numbers are extremely rare, so the Fermat test is quite reliable in practice. There are variations of the Fermat test that cannot be fooled. In these tests, as with the Fermat method, one tests the primality of an integer n by choosing a random integer a < n and checking some condition that depends upon n and a. (See exercise 1.28 for an example of such a test.) On the other hand, in contrast to the Fermat test, one can prove that, for any n, the condition does not hold for most of the integers a < n unless n is prime. Thus, if n passes the test for some random choice of a, the chances are better than even that n is prime. If n passes the test for two random choices of a, the chances are better than 3 out of 4 that n is prime. By running the test with more and more randomly chosen values of a we can make the probability of error as small as we like.
The existence of tests for which one can prove that the chance of error becomes arbitrarily small has sparked interest in algorithms of this type, which have come to be known as probabilistic algorithms. There is a great deal of research activity in this area, and probabilistic algorithms have been fruitfully applied to many fields.
Every physical measurement has three parts: a value, a unit, and a precision. The value is the numerical part of the measurement. The unit is the part that comes after the value: grams, or feet, or gallons, for example. The precision indicates the confidence we have in the value. For example, $29.87 is more precise than $30. This handout will outline a method which can be used to solve about 70% of the numerical problems you will encounter in college. If you learn only one thing from this course, it should be unit factor analysis. (I recommend that you learn more than one thing but if you're trying to economize ...) Remember one of the Chemistry Department mottos: Units Are Your Friends!
What follows is alternately known as "unit factor analysis," "dimensional analysis," or, in Dr. Dunn parlance, "the hotdog method." Let's consider some common hotdogs used in unit conversions:
Liter/Gallon        ( 3.79 L / 1 gal )
Ounce/Pound         ( 16 oz / 1 lb )
Inch/Foot           ( 12 in / 1 ft )
Centimeter/Inch     ( 2.54 cm / 1 inch )
Weight Percent      ( xxx g something / hg something that contains it )
Water Density       ( 1.00 kg water / L water )
Gram/Pound          ( 454 g / 1 lb )
mL/cm³              ( 1 mL / 1 cm³ )
Certain prefixes imply unit factors.
Now that we have some hotdogs, we can use them to solve a problem. Here are the steps in unit factor analysis: write down the units you want in the answer, write down the quantity you are starting from, multiply by unit factors that cancel the units you do not want and bring in the units you do want, and then do the arithmetic and round the result sensibly.
Consider a mead recipe which calls for 15 pounds of honey to make 5 gallons of mead. As a unit factor this becomes ( 15 pounds honey / 5 gallons mead ). We wish to make a smaller batch of mead, say 1.75 Liters (in a 2 L bottle). How much honey should we use? We could choose several units for our answer, pounds, ounces, or grams. Since honey is sold by the ounce in grocery stores, we will choose the ounce as the unit of our answer.
Ounces honey = 1.75 Liters mead ( 1 gallon / 3.79 liters )( 15.0 pounds honey / 5.0 gallons mead)( 16 ounces / pound)
= 22.163588 ounces honey
= 22 ounces honey
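For readers who want to double-check the arithmetic, the same chain of hotdogs can be written as a short program. The Python sketch below is only an illustration added to this handout; the numbers are taken from the example above.

# Each pair below is one "hotdog": (numerator, denominator).
# Multiplying them all together is the same as the chain of unit factors above.
liters_mead = 1.75
hotdogs = [
    (1.0, 3.79),   # ( 1 gal / 3.79 L )
    (15.0, 5.0),   # ( 15 pounds honey / 5 gallons mead )
    (16.0, 1.0),   # ( 16 oz / 1 lb )
]

ounces_honey = liters_mead
for top, bottom in hotdogs:
    ounces_honey *= top / bottom

print(round(ounces_honey, 1))   # prints 22.2, i.e. about 22 ounces of honey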
Now suppose the grocery store only has bottles with 12, 16, and 32 ounces of honey. The closest we can come will be 2*12=24 ounces. If we want to keep the same ratio of honey to mead without wasting our honey, how much mead should we make?
Liters mead = 24 ounces honey ( 1 pound / 16 ounces )( 5 gallons mead / 15 pounds honey)( 3.79 liters / 1 gallon)
= 1.895 liters mead
= 1.9 liters mead
That is, if we want to use 24 ounces of honey, we will have to fill our 2 L bottle almost full to keep to the recipe proportions.
As the semester progresses, we will add new hotdogs to the menu. Any time we encounter an equality we can generate a new kind of hotdog. We will learn, for example, that 1 mole of carbon weighs 12 grams, that 1 mole of glucose weighs 180 grams, and that there are 6 moles of carbon in a mole of glucose. Now, at this point you probably don't even know what a mole is. But you can still work problems like the following:
What is the weight percent of carbon in glucose?
Well, let's start with the unit of the answer.
( g C / hg glucose ) =
Here hg stands for hectograms; one hectogram is 100 grams. We need something that has units of grams of carbon on the right hand side. We only know one thing about grams of carbon and that is that 12 g C = 1 mole C. That is, ( 12 g C / 1 mol C ) is a unit factor.
( g C / hg glucose ) = ( 12 g C / mol C )
We need to get rid of mole C and so we use the other hotdog that has moles C in it.
( g C / hg glucose ) = ( 12 g C / mol C )( 6 mol C / 1 mol glucose)
Yes, and now we have a pesky mol of glucose to get rid of and so we use our final piece of information, (180 g glucose/mol glucose).
( g C / hg glucose ) = ( 12 g C / mol C )( 6 mol C / 1 mol glucose)( 1 mol glucose / 180 g glucose )
Finally we need to convert grams to hectagrams: (100 g/hg)
( g C / hg glucose ) = ( 12 g C / mol C )( 6 mol C / 1 mol glucose)( 1 mol glucose / 180 g glucose )( 100 g / hg )
= 40.000 ( g C / hg glucose )
= 40% carbon in glucose
We have just solved a common general chemistry problem and we don't even know what a mole is. That's some kind of powerful method. All you need is the units of the answer and enough hotdogs to get rid of the units you don't like and introduce the units you do like.
Many times you may know the dimensions of a container and you need to know the volume. If the container is rectangular, simply multiply the height, width, and depth. For a cylindrical container, the volume is (3.14)r²h (r = radius). For a spherical container, the volume is 4(3.14)r³/3. Notice that in each case the unit of volume is (length)³. You can also work backwards: if you know the volume of the container and its shape, you can get the dimensions.
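These formulas translate directly into code. The sketch below is an added illustration, not part of the original handout; it uses the more precise value of pi from Python's math module instead of 3.14.

from math import pi

def box_volume(height, width, depth):
    return height * width * depth        # units of (length)^3

def cylinder_volume(radius, height):
    return pi * radius ** 2 * height     # (3.14)r^2 h in the handout

def sphere_volume(radius):
    return 4 * pi * radius ** 3 / 3      # 4(3.14)r^3 / 3 in the handout

# Example: a 36" x 24" x 12" aquarium, in cubic inches
print(box_volume(36, 24, 12))            # 10368

From a volume in cubic inches you would still need hotdogs such as ( 2.54 cm / 1 inch ) and ( 1 mL / 1 cm³ ) to reach liters or gallons.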
Try your hand at these for practice:
Your gas mileage is 32 miles per gallon. Gas costs $1.37 per gallon. A mile is 5280 feet. How many dollars does it cost you to drive 10 kilometers?
An aquarium measures 36"x24"x12". How many gallons of water does it hold?
An aquarium measures 36"x24"x12". How many pounds of water does it hold?
This project is passed by quiz alone. When you are ready, I will give you a single problem to work by Unit Factor Analysis. I will expect that you have memorized the table of unit factors above. In addition, I may give you information that can be turned into unit factors. You will work this problem without notes, but you may use a calculator. If you do not get the correct answer, you fail. You may, however, keep taking the test (one per day) until you pass. Of course the problems will be different from day to day.
Focus: Students conduct a classwide inventory of human traits, construct histograms of the data they collect, and play a brief game that introduces the notion of each individual's uniqueness.
Major Concepts: Humans share many basic characteristics, but there is a wide range of variation in human traits. Most human traits are multifactorial: They are influenced by multiple genes and environmental factors.
Objectives: After completing this activity, students will
. understand that they share many traits;
. understand the extent of genetic similarity and variation among humans;
. be able to explain that most human traits are multifactorial, involving complex interactions of multiple genes and environmental factors; and
. understand that genetic variation can be beneficial, detrimental, or neutral.
Prerequisite Knowledge: Students should be familiar with constructing and interpreting histograms.
Basic Science-Health Connection: This opening activity introduces human variation as a topic that can be systematically studied using the methods of science (for example, gathering and analyzing data). This idea sets the stage for Activity 2, in which students consider the significance of human genetic variation at the molecular level.
This activity introduces the module by focusing explicitly on human variation. The primary vehicle is a class inventory of human traits that highlights similarities and differences. Although variation, both phenotypic and genotypic, is the central focus of all five activities in the module, this concept is less explicit in subsequent activities than in this activity.
One goal of the Human Genome Project was to provide the complete sequence of the human genome. Another goal of the genome project is to illuminate the extent of human genetic variation by providing a detailed picture of human similarities and differences at the molecular level. Research indicates that any two individuals are 99.9 percent identical at the level of the DNA. The 0.1 percent where we vary from one another (about 1 out of 1,000 DNA bases) is clearly very important. It is within this small fraction of the genome that we find clues to the molecular basis for the phenotypic differences that distinguish each one of us from all others.
In this activity, students are introduced to the notion that although we are very similar to one another, we also are very different, and our differences reflect a complex interplay between genetic and environmental factors. This understanding sets the stage for subsequent activities in the module in which students learn about the molecular differences that help explain our phenotypic differences, and also consider some of the medical and ethical implications of scientists' growing understanding of these differences.
You will need to prepare the following materials before conducting this activity:
. plant, fish, prepared slide of bacteria
. Master 1.1, An Inventory of a Few Human Traits (make 1 copy per student)
. labeled axes on the board or wall in which students can enter data
Construct four sets of axes on the board or the classroom wall (use masking tape). Label the axes as shown in Figure 14.
. 120 3 X 5 cards (4 per student; required only if you construct the axes on the wall)
. tape measure (1 per pair of students)
. Master 1.2, Thinking About Human Variation (make 1 copy per student)
1. Begin the activity by telling the class something like, "If a visitor from another planet walked into this classroom, he might easily conclude that humans all look very much alike." If students complain that this is not true, answer with something like, "You certainly are more like one another than you are like this plant [point to the plant]. Or this fish [point to the fish]. And for sure, you are more alike than any one of you is like the bacteria on this slide [wave the prepared slide of bacteria in the air]. Humans—Homo sapiens—have a set of traits that define us as a species, just like all other species have a set of traits that define them."
2. Continue the activity by saying, "Let's see just how similar you are." Distribute one copy of Master 1.1, An Inventory of a Few Human Traits, to each student and ask students to work in pairs to complete them.
If students are unfamiliar with the following terms, provide the definitions below.
detached earlobes: Earlobes hang free, forming a distinct lobe.
hitchhiker's thumb: Most distal joint of thumb can form almost a 90 degree angle with the next most proximal joint.
middigital hair: Hair is present on digits distal to knuckles.
cross left thumb over right: Natural tendency is to cross left thumb over right when clasping hands together.
Figure 14 - Construct the four sets of axes shown here on the board or on a wall of your classroom.
3. As students complete the inventory, direct their attention to the four sets of labeled axes you prepared. Ask the students to enter their data at the appropriate place on each set of axes.
If you constructed the axes on the board, students can use chalk to record their data. If you used masking tape to construct the axes on the wall, ask students to record their data by taping one 3 X 5 card in the appropriate place on each set of axes.
Tip from the field test: You may wish to give males one color of chalk or 3 X 5 card to use in recording their data and give females a different color. This strategy will allow the class to determine if any of the three characteristics other than sex (for example, height) shows differences related to sex.
4. After the students have finished collecting and recording their data, ask them to look at the four histograms they built and identify what evidence they see in those data that they share many traits with other members of their class.
Students may answer that all people have only one nose, and all people are only one sex or the other.
5. Continue the activity by saying, "But now that I look around the room, it is clear that you are different. What evidence do you see in these data that people are different?"
Students should recognize that not everyone is the same height and not everyone has the same hair color.
As students look at the data, you may wish to ask them to compare the shapes of the histograms for sex and height. The sex histogram has two distinct peaks because there are only two categories of individuals—female and male. That is, sex is a discontinuous trait. In contrast, height is a continuous trait that has many categories of individuals, ranging from very short to very tall. The shape of the height histogram may begin to approach a bell curve, or normal distribution. It may also have two peaks—a bimodal distribution—with one peak representing the female students and the other peak representing the male students.
6. Challenge the students to try to describe just how different they are by guessing how many traits they would have to consider to identify any given student in the room as unique. Write the students' predictions on the board.
7. Conduct the game described below with several volunteers.
. Choose a volunteer to determine his or her "uniqueness" as compared with the other students.
. Ask all of the students to stand.
. Invite the volunteer to begin to identify his or her phenotype for each of the 13 human traits listed on An Inventory of a Few Human Traits. Begin with the first trait and proceed sequentially. As the volunteer lists his or her phenotype for each trait, direct the students who share the volunteer's phenotype for that trait to remain standing. Direct all other students to sit.
. Continue in this fashion until the volunteer is the only person still standing. Count how many traits the class had to consider to distinguish the volunteer from all other students in the class. Compare this number with the students' predictions.
. Repeat as desired with another volunteer.
Collect and review the students' completed worksheets to assess their understanding of the activity's major concepts.
Increasing evidence indicates that all human diseases have genetic and environmental components. Point out diseases such as cancer, heart disease, and diabetes as examples of traits that show an interaction between genetic and environmental factors. Students will consider this concept in Activity 4, Are You Susceptible?
8. Ask students to work in pairs to answer the questions on Master 1.2, Thinking About Human Variation.
Question 1 Some human traits can be changed by human intervention and some cannot. Provide examples of each of these types of traits.
Biological sex and blood type cannot be changed. Hair color, skin color, and even height and mental abilities can be changed by human intervention. Students also may suggest that body piercing alters human traits.
Question 2 You probably already know that some traits are genetic and others are environmental. But most human traits reflect an interaction between genetic and environmental factors. Name some traits that might fall into this category and explain why you think they do.
Height, weight, intelligence, and artistic or athletic ability are examples of traits that are influenced by genetic and environmental factors. Some students may mention disorders such as certain types of cancer or even psychiatric disorders. We know that these types of traits are both genetic and environmental because we see evidence that they run in families and because we know we can modify them by changing the environment.
Figure 15 - Most variation occurs within populations. A Venn diagram is a useful way to illustrate this idea to students. Note that the amount of genetic information that different populations have in common (areas where circles overlap) is much greater than the amount that is unique (areas where there is no overlap).
Question 3 Describe some of the benefits of human genetic variation. What are some of the potential problems that it can cause?
Students may mention a number of benefits, such as allowing people to be distinguished from one another and increasing the diversity of abilities, interests, and perspectives among humans. Some students may recognize that genetic variation also benefits the species because it is the basis for evolution by natural selection. Students will consider this aspect of variation in Activity 2, The Meaning of Genetic Variation.
Expect students to recognize that just as being different from one another has advantages, it also has disadvantages. For example, genetic variation makes successful tissue and organ transplants more difficult to accomplish than if we were all genetically identical. Students also may note that the existence of real (or perceived) differences among members of a population can allow prejudice and discrimination to exist.
You may wish to point out that research reveals that more variation exists within populations than between them (Figure 15). As noted in Understanding Human Genetic Variation, an examination of human proteins demonstrated that about 90 percent of all variation occurred within populations, whereas only 10 percent occurred between populations. That is, we are more "like" people with other ethnic or geographic origins than we might think.
These open-ended questions invite students to step back from the activity's details to consider its broader implications. Another way to invite such reflection is to ask students to identify the most important or the most interesting idea they learned as a result of completing the activity.
9. Invite students to summarize the activity's major concepts by asking, "What has this activity illustrated about how one human compares with another human? What has it illustrated about human variation in general?"
Expect students to recognize that humans share many traits. Students also may note that there is a wide range of variation in human traits and one does not have to consider very many traits before a given person's uniqueness is demonstrated. Students should point out that some traits can be changed by human intervention and some cannot, and that although some traits are genetic and others are environmental, most human traits reflect an interaction between genetic and environmental factors (that is, most are multifactorial). You may wish to introduce the term "multifactorial" at this point; students will study multifactorial traits in more detail in Activity 4, Are You Susceptible?
Be sure that students generalize their responses to focus on variation in populations, not variation simply between themselves and their partners. Point out that the concept of variation in populations will reappear in different, but less obvious, ways in the other activities in this module.
This activity introduces students to several ideas that you may wish them to explore in more depth. For example, assign students to use their textbooks to identify the biological mechanisms that lead to and maintain diversity in populations.
Alternatively, ask students to list some of the advantages and disadvantages of genetic variation in nonhuman populations. Invite them to locate and report on cases where scientists are concerned that it may be diminishing (for example, in domesticated crops and in populations of endangered species being maintained in zoos and other protected settings).
Finally, to extend the discussion of the multifactorial nature of most human traits, challenge students to suggest ways that scientists might investigate the relative contributions that heredity and the environment make to such traits (for example, twin studies or studies of adopted children in relation to their adoptive and biologic parents).
The exploration of Mars has been an important part of the space exploration programs of the Soviet Union (later Russia), the United States, Europe, and Japan. Dozens of robotic spacecraft, including orbiters, landers, and rovers, have been launched toward Mars since the 1960s. These missions were aimed at gathering data about current conditions and answering questions about the history of Mars. The questions raised by the scientific community are expected to not only give a better appreciation of the red planet but also yield further insight into the past, and possible future, of Earth.
The exploration of Mars has come at a considerable financial cost, and roughly two-thirds of all spacecraft destined for Mars have failed before completing their missions, some before their observations could even begin. Such a high failure rate can be attributed to the complexity and large number of variables involved in an interplanetary journey, and has led researchers to jokingly speak of "The Great Galactic Ghoul," which subsists on a diet of Mars probes. This phenomenon is also informally known as the "Mars Curse." As of June 2009, there are two functioning pieces of equipment on the surface of Mars beaming signals back to Earth: the Spirit rover and the Opportunity rover.
The planet Mars
Mars has long been the subject of human fascination. Early telescopic observations revealed color changes on the surface, which were originally attributed to seasonal vegetation, as well as apparent linear features, which were ascribed to intelligent design. These early and erroneous interpretations led to widespread public interest in Mars. Further telescopic observations found Mars' two moons, Phobos and Deimos; the polar ice caps; and the feature now known as Olympus Mons, the solar system's tallest mountain. These discoveries piqued further interest in the study and exploration of the red planet. Mars is a rocky planet, like Earth, that formed around the same time, yet with only half the diameter of Earth and a far thinner atmosphere it has a cold and desert-like surface. It is notable, however, that although Mars has only about one quarter of the surface area of Earth, it has roughly the same land area, since only about a quarter of Earth's surface is land.
In order to understand the history of the robotic exploration of Mars, it is important to note that minimum-energy launch windows occur at intervals of 2.135 years, i.e. 780 days (the planet's synodic period with respect to Earth). This is a consequence of the Hohmann transfer orbit for minimum-energy interplanetary transfer, and it means that successive launch opportunities come around roughly every 26 months.
Like the outbound launch windows, minimum energy inbound (Mars to Earth) launch windows also occur at intervals of 780 (Earth) days.
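The 780-day figure follows from the standard synodic-period relation, 1/T_syn = 1/T_Earth − 1/T_Mars. As a rough illustrative check (using approximate sidereal orbital periods, not figures from this article):

```python
# Rough check of the ~780-day Mars launch-window spacing (synodic period).
# Orbital periods are approximate sidereal values in Earth days.
T_EARTH = 365.25
T_MARS = 686.98

# 1/T_syn = 1/T_inner - 1/T_outer for two bodies orbiting the same star.
synodic_days = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)

print(f"Synodic period: {synodic_days:.0f} days "
      f"({synodic_days / T_EARTH:.3f} years)")
# Prints roughly: Synodic period: 780 days (2.135 years)
```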
In addition to these minimum-energy trajectories, which occur when the planets are aligned so that the Earth to Mars transfer trajectory goes halfway around the sun, an alternate trajectory which has been proposed goes first inward toward Venus orbit, and then outward, resulting in a longer trajectory which goes about 360 degrees around the sun ("opposition-class trajectory"). Although this transfer orbit takes longer, and also requires more energy, it is sometimes proposed as a mission trajectory for human missions.
Early flyby probes and orbiters
Early Soviet missions
The Marsnik program was the first Soviet unmanned interplanetary exploration program. It consisted of two flyby probes launched towards Mars in October 1960, Marsnik 1 and 2, dubbed Mars 1960A and Mars 1960B (also known as Korabl 4 and Korabl 5, respectively). After launch, the third-stage pumps on both Marsnik launchers were unable to develop enough thrust to commence ignition, so Earth parking orbit was not achieved. The spacecraft reached an altitude of 120 km before reentry.
Mars 1962A, a Mars fly-by mission launched on October 24, 1962, and Mars 1962B, a lander mission launched in late December of the same year, both failed: each either broke up while going into Earth orbit or had its upper stage explode in orbit during the burn to put the spacecraft onto the Mars trajectory.
Mars 1 (1962 Beta Nu 1), an automatic interplanetary station launched to Mars on November 1, 1962, was the first probe of the Soviet Mars probe program. Mars 1 was intended to fly by the planet at a distance of about 11,000 km and take images of the surface, as well as send back data on cosmic radiation, micrometeoroid impacts and Mars' magnetic field, radiation environment, atmospheric structure, and possible organic compounds. Sixty-one radio transmissions were held, initially at two-day intervals and later at five-day intervals, during which a large amount of interplanetary data was collected. On 21 March 1963, when the spacecraft was 106,760,000 km from Earth on its way to Mars, communications ceased due to failure of the spacecraft's antenna orientation system.
In 1964, both Soviet probe launches, Zond 1964A on June 4 and Zond 2 on November 30 (part of the Zond program), resulted in failures. Zond 1964A failed at launch, while communication with Zond 2 was lost en route to Mars after a mid-course maneuver in early May 1965.
The USSR intended to have the first artificial satellite of Mars, beating the planned American Mariner 8 and Mariner 9 Martian orbiters. But on May 5, 1971, Cosmos 419 (Mars 1971C), a heavy probe of the Soviet Mars probe program M-71, failed on launch. This spacecraft was designed as an orbiter only, while the second and third probes of project M-71, Mars 2 and Mars 3, were multi-aimed combinations of orbiter and lander.
In 1964, NASA's Jet Propulsion Laboratory made two attempts at reaching Mars. Mariner 3 and Mariner 4 were identical spacecraft designed to carry out the first flybys of Mars. Mariner 3 was launched on November 5, 1964, but the shroud encasing the spacecraft atop its rocket failed to open properly, and it failed to reach Mars. Three weeks later, on November 28, 1964, Mariner 4 was launched successfully on a 7½-month voyage to the red planet.
Mariner 4 flew past Mars on July 14, 1965, providing the first close-up photographs of another planet. The pictures, gradually played back to Earth from a small tape recorder on the probe, showed lunar-type impact craters.
NASA continued the Mariner program with another pair of Mars flyby probes, Mariner 6 and 7, at the next launch window. These probes reached the planet in 1969. During the following launch window the Mariner program again suffered the loss of one of a pair of probes. Mariner 9 successfully entered orbit about Mars, the first spacecraft ever to do so, after the launch time failure of its sister ship, Mariner 8. When Mariner 9 reached Mars, it and two Soviet orbiters (Mars 2 and Mars 3, see Mars probe program below) found that a planet-wide dust storm was in progress. The mission controllers used the time spent waiting for the storm to clear to have the probe rendezvous with, and photograph, Phobos. When the storm cleared sufficiently for Mars' surface to be photographed by Mariner 9, the pictures returned represented a substantial advance over previous missions. These pictures were the first to offer evidence that liquid water might at one time have flowed on the planetary surface.
The first probes to impact and land on Mars were the Soviet Union's Mars 2 and Mars 3, launched as part of the Mars probe program M-71 in 1971. The Mars 2 and Mars 3 probes each carried a lander. The Mars 2 lander crashed on the surface, while the Mars 3 lander achieved the first soft landing on Mars and was able to send data and an image from the surface for about 20 seconds of operation before contact was lost.
The high failure rate of missions launched from Earth attempting to explore Mars has become informally known as the Mars Curse. The Galactic Ghoul is a fictional space monster that consumes Mars probes, a term coined in 1997 by Time Magazine journalist Donald Neff.
Of 38 launches from Earth in an attempt to reach the planet, only 19 succeeded, a success rate of 50%. Twelve of the missions included attempts to land on the surface, but only seven transmitted data after landing. The majority of the failed missions occurred in the early years of space exploration and can be explained by human error and technical failure. Modern missions have an improved success rate; however, the challenge, complexity and length of the missions make it inevitable that failures will occur.
The U.S. NASA Mars exploration program has had a somewhat better record of success in Mars exploration, achieving success in 13 out of 18 missions launched (a 72% success rate), and succeeding in six out of seven (an 86% success rate) of the launches of Mars landers.
Many people have long advocated a manned mission to Mars as the next logical step for a manned space program after lunar exploration. Aside from the prestige such a mission would bring, advocates argue that humans would easily be able to outperform robotic explorers, justifying the expense. Critics contend, however, that robots can perform better than humans at a fraction of the cost. A list of manned Mars mission proposals can be found under Manned mission to Mars.
Timeline of Mars exploration
Published - July 2009
http://www.airports-worldwide.com/articles/article0134.php | 13 |
23 | The point P(x,y) lies on the line 7y=x+23 and is 5 units from the point (2,0). Calculate the co-ordinates of the two possible positions of P.
I am not clear whether the 5 units refer to the x-axis or the y-axis.
How you identify a straight line is by checking the powers of x and y.
If they are both 1, that's a line,
because you can rearrange it to y=mx+c form.
this is a line passing through
You need to draw a picture (can be a rough sketch) of the axes.
Draw the line roughly.
Take (2,0) as a circle centre and draw a circle of radius 5.
See it now?
If not, you could try to find the point on 7y= x+ 23 closest to (2, 0). A line from (2, 0) to that nearest point will be perpendicular to the line and so will give two right triangles with hypotenuse equal to 5 and one leg equal to the distance from (2,0) to that "nearest" point. You can use the Pythagorean theorem to find the other leg and then use that to find the coordinates of the two points. That will, I suspect, involve a lot harder algebra than the "equation of the circle" method so I suggest you use the equation of the circle.
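One way to finish the algebra: substitute x = 7y − 23 into the circle equation (x − 2)² + y² = 25 and solve the resulting quadratic. A short sympy check (just an illustrative sketch of the same working) confirms the two points:

```python
# Illustrative check: intersect the line 7y = x + 23 with the circle of
# radius 5 centred at (2, 0).  Requires sympy (pip install sympy).
from sympy import symbols, Eq, solve

x, y = symbols("x y")
line = Eq(7 * y, x + 23)                # 7y = x + 23
circle = Eq((x - 2) ** 2 + y ** 2, 25)  # distance from (2, 0) equals 5

print(solve([line, circle], (x, y)))
# -> [(-2, 3), (5, 4)]  -- the two possible positions of P
```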
Sorry dude, I have not learnt circles yet... I asked if the line was a curve earlier on because he told me it was a circle and so I was puzzled that a straight line graph could make a circle...
I asked again the second time because I suspected that this was under the topic, circles. I am really sorry about that.
And to your question, I do not know the equation...
I didn't tell you that the line was a circle.
I asked if you would draw a sketch with a circle centred at (2,0).
I told you the equation was that of a straight line.
HallsofIvy showed you how to work it out using triangles.
The point (2,0) is on the x-axis.
There are two points on the line for which you wrote the equation, that are 5 units away from (2,0).
If you have a wheel 5 units radius,
isn't every point on the wheel circumference 5 units from the centre ?
The circle is a geometric shape that can be used to visualise the problem.
First make sure you have a picture of the geometry before proceeding. | http://mathhelpforum.com/algebra/124224-alien-question.html | 13 |
10 | Although most meteorite research is performed in a laboratory, fieldwork is the only way to collect meteorites and to study the effects of impacts.
Finding meteorites is not easy. Meteoriticists scour the Earth looking for these rare rocks. Large distances are covered, often on foot and staring at the ground, just to find one or two meteorites. Most are found in hot or cold deserts, where they erode more slowly and are less likely to be obscured by vegetation. Antarctica is the best place to find meteorites. This is because meteorites become buried in and preserved by the ice. This ice is continually moving. When it hits an obstacle, like the Transantarctic Mountains, it gets pushed upwards and reveals the meteorites. A number of scientists from the Natural History Museum have been to the Antarctic to find meteorites.
Studying the effects of meteorite impacts also involves getting out into the field. There are around 140 meteorite craters exposed at Earth's surface. Geologists study the structure and rocks of these craters to try to understand how they were made. Geophysicists can also use Earth's magnetic field and seismic waves to investigate craters that have been covered up over time.
When a meteorite crashes to Earth, the impact causes debris to be thrown up. This falls as layers of dust for miles around. Meteoriticists can study these layers to learn about the effect that the meteorite impact had on its surroundings. Dust thrown up from very large impacts can be thrown into the atmosphere and travels all over the world. By studying this we can learn about the effect that meteorites can have on the whole planet. | http://www.nhm.ac.uk/print-version/?p=/nature-online/space/meteorites-dust/studying-meteorites/field/index.html | 13 |
11 | Sep. 11, 2005 Using primitive meteorites called chondrites as their models, earth and planetary scientists at Washington University in St. Louis have performed outgassing calculations and shown that the early Earth's atmosphere was a reducing one, chock full of methane, ammonia, hydrogen and water vapor.
In making this discovery Bruce Fegley, Ph.D., Washington University professor of earth and planetary sciences in Arts & Sciences, and Laura Schaefer, laboratory assistant, reinvigorate one of the most famous and controversial theories on the origins of life, the 1953 Miller-Urey experiment, which yielded organic compounds necessary to evolve organisms.
Chondrites are relatively unaltered samples of material from the solar nebula. According to Fegley, who heads the University's Planetary Chemistry Laboratory, scientists have long believed them to be the building blocks of the planets. However, no one has ever determined what kind of atmosphere a primitive chondritic planet would generate.
"We assume that the planets formed out of chondritic material, and we sectioned up the planet into layers, and we used the composition of the mix of meteorites to calculate the gases that would have evolved from each of those layers," said Schaefer. "We found a very reducing atmosphere for most meteorite mixes, so there is a lot of methane and ammonia."
In a reducing atmosphere, hydrogen is present but oxygen is absent. For the Miller-Urey experiment to work, a reducing atmosphere is a must. An oxidizing atmosphere makes producing organic compounds impossible. Yet a major contingent of geologists believe that a hydrogen-poor, carbon dioxide-rich atmosphere existed, because they use modern volcanic gases as models for the early atmosphere. Volcanic gases are rich in water, carbon dioxide, and sulfur dioxide but contain no ammonia or methane.
"Geologists dispute the Miller-Urey scenario, but what they seem to be forgetting is that when you assemble the Earth out of chondrites, you've got slightly different gases being evolved from heating up all these materials that have assembled to form the Earth. Our calculations provide a natural explanation for getting this reducing atmosphere," said Fegley.
Schaefer presented the findings at the annual meeting of the Division of Planetary Sciences of the American Astronomical Society, held Sept. 4-9 in Cambridge, England.
Schaefer and Fegley looked at different types of chondrites that earth and planetary scientists believe were instrumental in making the Earth. They used sophisticated computer codes for chemical equilibrium to figure out what happens when the minerals in the meteorites are heated up and react with each other. For example, when calcium carbonate is heated up and decomposed, it forms carbon dioxide gas.
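The full calculations require specialised chemical-equilibrium codes, but a toy example shows the kind of reasoning involved. Using rough textbook thermodynamic values (an assumption for illustration only, not figures from the study), carbonate decomposition becomes favourable once the Gibbs energy change ΔG = ΔH − TΔS turns negative:

```python
# Toy illustration only: estimate the temperature above which
# CaCO3 -> CaO + CO2 becomes thermodynamically favourable, using
# approximate standard-state textbook values (treated as constant
# with temperature, which is a simplification).
DELTA_H = 178.3e3   # J/mol, standard enthalpy change (approx.)
DELTA_S = 160.6     # J/(mol*K), standard entropy change (approx.)

# Decomposition is favoured when dG = dH - T*dS < 0.
T_threshold = DELTA_H / DELTA_S
print(f"CaCO3 decomposition favoured above ~{T_threshold:.0f} K "
      f"(about {T_threshold - 273.15:.0f} deg C)")
# ~1110 K, i.e. roughly 840 deg C
```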
"Different compounds in the chondritic Earth decompose when they're heated up, and they release gas that formed the earliest Earth atmosphere," Fegley said.
The Miller-Urey experiment featured an apparatus into which was placed a reducing gas atmosphere thought to exist on the early Earth. The mix was heated up and given an electrical charge and simple organic molecules were formed. While the experiment has been debated from the start, no one had done calculations to predict the early Earth atmosphere.
"I think these computations hadn't been done before because they're very difficult; we use a special code" said Fegley, whose work with Schaefer on the outgassing of Io, Jupiter's largest moon and the most volcanic body in the solar system, served as inspiration for the present early Earth atmosphere work.
NASA's Astrobiology Institute supported the Washington University research. Fegley is a member of the National Aeronautics and Space Administration's Goddard Astrobiology team.
http://www.sciencedaily.com/releases/2005/09/050911103921.htm | 13 |
36 | Kuiper Belt Objects: solution to short-period comets?
Have recent ‘Kuiper Belt’ discoveries solved the evolutionary/long-age dilemma?
Recently, astronomers have discovered that several KBOs (‘Kuiper Belt Objects’) are binary—they consist of two co-orbiting masses. What are the implications for Creation?
Comets—icy masses that orbit the sun in elliptical paths—are one of many evidences that the solar system is much younger than billions of years. Every time a comet passes near the sun, it loses some of its icy material to evaporation. This stream of lost material is what gives rise to the characteristic comet tail. A comet can only survive a certain number of orbits before it runs out of material completely.1 If the solar system were billions of years old, there should be no comets left. This is explained in detail in Dr Danny Faulkner’s article Comets and the Age of the Solar System.
Evolutionary astronomers, who assume the solar system is billions of years old, must propose a ‘source’ that will supply new comets as old ones are destroyed. The Kuiper Belt2 is one such proposed source for short-period comets (comets that take less than 200 years to orbit the sun). The Kuiper belt is a hypothetical massive flattened disc of billions of icy planetesimals supposedly left over from the formation of the solar system. (The other proposed source is the Oort Cloud,3 which we have already addressed—see More problems for the Oort cloud.)
These planetesimals are assumed to exist in (roughly) circular orbits in the outer regions of the solar system—beyond Neptune (extending from 30 AU4 out to around 100 AU). It is thought that these objects are occasionally disturbed by gravitational interactions and are sent hurtling into the inner solar system to become short-period comets. In this fashion, new comets supposedly are injected into the inner solar system as old ones are depleted.
Astronomers have detected a number of small objects beyond the orbit of Neptune. The term ‘Kuiper Belt Object’ (KBO) is being applied to these objects. The first of these5 was discovered in 1992, and many more have now been detected. What are we to make of these discoveries? Do these objects confirm the existence of a ‘Kuiper Belt’ as the evolutionists were expecting?
There is no reason to expect that the solar system would end abruptly at Pluto’s orbit, or that minor planets could not exist beyond the orbit of Neptune. Many thousands of asteroids exist in the inner solar system, so we should not be surprised that some objects have been discovered beyond the orbits of Neptune and Pluto.6 Several hundred of these ‘KBOs’ have now been observed.7 But a Kuiper Belt would need around a billion icy cores in order to replenish the solar system’s supply of comets. It remains to be seen whether KBOs exist in such abundance. Currently, this is merely an evolutionary speculation.
It should also be noted that the observed KBOs are much larger than comet nuclei. The diameter of the nucleus of a typical comet is around 10 kilometers. However, the recently discovered KBOs are estimated to have diameters ranging from about 100 to 500 kilometers.8 This calls into question the idea that these objects are precursors of short-period comets. So, the discovery of objects beyond Neptune does not in any way confirm a Kuiper Belt—at least not the kind of Kuiper Belt that evolutionary astronomers require. As such, the term ‘Kuiper Belt Object’ is a bit misleading. ‘Trans-Neptunian Object’ (TNO) would be a more descriptive term for these distant minor planets—and many astronomers use these terms (TNO and KBO) interchangeably.
Interestingly, astronomers have recently discovered that several TNOs are binary.9 That is, they consist of two objects in close proximity; these orbit each other as they orbit the sun. The tremendous controversy on the (evolutionary) origin of Earth’s moon (see The Moon: The light that rules the night) highlights the difficulty of forming (by random processes) two co-orbiting masses. Currently, giant impacts are being invoked to explain the origin of Earth’s moon as well as Pluto’s moon Charon. But these involve unlikely ‘chance’ collisions at precise angles and have other difficulties as well. Yet, we are finding that binary objects are far more common than previously thought.10 Might this point to a Creative Designer?
Some astronomers would classify Pluto as a (particularly large) Trans-Neptunian Object. Indeed, Pluto may have far more in common with TNOs than it has with the other eight planets—such as its icy composition and its orbital properties. In fact, a substantial fraction of the newly discovered TNOs have an orbital period nearly identical to that of Pluto.11 These are called ‘Plutinos’ (little Plutos). So, while Pluto is a dwarf among planets, it may be ‘King’ of the TNOs. Since Pluto’s moon Charon is so large (relative to Pluto), Pluto is often considered a binary system. As such, Pluto could be considered not only the largest TNO, but the largest binary TNO as well. As these new discoveries continue to pour in, Creationists should delight in the marvellous complexity and structure of the universe God has created.
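As a quick back-of-the-envelope check, Kepler's third law (orbital period in years ≈ a^(3/2) with the semi-major axis a in AU, for objects orbiting the sun) reproduces the period clustering described in note 11 below:

```python
# Quick check of the Pluto/Plutino 2:3 resonance with Neptune using
# Kepler's third law: T (years) = a**1.5 with a in AU for solar orbits.
def period_years(a_au: float) -> float:
    return a_au ** 1.5

t_neptune = period_years(30.07)   # Neptune, ~165 years
t_pluto = period_years(39.48)     # Pluto, ~248 years

print(f"Neptune: {t_neptune:.0f} yr, Pluto: {t_pluto:.0f} yr, "
      f"ratio {t_pluto / t_neptune:.2f} (close to 3/2)")
```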
References and notes:
- Gravitational encounters with the planets can also deplete comets. A comet might be ejected from the solar system or (more rarely) collide with a planet. Return to text.
- The Kuiper belt is named after Gerard Kuiper who proposed its existence in 1951. Return to text.
- In evolutionary thinking, a spherical ‘Oort cloud’ is supposed to explain the existence of long-period comets. Creationists would not be surprised to find some objects at that distance, but (as with the Kuiper Belt) we would question whether there are enough objects to explain the origin of long-period comets. Currently, there is no evidence whatsoever of a massive Oort cloud. Moreover, there is tremendous difficulty in forming an Oort cloud of sufficient mass (through natural processes) in the first place! Hence, long-period comets also present a serious challenge to a multi-billion year old solar system. Return to text.
- An AU (Astronomical Unit) is the average distance from the Earth to the sun. It is roughly equal to 150 million kilometers or 93 million miles. Neptune orbits the sun at 30 AU. Pluto’s distance from the Sun varies in its orbit from about 30 AU to 50 AU with an average distance of around 40 AU. Return to text.
- An object named ‘1992 QB1’ was the first KBO (or TNO) to be discovered (besides Pluto and Charon, if they are counted). Its orbital period is computed to be 296 years. Return to text.
- A handful of small objects exist in between the orbits of Jupiter and Neptune. These are called Centaurs. Chiron, for example orbits between Saturn and Uranus. Chiron was originally classified as an asteroid, but it now appears that its composition is icy — like a comet. Centaurs are not nearly as plentiful as TNOs; the proximity of the giant planets would tend to make such orbits unstable. Return to text.
- Nearly 600 KBOs have been discovered as of May 2002. Undoubtedly, more TNOs will be discovered. Recent observations suggest that these objects may taper off rather abruptly at 50 AU — and not extend to 100 AU as originally thought. See The Edge of the Solar System, 24 October 2000. Return to text.
- If such a large object were to fall into the inner solar system, it would make a very impressive comet! Alas, no observed comets have been this large. A particularly large KBO (named 2001 KX76) was recently discovered. It is over 1,000 km across—about the size of Pluto’s moon Charon. See The Kuiper Belt, Spacetech’s Orerry. Return to text.
- Seven binary TNOs have been discovered as of May 2002. See Distant EKOs: The Kuiper Belt Electronic Newsletter 22, March 2002. Return to text.
- Many asteroids are now known to be binary as well. Beattie, J.K., Asteroid Chasers Are Seeing Double, Sky and Telescope. Return to text.
- These Plutinos orbit the sun at an average distance of about 40 AU with a period of 248 years—the same as Pluto. This is no coincidence; this orbital period is particularly stable because it is a 2:3 resonance with Neptune. Pluto and the Plutinos orbit the sun twice for every three orbits of Neptune. Return to text. | http://creation.com/kuiper-belt-objects-solution-to-short-period-comets | 13 |
13 | Geometry: Castles and Shadows. Content Guide - Andee Rubin
Supplies Needed for Workshop #3:
pencils, paper, scissors, rulers, calculators, tape, a variety of colored markers
About the Workshop
What is the theme of the workshop?
For some people, geometry was the most frightening part of mathematics; for others, it was the only part that made sense. We will try to shed some new light on the study of geometry by inviting you to use your hands and eyes to explore both familiar and unusual geometric objects.
Whom do we see? What happens in the videoclips?
We'll see students at a range of grade levels investigating relationships between two-and three-dimensional objects. In all of these classrooms, we'll see students working with their hands: building three-dimensional models, cutting and gluing paper shapes, making perspective drawings, and folding flat paper into solid objects.
What issues does this workshop address?
One issue we will explore is how teachers help students who are having trouble with a geometric task. What is the balance between helping students develop independence and guiding them - sometimes subtly, sometimes more explicitly - toward the mathematical goals of the lesson?
What teaching strategy does this workshop offer?
In two of the videos, the teachers have asked students to work in groups and then to report their work to the whole class. We will consider how teachers might comment on student work in these situations to make the experience meaningful both for the presenters and for the rest of the class.
To which NCTM Standards does this workshop relate?
In grades 1-4, this workshop is related to Standard 9: Geometry and Spatial Sense, and in grades 5-8 to Standard 12: Geometry. Both standards stress description, modeling, and comparison of geometric shapes; exploring the results of transformations on these shapes; and, in general, the development of spatial sense. You will also see Standard 1: Mathematics as Problem Solving and Standard 3: Mathematics as Reasoning in action.
Suggested Classroom Activities
Design Your Own Instructions
Each student or group of students builds an object (e.g., a building) out of interlocking cubes. They then write and/or draw instructions for making the object and pass them on to another group. The second group follows the instructions and compares the results with the original object. This activity focuses on mathematics as communication and encourages students to think about the meaning of mathematical vocabulary.
More on Silhouettes
Each group of students builds a building with interlocking cubes, then draws the front, right, and top silhouettes. They trade their silhouettes with another group of students, who tries to reconstruct the original building. Interesting questions to discuss: Is there more than one building that fits a given set of silhouettes? What determines whether a set of silhouettes will fit only one building? How many buildings can you make that will have all three silhouettes look the same?
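For teachers who want to explore the uniqueness question computationally, a short, purely illustrative script (the activity itself needs only cubes and paper) can compute the three silhouettes of a building described as a grid of column heights:

```python
# Illustrative helper: compute the top, front, and right silhouettes of a
# cube building described as a grid of column heights (rows = depth).
def silhouettes(heights):
    top = [[1 if h > 0 else 0 for h in row] for row in heights]
    front = [max(row[c] for row in heights) for c in range(len(heights[0]))]
    right = [max(row) for row in heights]
    return top, front, right

# Two different buildings...
a = [[2, 1],
     [1, 2]]
b = [[2, 2],
     [2, 2]]
print(silhouettes(a))
print(silhouettes(b))
# ...share all three silhouettes, which shows that a set of silhouettes
# need not pin down a single building.
```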
Do the pre-workshop activity with your class. Extend the activity by exploring all the nets for 2 x 1 boxes - there are a lot more. Can you (or your students) come up with a labeling system that makes it easy to determine whether any two nets are the same?
Drawing from Memory
Make transparencies of simple designs (some examples are given below). Use an overhead projector to show students one of the designs for a few seconds, then take it away and ask them to draw it. Put the design up again for a few seconds, then remove it and let students revise or complete their drawing. Finally, show the design once more - leaving it up this time - so that everyone can check their work. Ask students to describe how they remembered the picture: What shapes did they see? How did each drawing relate to the previous one? This activity can be done as early as kindergarten with very simple two-dimensional figures; it can be challenging for middle school students if the drawings are of three-dimensional objects.
Suggested Strategies
All of the lessons we will see in the videos require additional materials other than the standard pencils, paper, rulers, etc. Consider how you use different kinds of materials in your classroom. How do they affect students' learning? What, if any, classroom management issues can arise when you use special materials, and how might you deal with them?
- Marco Ramirez spends a long time with one student clarifying the meaning of the word "side." How might you have handled the same situation?
- Language plays a major role in Marco Ramirez's bilingual classroom. In addition to the discussion referred to in Question 1, there are several situations in which Marco Ramirez encourages students to connect language and mathematics. For example, as students present shapes, he labels the shapes with their formal names. When students create an unusual shape, he allows them to name it (e.g., a Z with 2 heads). He encourages students to write the names of shapes in the best way they can, even if they don't know the exact spelling. How do you react to these techniques? Would you use them in your classroom? More generally, what do you think about the issue of mathematical vocabulary?
- There is great variety in the mathematical sophistication with which students in Nan Sepada's classroom classify the hexominoes. While we do not see her offer comments to any of the groups, how do you think she might have responded? What would you have done? In particular, how would you have responded to the group that used letters of the alphabet to classify the shapes?
Pre-Workshop Assignment for Workshop #4
Please conduct the following survey:
There are two groups of rectangles: Group 1 and Group 2. Show both groups to 20 people, and ask them to select one rectangle from each group which is the best looking or most pleasing. Record their responses on the tally chart (p. 52). You may also make a note of the age range of your respondents. Bring your results with you to the workshop.
There may be some surprises in your data. During the workshop, you can see how your results compare to results from teachers in other parts of the country.
Survey Directions
Print out the Survey Grid and rectangles below. Show subjects the two groups of rectangles (Group 1 and Group 2). Ask them which rectangle in each group is the best looking or most pleasing. Mark one rectangle per group for each person you survey. Tally the results and enter them under Totals.
Survey Grid for recording results. | http://learner.org/workshops/math/work_3.html | 13 |
19 | Knowledge base: Congestion and contention
The phrases congestion and contention are used to describe the way a communications link is used, and they have subtly different meanings.
Congestion is what happens when a link is full up. Basically it means that the link cannot handle any more traffic than it is already carrying, so as more traffic tries to use the link it is delayed or dropped. The way the Internet works is that when a link is full there is a buffer (or queue) of packets waiting to be sent on the link. When that queue is full, additional packets are thrown away. These two effects are called latency and loss.
Congestion is not always a bad thing. Wherever there is a link between two computers, the computers will try to fill the link between them to the limit. There will always be a limit of some sort, even on a direct cable, so the fact that the link is full is to be expected and is quite normal. This is because of the way TCP works. TCP is the main transport protocol used for transferring bulk information over the Internet. TCP uses packet loss to tell it that a link is full and to back off a bit.
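That "back off a bit" behaviour is often described as additive increase, multiplicative decrease. A much-simplified sketch (not how any real TCP stack is implemented) shows how a sender's rate settles around the link capacity:

```python
# Very simplified sketch of TCP-style additive-increase/multiplicative-decrease:
# the sender speeds up until the link is full (loss), then backs off.
LINK_CAPACITY = 100.0   # arbitrary units
rate = 10.0

for step in range(50):
    if rate > LINK_CAPACITY:      # queue overflows -> packet loss
        rate *= 0.5               # multiplicative decrease (back off)
    else:
        rate += 5.0               # additive increase (probe for more)

print(f"sending rate after 50 steps: ~{rate:.0f}")
# The rate oscillates around the capacity -- the link runs essentially full.
```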
The way TCP works also applies to any shared link. A shared link is, again, quite normal, and you could call the Internet one big shared link. At the end of the day there will always be some bottleneck in the path across the Internet somewhere. Ideally it is the link at your end and not elsewhere on the Internet, but the bottleneck when fetching a web site could just as easily be at the web site itself.
In an ideal world shared links will be managed to avoid congestion. If you have an Internet service with lots of customers connected, no single user can demand more than their own link speed, and on average not everyone is transferring data at the same time, even if each wants their full link speed when they are. As such, the shared links in the Internet do not have to be big enough for every user to transfer at full speed all of the time.
This is no different from any other network, like water, gas or electricity. If everyone in a street ran their taps at full speed they would find the flow slows to a trickle. This is simply because people don't do that, so the big pipes in the street do not need enough capacity to handle it. Internet access is like that. It does not have to be, obviously. Shared links could be built to handle the capacity of all lines at once - the problem is (just like having huge water pipes and pumping stations) that it would cost a lot. And someone else can provide the same uncongested service by not doing that, and so at a lower price. Having extra spare unused capacity does not improve the service - if a link is not full, i.e. not congested, then it is the same regardless of how not full it is, i.e. how much extra is available.
So, at the end of the day, managing congestion is about statistics. It is about having shared links which are not normally full, and if they get full then buying more capacity.
That said, there are commercial reasons to run links full - it is cheaper and allows a cheaper service to be offered. If the slowdown when the link gets full is small (with lots of people sharing, each only sees a little less than they want), then that is a viable service to offer because it comes at a better price. The difference between Internet service providers can come down to how those ISPs manage full links. In some cases, the reason links get full may be very specific types of traffic, and clever systems to slow down just some customers mean everyone else gets to use an uncongested service. Again, how this is done is down to the ISP you choose. AAISP do not have protocol-specific shaping and we aim not to be the bottleneck, and as a result we cost a bit more. Even so, we can never completely eliminate all congestion from our network, as there is always a chance everyone will try to download something at the same time, or at least more people than we planned for.
Contention is often confused with congestion. Contention is about the planning rules for shared links. Basically, a contention ratio is a ratio of possible demand against total capacity. So a ratio of 50:1 means there could be 50 times as much demand for usage on a shared link as it actually has available to use.
There are several problems with a contention ratio being quoted. Normally a contention ratio only makes sense when used as a planning rule - it allows you to know where to start on a new shared link and work out costs. It can be wrong - you could find the actual usage fills the shared link, which means you need to revise your planning rules. It could be that you don't care the link gets full - you sell different services based on different planning rules at different costs. This is all a commercial decision for the ISP. Unfortunately people try to use contention ratios as a way of comparing ISPs, which simply does not work...
- A contention ratio covers only a specific link - the Internet as a whole has lots of shared links, and for most of them it is impossible to even calculate the total possible demand so as to work out a ratio. E.g. if you have a web server, it is possible every single person on the planet may want to get your web site all at the same time - so what is the total of all other end links speeds? That makes even a gigabit link to a web server an incredibly high contention ratio.
- A contention ratio does not tell you the size of the pipes involved. There is a huge difference between small pipes with few end users and large pipes with lots of end users. E.g. if you have a 2M link and two of you share a 2M back-haul link (a good-sounding 2:1 ratio), you will get half the speed if the other person is also using their link, and that is quite likely. If you are one of 1000 people with a 2M link and you all share a 40M pipe (50:1 ratio), that will be much better. Simply having that many people means that the average usage will be more stable and (ideally) less than 40M. If it is less than 40M then everyone that needs their full 2M will get it when they need it and the service is uncongested, the same as an uncontended (1:1) link.
- A contention ratio does not take into account actual usage. You could be on a 100:1 link where the average usage of many thousands of lines is below 1% of what they could use. That would then be an uncongested link, the same as 1:1. On the other hand you could be on a 10:1 link where average usage is over 10%. That would be a congested link and would be slow. Comparing the ratio without knowing what sort of end users there are and what they will be using means you cannot tell if the links will be congested or not. It means you cannot compare one ISP with another unless you know they have exactly the same type of end users, which is unlikely.
- A contention ratio tries to work on link speeds in various ways. But end users will typically use an average amount of data - e.g. residential users probably average 100Kb/s (yes, low isn't it!). If someone upgrades from 2Mb/s to 100Mb/s they will not necessarily download more. They will download it faster, but that means less time downloading. On average, with thousands of users, it looks the same whether they all have 2M links or all have 100M links. But the contention ratio on a shared link is massively different in those cases. So with link speeds changing, contention ratios do not even make sense as a planning rule.
- The Internet is constantly changing - not only the links speeds but the usage patterns. Ultimately the higher link speeds do lead to more content rich web sites and resources on the Internet and that changes average levels of download. This means the planning rules have to constantly adapt making them even less useful.
Is there a better way? Probably. You need to look at the average usage per end user and perhaps the standard deviation (which would, to some extent, take into account how many users there are and their link speeds). This would allow planning based on the number of users and typical average usage rather than a ratio. At A&A we simply aim not to be the bottleneck, and we price based on usage. If we tried pricing a fixed amount per end user we would need these planning rules a lot more.
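As a rough sketch of that kind of planning (illustrative numbers only, not A&A's actual model): if each user's demand is roughly independent, sizing the link for the mean plus a few standard deviations makes congestion rare while needing far less than the sum of all line speeds.

```python
# Rough illustration of planning by mean and standard deviation rather
# than by contention ratio.  Numbers are made up for the example.
import math

users = 1000
mean_per_user = 0.1     # Mb/s average demand per user
std_per_user = 0.5      # Mb/s standard deviation per user (bursty traffic)

mean_total = users * mean_per_user
# For roughly independent users the variances add:
std_total = math.sqrt(users) * std_per_user

# Size the shared link at mean + 3 standard deviations so that total
# demand only rarely exceeds capacity.
capacity = mean_total + 3 * std_total
print(f"mean {mean_total:.0f} Mb/s, plan for ~{capacity:.0f} Mb/s")
# -> mean 100 Mb/s, plan for ~147 Mb/s -- far less than the 2000 Mb/s
#    that 1000 users on 2 Mb/s lines could demand in principle.
```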
As a result of all of these issues we are pleased to say that BT do not quote contention ratios on their newer services. They used to quote 20:1 (business) and 50:1 (residential) on their 500K/1M/2M services. Now they too are, in effect, aiming not to be a bottleneck, as every time they are we hassle them to increase capacity in their network.
An uncongested link is the same as an uncontended (1:1) link - it means no delay or loss for your packets. So if you think you need to know the contention ratio of AAISP you are asking the wrong question. | http://aaisp.co.uk/kb-broadband-contention.html | 13 |
10 | In order to understand how the Universe has changed from its initial simple state following the Big Bang into the magnificent Universe we see as we look at the night sky, we must understand how stars, galaxies and planets form over time. The Universe is composed mostly of hydrogen and helium; in fact, these two elements make up 98% of the visible matter in the Universe. Nevertheless, our world and everything it contains–even life itself–is possible only because of the existence of heavier elements such as carbon, nitrogen, oxygen, silicon, iron, and many, many others.
How long did it take the first generations of stars to seed our Universe with the heavy elements we see on Earth today? When in the history of the Universe was there a sufficient supply of heavy elements to allow the formation of prebiotic molecules and terrestrial-like planets upon which those molecules might combine to form life?
Our big question: "How did the universe originate and evolve to produce the galaxies, stars and planets we see today?"
The Herschel Space Observatory is a space-based telescope that will study the Universe by the light of the far-infrared and submillimeter portions of the spectrum. It is expected to reveal new information about the earliest, most distant stars and galaxies, ...
Launched: May 14, 2009 | Status: Operating
Hubble Space Telescope (HST)
Hubble Space Telescope (HST), an ultraviolet, visible and infrared orbiting telescope, has expanded our understanding of star birth, star death, and galaxy evolution, and has helped move black holes from theory to fact. It has recorded over 100,000 images in ...
Launched: April 24, 1990 | Status: Operating
James Webb Space Telescope (formerly the Next Generation Space Telescope) is designed for observations in the far visible to the mid infrared part of the spectrum. JWST will probe the era when stars and galaxies started to form; it will ...
SOFIA is the world's largest airborne observatory. It studies the universe over a wide range of the electromagnetic spectrum, from the optical to the far infrared.
Spitzer Space Telescope (formerly known as SIRTF) conducts infrared astronomy from space. From an unusual heliocentric orbit designed to keep its sensitive instruments away from Earth's radiated heat, Spitzer has detected several distant objects, including several supermassive black holes, that ...
Launched: August 25, 2003 | Status: Operating | http://science.nasa.gov/about-us/smd-programs/cosmic-origins/ | 13 |
18 | March 12, 2007: When scientists announce they're about to calibrate their instruments, science writers normally put away their pens. It's hard to write a good story about calibration. This may be the exception:
On Feb. 25, 2007, NASA scientists were calibrating some cameras aboard the STEREO-B spacecraft and they pointed the instruments at the sun. Here is what they saw:
"What an extraordinary view," says Lika Guhathakurta, STEREO Program Scientist at NASA headquarters. The fantastically-colored star is our own sun as STEREO sees it in four wavelengths of extreme ultraviolet light. The black disk is the Moon. "We caught a lunar transit of the sun," she explains.
The purpose of the experiment was to measure the 'dark current' of STEREO-B's CCD detectors. The idea is familiar to amateur astronomers: Point your telescope at something black and see how much 'dark current' trickles out of the CCD. Later, when real astrophotography is taking place, the dark current is subtracted to improve the image.
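Dark-frame correction of this kind is standard CCD practice. A generic sketch of the idea (illustrative only, not the STEREO team's actual processing pipeline) looks like this:

```python
# Generic sketch of dark-current calibration for a CCD detector
# (illustrative only -- not the actual STEREO processing pipeline).
import numpy as np

rng = np.random.default_rng(0)

# Several exposures taken with no light reaching the detector:
dark_frames = rng.poisson(lam=5.0, size=(10, 256, 256)).astype(float)

# Combine them into a master dark (median is robust to cosmic-ray hits).
master_dark = np.median(dark_frames, axis=0)

# A science exposure is then corrected by subtracting the master dark.
raw_image = rng.poisson(lam=50.0, size=(256, 256)) + master_dark
calibrated = raw_image - master_dark
print(calibrated.mean())   # close to the true signal level of ~50
```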
"The images have an alien quality," notes Guhathakurta. "It's not just the strange colors of the sun. Look at the size of the Moon; it's very odd." When we observe a lunar transit from Earth, the Moon appears to be the same size as the sunâa coincidence that produces intoxicatingly beautiful solar eclipses. The silhouette STEREO-B saw, on the other hand, was only a fraction of the sun's diameter. "It's like being in the wrong solar system."
The Moon seems small because of STEREO-B's location. The spacecraft circles the sun in an Earth-like orbit, but it lags behind Earth by one million miles. This means STEREO-B is 4.4 times further from the Moon than we are, and so the Moon looks 4.4 times smaller.
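Apparent (angular) size scales inversely with distance, so the quoted factor of 4.4 translates directly into a much smaller lunar disk against the sun. A rough illustrative check with round numbers:

```python
# Rough check: how big does the Moon look from STEREO-B compared with the sun?
# Small-angle approximation: angular size ~ diameter / distance (radians).
import math

MOON_DIAM_KM = 3474.0
SUN_DIAM_KM = 1_391_000.0
EARTH_MOON_KM = 384_400.0
SUN_DIST_KM = 149.6e6                 # ~1 AU

stereo_moon_km = 4.4 * EARTH_MOON_KM  # factor quoted in the article

moon_from_earth = math.degrees(MOON_DIAM_KM / EARTH_MOON_KM)
moon_from_stereo = math.degrees(MOON_DIAM_KM / stereo_moon_km)
sun_apparent = math.degrees(SUN_DIAM_KM / SUN_DIST_KM)

print(f"Moon from Earth:    {moon_from_earth:.2f} deg")
print(f"Moon from STEREO-B: {moon_from_stereo:.2f} deg")
print(f"Sun:                {sun_apparent:.2f} deg")
# From STEREO-B the Moon spans only about a quarter of the sun's diameter.
```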
Right: STEREO A and B orbit the sun on either side of Earth.
STEREO-B has a sister ship named STEREO-A. Both are on a mission to study the sun. While STEREO-B lags behind Earth, STEREO-A orbits one million miles ahead ("B" for behind, "A" for ahead). The gap is deliberate: it allows the two spacecraft to capture offset views of the sun. Researchers can then combine the images to produce 3D stereo movies of solar storms.
Of particular interest are coronal mass ejections (CMEs), billion ton clouds of electrified gas hurled into space by explosions on the sun. "STEREO's ability to see these clouds in 3-dimensions will revolutionize our understanding of CMEs and improve our ability to predict when they will hit Earth," she says.
The STEREO mission is still in its early stages. The two spacecraft were launched in Oct. 2006 and reached their stations on either side of Earth in January 2007. Now it's time for check-out and calibration. The first 3D views of solar storms are expected in April.
So science writers, ready your pens. If the calibration runs are any indication, the actual data will be something to write about.
http://science1.nasa.gov/science-news/science-at-nasa/2007/12mar_stereoeclipse/ | 13 |
34 | Kepler-20e is the first planet smaller than Earth discovered to orbit a star other than the sun. A year on Kepler-20e lasts only six days, as it is much closer to its host star than Earth is to the sun. The temperature at the surface of the planet, around 1,400ºF, is much too hot to support life as we know it.
Image Credit: NASA/Ames/JPL-Caltech
Navigating to alien planets similar to our own is a universal theme of science fiction. But how do our space heroes know where to find those planets? And how do they know they won’t suffocate as soon as they beam down to the surface? Discovering these Earth-like planets has taken a step out of the science fiction realm with NASA’s Kepler mission, which seeks to find planets within the Goldilocks zone of other stars: not too close (and hot), not too far (and freezing), but just right for potentially supporting life. While Kepler is only the first step on a long road of future missions that will tell us more about these extrasolar planets, or exoplanets, its own journey to launch took more than twenty years and lots of perseverance.
Looking for planets hundreds of light-years away is tricky. The stars are very big and bright, the planets very small and faint. Locating them requires staring at stars for a long time in hopes of everything aligning just right so we can witness a planet’s transit—that is, its passage in front of its star, which obscures a tiny fraction of the star’s light. Measuring that dip in light is how the Kepler mission determines a planet’s size.
The idea of using transits to detect extrasolar planets was first published in 1971 by computer scientist Frank Rosenblatt. Kepler’s principal investigator, William Borucki, expanded on that idea in 1984 with Audrey Summers, proposing that transits could be detected using high-precision photometry. The next sixteen years were spent proving to others—and to NASA—that this idea could work.
To understand how precise “high-precision” needed to be for Kepler, think of Earth-size planets transiting stars similar to our sun, but light-years away. Such a transit would cause a dip in the star’s visible light by only 84 parts per million (ppm). In other words, Kepler’s detectors would have to reliably measure changes of 0.01 percent.
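The 84 ppm figure is simply the ratio of the planet's disk area to the star's: transit depth ≈ (R_planet / R_star)². A quick illustrative check with Earth and sun radii:

```python
# Transit depth for an Earth-size planet crossing a sun-like star:
# the fractional dip in light is roughly (R_planet / R_star)**2.
R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"transit depth ~ {depth * 1e6:.0f} ppm ({depth * 100:.4f} %)")
# -> roughly 84 ppm, i.e. about 0.008 percent of the star's light
```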
Borucki and his team discussed the development of a high-precision photometer during a workshop in 1987, sponsored by Ames Research Center and the National Institute of Standards and Technology, and then built and tested several prototypes.
When NASA created the Discovery Program in 1992, the team proposed their concept as FRESIP, the Frequency of Earth-Size Inner Planets. While the science was highly rated, the proposal was rejected because the technology needed to achieve it wasn’t believed to exist. When the first Discovery announcement of opportunity arose in 1994, the team again proposed FRESIP, this time as a full mission in a Lagrange orbit.
Kepler’s focal plane consists of an array of forty-two charge-coupled devices (CCDs). Each CCD is 2.8 cm by 3.0 cm with 1,024 by 1,100 pixels. The entire focal plane contains 95 megapixels.
Photo Credit: NASA and Ball Aerospace
This particular orbit between Earth and the sun is relatively stable due to the balancing gravitational pulls of Earth and the sun. Since it isn’t perfectly stable, though, missions in this orbit require rocket engines and fuel to make slight adjustments—both of which can get expensive. Reviewers again rejected the proposal, this time because they estimated the mission cost to exceed the Discovery cost cap.
The team proposed again in 1996. “To reduce costs, the project manager changed the orbit to heliocentric to eliminate the rocket motors and fuel, and then cost out the design using three different methods. This time the reviewers didn’t dispute the estimate,” Borucki explained. “Also at this time, team members like Carl Sagan, Jill Tarter, and Dave Koch strong-armed me into changing the name from FRESIP to Kepler,” he recalled with a laugh.
The previous year, the team tested charge-coupled device (CCD) detectors at Lick Observatory, and Borucki and his colleagues published results in 1995 that confirmed CCDs—combined with a mathematical correction of systematic errors—had the 10-ppm precision needed to detect Earth-size planets.
But Kepler was rejected again because no one believed that high-precision photometry could be automated for thousands of stars. “People did photometry one star at a time. The data analysis wasn’t done in automated fashion, either. You did it by hand,” explained Borucki. “The reviewers rejected it and said, ‘Go build an observatory and show us it can be done.’ So we did.”
They built an automated photometer at Lick Observatory and radio linked the data back to Ames, where computer programs handled the analysis. The team published their results and prepared for the next Discovery announcement of opportunity in 1998. “This time they accepted our science, detector capability, and automated photometry, but rejected the proposal because we did not prove we could get the required precision in the presence of on-orbit noise, such as pointing jitter and stellar variability. We had to prove in a lab that we could detect Earthsize transits in the presence of the expected noise,” said Borucki.
This star plate is an important Kepler relic. It was used in the first laboratory experiments to determine whether charge-coupled devices could produce very precise differential photometry.
Photo Credit: NASA/Kepler Team
The team couldn’t prove it using ground-based telescope observations of stars because the atmosphere itself introduces too much noise. Instead, they developed a test facility to simulate stars and transits in the presence of pointing jitter. A thin metal plate with holes representing stars was illuminated from below, and a prototype photometer viewed the light from the artificial stars while it was vibrated to simulate spacecraft jitter.
The plate had many laser-drilled holes with a range of sizes to simulate the appropriate range of brightness in stars. To study the effects of saturation (very bright stars) and close-together stars, some holes were drilled large enough to cause pixel saturation and some close enough to nearly overlap the images.
“To prove we could reliably detect a brightness change of 84 ppm, we needed a method to reduce the light by that amount. If a piece of glass is slid over a hole, the glass will reduce the flux by 8 percent—about one thousand times too much,” Borucki explained. “Adding antireflection coatings helped by a factor of sixteen, but the reduction was still sixty times too large. How do you make the light change by 0.01 percent?
“There really wasn’t anything that could do the job for us, so we had to invent something,” said Borucki. “Dave Koch realized that if you put a fine wire across an aperture—one of the drilled holes—it would block a small amount of light. When a tiny current is run through the wire, it expands and blocks slightly more light. Very clever. But it didn’t work.”
With a current, the wire not only expanded, it also curved. As it curved, it moved away from the center of a hole, thereby allowing more light to come through, not less.
“So Dave had square holes drilled,” said Borucki. “With a square hole, when the wire moves off center, it doesn’t change the amount of light. To keep the wire from bending, we flattened it.” The results demonstrated that transits could be detected at the precision needed even in the presence of on-orbit noise.
After revising, testing, publishing, and proposing for nearly twenty years, Kepler was finally approved as a Discovery mission in 2001.
After Kepler officially became a NASA mission, Riley Duren from the Jet Propulsion Laboratory joined the team as project systems engineer, and later became chief engineer. To help ensure a smooth progression, Duren and Borucki set out to create a common understanding of the scientific and engineering trade-offs.
“One of the things I started early with Bill and continued throughout the project was to make sure that I was in sync with him every step of the way, because, after all, the reason we’re building the mission is to meet the objectives of the science team,” said Duren. “It was important to develop an appreciation for the science given the many complex factors affecting Kepler mission performance, so early on I made a point of going to every science team meeting that Bill organized so I could hear and learn from the science team.”
The result was something they called the science merit function: a model of the science sensitivity of mission features— the effects on the science of various capabilities and choices. Science sensitivities for Kepler included mission duration, how many stars would be observed, the precision of the photometer’s light measurements, and how many breaks for data downlinks could be afforded. “Bill created a model that allowed us to communicate very quickly the sensitivity of the science to the mission,” explained Duren, “and this became a key tool for us in the years that followed.”
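The article does not give the form of the merit function, but the idea of a single sensitivity model can be illustrated with a purely hypothetical sketch in which candidate designs are scored on mission duration, number of target stars, photometric noise, and observation gaps. Every name, weight, and number below is invented for illustration and does not come from the Kepler project:

```python
# Purely hypothetical sketch of a "science merit" style trade-off model.
# None of these weights or names come from the Kepler project; they only
# illustrate how design options can be compared with a single score.
def science_merit(duration_yr, n_stars, noise_ppm, downlink_gap_days):
    # More years and more stars mean more detectable transits;
    # more noise and longer observation gaps reduce sensitivity.
    baseline = (duration_yr / 3.5) * (n_stars / 100_000)
    noise_penalty = (20.0 / noise_ppm) ** 2
    gap_penalty = max(0.0, 1.0 - downlink_gap_days / 365.0)
    return baseline * noise_penalty * gap_penalty

with_antenna = science_merit(3.5, 100_000, 20, 0)
monthly_turns = science_merit(3.5, 100_000, 20, 12)  # ~1 lost day per month
print(f"{monthly_turns / with_antenna:.2%} of the baseline science")
```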
The science merit function helped the team determine the best course of action when making design trade-offs or descope decisions. One trade-off involved the telecommunications systems. Kepler’s orbit is necessary to provide the stability needed to stare continuously at the same patch of sky, but it puts the observatory far enough away from Earth that its telecommunications systems need to be very robust. The original plan included a high-gain antenna that would deploy on a boom and point toward Earth, transmitting data without interrupting observations. When costs needed to be cut later on, descoping the antenna offered a way to save millions. But this would mean turning the entire spacecraft to downlink data, interrupting observations.
A single Kepler science module with two CCDs and a single field-flattening lens mounted onto an Invar carrier. Each of the twenty-one CCD science modules are covered with lenses of sapphire. The lenses flatten the field of view to a flat plane for best focus.
Photo Credit: NASA/Kepler Mission
“Because we’re looking for transits that could happen any time, it wasn’t feasible to rotate the spacecraft to downlink every day. It would have had a huge impact on the science,” Duren explained. So the team had to determine how frequently it could be done, how much science observation time could be lost, and how long it would take to put Kepler back into its correct orientation. “We concluded we could afford to do that about once a month,” said Duren. Since the data would be held on the spacecraft longer, the recorder that stored the data had to be improved, which would increase its cost even as the mission decreased cost by eliminating the highgain antenna.
“The science merit function that Bill developed was a bridge between the science and engineering that we used in doing these kinds of trade studies,” said Duren. “In my opinion, the Kepler mission was pretty unique in having such a thing. And that’s a lesson learned that I’ve tried to apply to other missions in recent years.”
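The article does not give the actual form of Kepler's science merit function, so the following is only a toy illustration of the idea; every weight, exponent, and number in it is an assumption invented for this sketch, not mission data. It shows how a single figure of merit can make a trade such as the antenna descope quick to evaluate.

```python
# Purely illustrative toy model -- NOT the real Kepler science merit function.
# All constants, exponents, and inputs below are invented for illustration.

def toy_science_merit(years, n_stars, noise_ppm, downlink_hours_per_month):
    """Relative science value as a function of a few mission parameters."""
    duty_cycle = 1.0 - downlink_hours_per_month / (30 * 24.0)  # fraction of time spent observing
    precision = (20.0 / noise_ppm) ** 2      # lower photometric noise -> easier transit detection
    baseline = (years / 3.5) ** 1.5          # longer mission -> more transits per candidate
    return n_stars * precision * baseline * duty_cycle

# Compare a steerable high-gain antenna (no interruptions) with turning the
# spacecraft to downlink for ~16 hours once a month (numbers are made up).
with_antenna  = toy_science_merit(3.5, 150_000, 20.0, 0.0)
monthly_slews = toy_science_merit(3.5, 150_000, 20.0, 16.0)
print(monthly_slews / with_antenna)  # ~0.98, i.e. roughly a 2% science hit
```

In a model like this, a roughly two percent loss of observing time can be weighed directly against the millions saved by the descope.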
The tool came in handy as Kepler navigated through other engineering challenges, ensuring the mission could look at enough stars simultaneously for long periods of time, all the while accommodating the natural noise that comes from long exposures, spacecraft jitter in orbit, and instrumentation. This meant Kepler had to have a wide enough field of view, low-noise detectors, a large aperture to gather enough light, and very stable pointing. Each presented its own challenges.
Kepler’s field of view is nearly 35,000 times larger than Hubble’s. It’s like a very large wide-angle lens on a camera and requires a large number of detectors to see all the stars in that field of view.
Ball Aerospace built an instrument that could accommodate about 95 million pixels—essentially a 95-megapixel camera. “It’s quite a bit bigger than any camera you’d want to carry around under your arm,” Duren said. “The focal plane and electronics for this camera were custom built to meet Kepler’s unique science objectives. The entire camera assembly resides inside the Kepler telescope, so a major factor was managing the power and heat generated by the electronics to keep the CCD detectors and optics cold.”
This image from Kepler shows the telescope’s full field of view—an expansive star-rich patch of sky in the constellations Cygnus and Lyra stretching across 100 square degrees, or the equivalent of two side-by-side dips of the Big Dipper.
Image Credit: NASA/Ames/JPL-Caltech
What might be surprising is that for all that precision, Kepler’s star images are not sharp. “Most telescopes are designed to provide the sharpest possible focus for crisp images, but doing that for Kepler would have made it very sensitive to pointing jitter and to pixel saturation,” explained Duren. “That would be a problem even with our precision pointing control. But of course there’s a trade-off: if you make the star images too large [less sharp], each star image would cover such a large area of the sky that light from other stars would be mixed into the target star signal, which could cause confusion and additional noise. It was a careful balancing act.”
And it’s been working beautifully.
Kepler launched successfully in 2009. After taking several images with its “lens cap” on to calculate the exact noise in the system, the observatory began its long stare at the Cygnus-Lyra region of the Milky Way. By June 2012, it had confirmed the existence of seventy-four planets and identified more than two thousand planet candidates for further observation. And earlier in the year, NASA approved it for an extended mission—to 2016.
“The Kepler science results are essentially a galactic census of the Milky Way. And it represents the first family portrait, if you will, of what solar systems look like,” said Duren.
Kepler’s results will be important in guiding the next generation of exoplanet missions. Borucki explained, “We all know this mission will tell us the frequency of Earth-size planets in the habitable zone, but what we want to know is the atmospheres of these planets. Kepler is providing the information needed to design those future missions.” | http://www.nasa.gov/offices/oce/appel/ask/issues/47/47s_kepler.html | 13 |
28 | Particles and Waves:
We have so far discussed two behaviors of light: straight-line motion (Geometric Optics) and wave-like behavior and interference (Wave Optics). In this chapter, the particle-like behavior of light will be discussed. In fact, the particle-like behavior is also associated with a frequency and cannot be separated from the wave-like behavior.
Max Planck formulated the theory that as electrons orbit the nucleus of an atom, they receive energy from the surroundings in different forms. Typical forms are: heat waves, light waves, and collisions with other electrons and particles. The radius at which an electron orbits is a function of the electron's K.E. and therefore the electron's speed. Recall K.E. = (1/2)Mv2. Each electron is also under a Coulomb attraction force from the nucleus given by F = ke2 / r2. Furthermore, circular motion requires a centripetal force Fc = Mv2/r. We know that it is the Coulomb force F that provides the necessary centripetal force Fc for the electron's circular motion.
The above discussion clarifies that, in the simplest terms, each electron takes a certain radius of rotation depending on its energy or speed. When an electron receives extra energy, it has to change its orbit or radius of rotation: it has to take an orbit of greater radius. The radius it takes is not just any radius. When such a transition occurs, a vacant orbit is left behind that must be filled. It may be filled by the same electron or any other one. The electron that fills that vacant orbit must have the correct energy that matches the energy of that orbit. The electron that fills that orbit may have excess energy that has to be given off before it can fill that vacant orbit. The excess energy that an electron gives off appears as a burst of energy, a parcel of energy, a packet of energy or a quantum of energy, according to Max Planck.
The excess energy is simply the energy difference between two different orbits. If an electron returns from a greater-radius orbit Rm with an energy level Em to a smaller-radius orbit Rn with an energy level En, it releases a quantum of energy equal to the energy difference Em - En. Planck showed that this energy difference is proportional to the frequency (f) of the released quantum or packet of energy. The proportionality constant is h = 6.626x10-34 J.s, called "Planck's constant." The packet or quantum of energy is also called a "photon."
In electron-volts, h has a value of h = 4.14x10-15eV-sec. Planck's formula is:
Em - En = hf or, ΔE = hf
Example 1: Calculate (a) the energy of photons whose frequency is 3.2x1014 Hz. (b) Find their corresponding wavelength and (c) state whether they are in the visible range.
Solution: (a) ΔE = hf ; ΔE = ( 6.626x10-34 J.s )( 3.2x1014 /s) = 2.12x10-19 J
Note that 1eV = 1.6x10-19 J. Our answer is a little more than 1eV. In fact it is (2.12/1.6) = 1.3 eV.
(b) c = f λ ; λ = c / f ; λ = (3.00x108m/s)/ (3.2x1014/s) = 9.4x10-7m = 940x10-9m = 940nm
(c) The visible range is between 400 nm - 700 nm; this is not in the range. It is infrared.
Example 2: Calculate (a) the energy ( in Joules) of each photon of ultraviolet light whose wavelength is 225nm. (b) Convert that energy to electron-volts.
Solution: (a) ΔE = hf = hc /λ ; ΔE = ( 6.626x10-34 J.s )(3.00x108m/s) / 225x10-9m = 8.83x10-19 J.
(b) Since 1eV = 1.6x10-19J; therefore, ΔE = 5.5eV.
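The arithmetic in Examples 1 and 2 can be checked with a few lines of code; this is just a numerical sketch added here, using the same constants the chapter quotes.

```python
# Planck's relation: E = hf = hc/lambda
h  = 6.626e-34   # Planck's constant, J.s
c  = 3.00e8      # speed of light, m/s
eV = 1.6e-19     # joules per electron-volt

# Example 1: f = 3.2e14 Hz
f1 = 3.2e14
E1 = h * f1                      # ~2.12e-19 J, about 1.3 eV
lam1 = c / f1                    # ~9.4e-7 m = 940 nm (infrared)

# Example 2: lambda = 225 nm
lam2 = 225e-9
E2 = h * c / lam2                # ~8.83e-19 J, about 5.5 eV
print(E1, E1 / eV, lam1 * 1e9)
print(E2, E2 / eV)
```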
The mechanism by which the photoelectric effect operates may be used to verify the particle-like behavior of light. A photoelectric cell can be made of a vacuum tube in which two metallic plates or poles are fixed. The two plates are connected to two wires that come out of the sealed glass tube and are used for connection to other electronic components. For the time being, let us connect a photoelectric cell to just a galvanometer (a sensitive ammeter). One terminal (plate) in the tube may be mounted in a slanted fashion so that light coming from outside shines on it effectively. This side forms the negative pole. The other side collects or receives electrons and forms the positive pole.
When photons of light are sent toward the metal plate, it is observed that the galvanometer in the circuit shows the passage of a current. When the light is cut off, the current stops. This shows that the collision of photons of light on the metal surface must release electrons from the outer shells of the outermost atomic layers of the metal oxide coating.
Each energetic photon that collides with the metal surface releases one electron. This released electron has some speed and therefore some K.E. = 1/2Mv2. The atoms of the outer surface that have lost electrons replenish their electron deficiencies from the inner-layer atoms of the metal oxide. This replenishing process transmits layer by layer through the wire and the galvanometer all the way to the pole labeled "Positive." The positive end pulls the released electrons from the negative end through the vacuum tube and the circuit completes itself. This process occurs very fast. As soon as light hits the metal plate, the circuit is on. As soon as light is cut off, the circuit goes off.
The conclusion of the above experiment is that photons of light act as particles and kick electrons out of their orbit. This explains the particle-like behavior of light.
Photoelectric Effect Formula:
The energy necessary to just detach an electron from a metal surface is called the "Work Function" of that metal and is denoted Wo. If the energy of each incident photon on the metal surface is hf, and the kinetic energy of the released electron is K.E., then we may write the following energy balance for a photoelectric cell.
hf = Wo + K.E.
According to this equation, hf must be greater than Wo for an electron to be released. Since h is a constant, f must be high enough for the photon to be effective. There is a limiting frequency below which nothing happens. That limit occurs when the frequency of the incident photon is just enough to release an electron; such a released electron has K.E. = 0. At this limiting frequency, called the "threshold frequency," the kinetic energy of the released electron is zero. Setting K.E. = 0 and replacing f by fth, we get:
h fth = Wo or fth = Wo / h.
The above formula gives the threshold frequency, fth .
Example 3: The work function of the metal plate in a photoelectric cell is 1.73eV. The wavelength of the incident photons is 366nm. Find (a) the frequency of the photons, (b) the K.E. of the released electrons, and (c) the threshold frequency and wavelength for this photoelectric cell.
Solution: (a) c = fλ ; f = c/λ= (3.00x108m/s) / (366x10-9m) = 8.20x1014 Hz
(b) hf = W0 + K.E. ; K.E. = hf - W0
K.E. = ( 4.14x10-15eV-s )( 8.20x1014 /s ) - 1.73eV = 1.66eV
(c) fth = W0 / h ; fth = 1.73eV / (4.14x10-15eV-s) = 4.18x1014 Hz
λth = c / fth ; λth = (3.00x108m/s ) / (4.18x1014 Hz) = 718nm
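The same bookkeeping for Example 3 can be written as a short script (a sketch added for illustration, reusing the chapter's values):

```python
# Photoelectric effect: hf = Wo + K.E., with the threshold at h*f_th = Wo
h_eV = 4.14e-15   # Planck's constant, eV.s
c    = 3.00e8     # m/s

W0  = 1.73        # work function of the metal plate, eV
lam = 366e-9      # wavelength of the incident photons, m

f      = c / lam          # ~8.20e14 Hz
KE     = h_eV * f - W0    # ~1.66 eV for each released electron
f_th   = W0 / h_eV        # ~4.18e14 Hz threshold frequency
lam_th = c / f_th         # ~7.18e-7 m = 718 nm
print(f, KE, f_th, lam_th)
```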
According to de Broglie, for every moving particle of momentum Mv we may associate an equivalent wavelength λ that describes its wave-like behavior:
λ = h / (Mv)
where λ is called the "de Broglie wavelength."
Example 4: Calculate the DeBroglie wavelength associated with the motion of an electron that orbits a hydrogen atom at a speed of 6.56x106 m/s.
Solution: Using λ = h/Mv, we may write: λ = (6.626x10-34 Js) / [(9.108x10-31kg)(6.56x106 m/s)] = 1.11x10-10m.
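Example 4 can be verified the same way (another small sketch, using the electron mass quoted in the solution):

```python
# de Broglie wavelength: lambda = h / (M v)
h = 6.626e-34          # Planck's constant, J.s
m_electron = 9.108e-31 # electron mass, kg

v = 6.56e6             # speed of the electron, m/s
lam = h / (m_electron * v)
print(lam)             # ~1.11e-10 m, about one Angstrom
```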
Chapter 29 Test Yourself 1:
1) The energy of a photon of light, according to Max Planck's formula is (a) E = 1/2Mv2. (b) E = hf. (c) E = Mgh.
2) The Planck's constant, h, is (a) 6.6262x10-34 J.sec. (b) 4.14x10-15 eV.sec. (c) both a & b.
3) An electron orbiting the nucleus of an atom can be energized by (a) receiving a heat wave. (b) getting collided by another subatomic particle. (c) by getting hit by a photon. (d) both a, b, & c.
4) When an electron is energized by any means, it requires (a) a greater radius of rotation. (b) a smaller radius of rotation. (c) it stays in the same orbit but spins faster.
5) When there is a vacant orbit, it will be filled with an electron from (a) a lower orbit. (b) a higher orbit.
6) A higher orbit means (a) a greater radius. (b) a faster moving electron. (c) a greater energy. (d) a, b, and c.
7) The excess energy an electron in a higher orbit has is released in the form of a photon (small packet or burst of energy) as the electron fills up a lower orbit. (a) True (b) False
8) The excess energy is (a) the energy difference, E2 - E1, of the higher and lower orbits. (b) the energy each electron has anyway. (c) both a & b.
9) A photon has a mass of (a) zero. (b) 1/2 of the mass of an electron. (c) neither a nor b.
10) Each photon carries a certain amount of energy. We may use the Einstein formula (E = Mc2) and calculate an equivalent mass for a photon. (a) True (b) False
11) The greater the energy of a photon (a) the higher its speed. (b) the higher its velocity. (c) the higher its frequency. (d) a, b, & c.
12) The greater the energy of a photon the lower its wavelength. (a) True (b) False
13) The formula for wave speed, v = fλ, takes the form of (a) c = fλ for photons of visible light only. (b) for photons of non-visible light only. (c) for the full spectrum of E&M waves, which visible light is a part of.
Problem: A student has calculated a frequency of 4.8x1016 Hz for a certain type of X-ray and a wavelength of 7.0nm.
14) Use the equation v = fλ and calculate v to see if the student's calculation is correct. (a) Correct (b) Wrong
15) The answer to Question 14 is (a) 3.36x108 m/s. (b) 3.36x1017 m/s. (c) neither a nor b.
16) The reason why the answer to Question 14 is wrong is that v turns out to be greater than the speed of light in vacuum, which is 3.0x108 m/s. (a) True (b) False
17) In the photoelectric effect, (a) electrons collide and release photons. (b) photons collide and release electrons. (c) neither a nor b.
18) In a photoelectric cell, the plate that receives photons, becomes (a) negative. (b) positive. (c) neutral.
19) The reason why the released (energized) electrons do not return to their shells is that (a) their energies are more than enough for the orbits they were in. (b) the orbits (of the atoms of the metal plate) that have lost electrons quickly replenish electrons from the inner-layer atoms of the metal plate. (c) the outer shells that have lost electrons will be left in loss forever. (d) a & b.
20) When light is incident on the metal plate of a photoelectric cell, the other pole of the cell becomes positive. The reason is that (a) photons carry negative charges. (b) the other pole loses electrons to replenish the lost electrons of the metal plate through the outside wire that connects it to the metal plate. (c) both a & b.
21) In a photoelectric cell, the released electrons (from the metal plate as a result of incident photons), (a) vanish in the vacuum of the cell. (b) accelerate toward the other pole because of the other pole being positive. (c) neither a nor b.
22) The negative current in the external wire of a photoelectric cell is (a) zero. (b) from the metal plate. (c) toward the negative plate.
23) In a photoelectric cell, the energy of an incident photon is (a) 1/2Mv2. (b) hf. (c) Wo.
24) In a photoelectric cell, the work function of the metal plate is named (a) 1/2Mv2. (b) hf. (c) Wo.
25) In a photoelectric cell, the energy of each released electron is (a) 1/2Mv2. (b) hf. (c) Wo.
26) A 5.00-eV incident photon has a frequency of (a) 1.21x10-15Hz. (b) 1.21x1015Hz. (c) 2.21x1015Hz.
27) An ultraviolet photon of frequency 3.44x1015Hz has an energy, hf, of (a) 14.2 eV. (b) 2.27x10-18J. (c) a & b.
28) When 3.7-eV photons are incident on a 1.7-eV work function metal, each released electron has an energy of (a) 2.0eV. (b) 5.4eV. (c) 6.3eV.
29) 4.7-eV photons are incident on a 1.7-eV work function metal. Each released electron has an energy of (a) 4.8x10-19J. (b) 3.0eV. (c) both a & b.
30) 3.7-eV photons are incident on a 1.7-eV work function metal. Each released electron has a speed of (a) 8.4x10-5m/s. (b) 8.4x105m/s. (c) 8.4x10-15m/s.
31) A speed of 8.4x10-5m/s is not reasonable for a moving electron because (a) electrons always move at the speed of light. (b) this speed has a power of -5 that makes it very close to zero same as being stopped. (c) neither a nor b.
32) If the released electrons in a photoelectric effect have an average speed of 9.0x105 m/s and the energy of the incident photons on the average is 4.0eV, the work function of the metal is (a) 1.3eV. (b) 1.1eV. (c) 1.7eV.
33) The wavelength associated with the motion of proton at a speed of 6.2x106 m/s is (a) 6.4x10-14m. (b) 9.4x10-14m. (c) 4.9x10-14m.
34) The diameter of a hydrogen atom (the extent of its electron cloud) is 0.1nm or 10-10m, called an "Angstrom." The diameter of the nucleus of the hydrogen atom is 100,000 times smaller, or 10-15m, called a "femtometer (fm)." The wavelength associated with the moving proton in Question 33 is (a) 6.4fm. (b) 64fm. (c) 640fm.
1) Calculate (a) the energy of photons for which the frequency is 6.40x1014 Hz. (b) Find their corresponding wavelength and (c) state whether they are in the visible range.
2) Calculate (a) the energy ( in Joules) of each photon of ultraviolet light whose wavelength is 107nm. (b) Convert that energy to electron-volts.
3) The work function of the metal plate in a photoelectric cell is 2.07eV. The wavelength of the incident photons on it is 236nm. Find (a) the frequency of the photons, (b) the energy of each, (c) the K.E. of the released electrons, (d) their speed, and (e) the threshold frequency and wavelength for this photoelectric cell.
4) Calculate the de Broglie wavelength associated with the motion of an electron that has a speed of (a) 1.31x107 m/s.
1) 2.65eV, 469nm, Yes 2) 1.86x10-18J, 12eV
3) 1.27x1015Hz, 5.26eV, 3.19eV, 1.1x106m/s, 5.00x1014Hz, 600.nm | http://www.pstcc.edu/departments/natural_behavioral_sciences/Web%20Physics/Chapter29.htm | 13 |
11 | This chart illustrates how infrared is used to more accurately determine an asteroid's size. As the top of the chart shows, three asteroids of different sizes can look similar when viewed in visible light. This is because visible light from the sun reflects off the surface of the rocks. The more reflective, or shiny, the object is (a feature called albedo), the more light it will reflect. Darker objects reflect little sunlight, so to a telescope millions of miles away, a large dark asteroid can appear the same as a small, light one. In other words, the brightness of an asteroid viewed in visible light is the result of both its albedo and its size.
The bottom half of the chart illustrates what an infrared telescope would see when viewing the same three asteroids. Because infrared detectors sense the heat of an object, which is more directly related to its size, the larger rock appears brighter. In this case, the brightness of the object is not strongly affected by its albedo, or how bright or dark its surface is. When visible and infrared measurements are combined, the albedos of asteroids can be more accurately calculated.
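As a rough illustration of why the infrared view breaks the size-albedo degeneracy, the toy calculation below assumes reflected visible light scales with albedo times the square of the diameter, while the thermal infrared signal scales mainly with the square of the diameter. These scalings and numbers are simplifications for illustration only, not WISE's actual thermal model.

```python
# Toy comparison of a small shiny asteroid and a large dark one (made-up numbers).

def visible_signal(albedo, diameter_km):
    return albedo * diameter_km ** 2           # reflected sunlight, relative units

def infrared_signal(albedo, diameter_km):
    return (1.0 - albedo) * diameter_km ** 2   # absorbed-then-reradiated heat, relative units

small_bright = dict(albedo=0.40, diameter_km=1.0)    # shiny, small rock
large_dark   = dict(albedo=0.04, diameter_km=3.16)   # dark rock, ~10x the surface area

print(visible_signal(**small_bright), visible_signal(**large_dark))    # ~0.40 vs ~0.40: look alike
print(infrared_signal(**small_bright), infrared_signal(**large_dark))  # ~0.60 vs ~9.6: clearly different
```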
JPL manages the Wide-field Infrared Survey Explorer for NASA's Science Mission Directorate, Washington. The principal investigator, Edward Wright, is at UCLA. The mission was competitively selected under NASA's Explorers Program managed by the Goddard Space Flight Center, Greenbelt, Md. The science instrument was built by the Space Dynamics Laboratory, Logan, Utah, and the spacecraft was built by Ball Aerospace & Technologies Corp., Boulder, Colo. Science operations and data processing take place at the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena. Caltech manages JPL for NASA.
More information is online at http://www.nasa.gov/wise and http://wise.astro.ucla.edu. | http://photojournal.jpl.nasa.gov/catalog/PIA14733 | 13 |
10 | How the IF function works
Related Tutorial: Excel 2007 / 2010 IF Function Step by Step Tutorial
The Excel IF function checks to see if a certain condition is true or false. If the condition is true, the function will do one thing; if the condition is false, the function will do something else.
The IF function we are using in this tutorial asks if the value in column A is greater than the value in column B. If it is, the IF function will place the statement "A is larger" in column D. If it is not, the IF function will place the statement "B is larger" in column D.
Our IF function will be entered into cell D3 and it looks like this:
=IF(A3 > B3,"A is larger","B is larger")
Note: the two text statements "A is larger" and "B is larger" are enclosed in quotations. In order to add text to an Excel IF Function, it must be enclosed in quotation marks. | http://spreadsheets.about.com/od/excelfunctions/ss/if_function_sbs.htm | 13 |
13 | Creative and Inventive Thinking
"If everyone thinks the same: No one thinks." This saying illustrates the application of creativity in a classroom. Rather than one right answer, creative thinking activities focus on many inventive ideas.
Creative thinking involves creating something new or original. It involves the skills of flexibility, originality, fluency, elaboration, brainstorming, modification, imagery, associative thinking, attribute listing, metaphorical thinking, and forced relationships. The aim of creative thinking is to stimulate curiosity and promote divergence.
Read Creativity & Standards: Amazing Authentic Approaches. This online workshop developed by Annette Lamb provides an overview of ways that media specialists, technology coordinators, and classroom teachers can promote creative thinking and engage young people in creative activities.
In NCREL's enGauge document (DOC) (2003), six categories of Inventive Thinking are identified.
"Experts agree: As technology becomes more prevalent in our everyday lives, cognitive skills become increasingly critical. “In effect, because technology makes the simple tasks easier, it places a greater burden on higher-level skills” (ICT Literacy Panel, 2002: p. 6). The National Research Council’s Committee on Information Technology Literacy defines intellectual capabilities as “one’s ability to apply information technology in complex and sustained situations and to understand the consequences of doing so” (1999: online, Chapter 2.2). These capabilities are “life skills” formulated in the context of Digital-Age technologies.
Inventive Thinking is comprised of the following 'life skills':
- Adaptability/Managing Complexity: The ability to modify one’s thinking, attitude, or behavior to be better suited to current or future environments, as well as the ability to handle multiple goals, tasks, and inputs, while understanding and adhering to constraints of time, resources, and systems (e.g., organizational, technological)
- Self-Direction: The ability to set goals related to learning, plan for the achievement of those goals, independently manage time and effort, and independently assess the quality of learning and any products that result from the learning experience
- Curiosity: The desire to know or a spark of interest that leads to inquiry
- Creativity: The act of bringing something into existence that is genuinely new and original, whether personally (original only to the individual) or culturally (where the work adds significantly to a domain of culture as recognized by experts)
- Risk-taking: The willingness to make mistakes, advocate unconventional or unpopular positions, or tackle extremely challenging problems without obvious solutions, such that one’s personal growth, integrity, or accomplishments are enhanced
- Higher-Order Thinking and Sound Reasoning: Include the cognitive processes of analysis, comparison, inference/interpretation, evaluation, and synthesis applied to a range of academic domains and problem-solving contexts" (enGauge, 2003, p. 27)
Read Teaching for Creativity: Building Innovation through Open-Inquiry Learning by Jean Sausele Knodt in School Library Monthly (May 2010). IUPUI login required. Click PDF Full Text. Find pages 41-44.
Read Key Word: Creative Thinking in THE BLUE BOOK by Callison and Preddy, 349-353.
Creativity Links by C. Osborne. This page links to great resources on creative thinking.
Creativity Pool. This is a database of creative and original ideas. Submit your own or check to see if someone else has thought of the same thing.
Introduction to Creative Thinking by R. Harris from VirtualSalt. This page compares critical and creative thinking and discusses the myths of creative thinking. | http://www.virtualinquiry.com/scientist/creative.htm | 13 |
23 | In mathematics, an expression is well-defined if it is unambiguous and its objects are independent of their representative. More simply, it means that a mathematical statement is sensible and definite. In particular, a function is well-defined if it gives the same result when the form (the way in which it is presented) but not the value of an input is changed. The term well-defined is also used to indicate whether a logical statement is unambiguous, and a solution to a partial differential equation is said to be well-defined if it is continuous on the boundary.
Well-defined functions
In mathematics, a function is well-defined if it gives the same result when the form (the way in which it is presented) but not the value of an input is changed. For example, a function that is well-defined will take the same value when 0.5 is the input as it does when 1/2 is the input. An example of a "function" that is not well-defined is "f(x) = the first digit that appears in x". For this function, f(0.5) = 0 but f(1/2) = 1. A "function" such as this would not be considered a function at all, since a function must have exactly one output for a given input.
In group theory, the term well-defined is often used when dealing with cosets, where a function on a quotient group may be defined in terms of a coset representative. Then the output of the function must be independent of which coset representative is chosen. For example, consider the group of integers modulo 2. Since 4 and 6 are congruent modulo 2, a function defined on the integers modulo 2 must give the same output when the input is 6 that it gives when the input is 4.
A function that is not well-defined is not the same as a function that is undefined. For example, if f(x) = 1/x, then f(0) is undefined, but this has nothing to do with the question of whether f(x) = 1/x is well-defined. It is; 0 is simply not in the domain of the function.
In particular, the term well-defined is used with respect to (binary) operations on cosets. In this case one can view the operation as a function of two variables and the property of being well-defined is the same as that for a function. For example, addition on the integers modulo some n can be defined naturally in terms of integer addition: [a] ⊕ [b] = [a + b].
The fact that this is well-defined follows from the fact that we can write any representative of [a] as a + kn, where k is an integer. Therefore,
[a + kn] ⊕ [b] = [(a + kn) + b] = [(a + b) + kn] = [a + b] = [a] ⊕ [b],
and similarly for any representative of [b].
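The same check can be done by brute force; the short sketch below (not part of the article) confirms that the sum of two cosets mod 2 does not depend on which representatives are chosen.

```python
# Checking that addition on the integers modulo 2 is well-defined: the result
# must be the same no matter which representative of each coset we pick.

def coset_add(a, b, n=2):
    """Add the cosets [a] and [b] mod n using the representatives a and b."""
    return (a + b) % n

reps_of_0 = [0, 2, 4, 6]   # all represent the coset [0] mod 2
reps_of_1 = [1, 3, 5, 7]   # all represent the coset [1] mod 2

results = {coset_add(a, b) for a in reps_of_0 for b in reps_of_1}
print(results)   # {1}: a single value, so [0] + [1] is well-defined
```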
Well-defined notation
For real numbers, the product a × b × c is unambiguous because (a × b) × c = a × (b × c). In this case this notation is said to be well-defined. However, if the operation (here ×) did not have this property, which is known as associativity, then there must be a convention for which two elements to multiply first. Otherwise, the product is not well-defined. The subtraction operation, a − b − c, is not associative, for instance. However, the notation a − b − c is well-defined under the convention that the − operation is understood as addition of the opposite, thus a − b − c is the same as a + (−b) + (−c). Division is also non-associative. However, a / b / c does not have an unambiguous conventional interpretation, so this expression is ill-defined.
See also
- Weisstein, Eric W. "Well-Defined". From MathWorld--A Wolfram Web Resource. Retrieved 2 January 2013.
- Contemporary Abstract Algebra, Joseph A. Gallian, 6th Edition, Houghlin Mifflin, 2006, ISBN 0-618-51471-6. | http://en.wikipedia.org/wiki/Well-defined | 13 |
47 | Basic Algebra/Working with Numbers/Distributive Property
Sum: The resulting quantity obtained by the addition of two or more terms.
Real Number: An element of the set of all rational and irrational numbers. All of these numbers can be expressed as decimals.
Monomial: An algebraic expression consisting of one term.
Binomial: An algebraic expression consisting of two terms.
Trinomial: An algebraic expression consisting of three terms.
Polynomial: An algebraic expression consisting of two or more terms.
Like Terms: Like terms are expressions that have the same variable(s) and the same exponent on the variable(s). Remember that constant terms are all like terms. This follows from the definition because all constant terms can be seen to have a variable with an exponent of zero.
The distributive property is the short name for "the distributive property of multiplication over addition", although you will be using it to distribute multiplication over subtraction as well. When you are simplifying or evaluating you follow the order of operations. Sometimes you are unable to simplify any further because you cannot combine like terms. This is when the distributive property comes in handy.
When you first learned about multiplication it was described as grouping. You used multiplication as a way to condense the multiple addition of the same quantity. If you wanted to add 3 + 3 + 3 + 3 you could think about it as four groups of three items.
|ooo| + |ooo| + |ooo| + |ooo|
You have 12 items. This is where 4 × 3 = 12 comes in. So as you moved on you took this idea to incorporate variables as well. 3x is three groups of x.
And 3(x + x) is three groups of (x + x), which is (x + x) + (x + x) + (x + x).
This gives you six x's or 6x. Now we need to take this idea and extend it even further. If you have 3(x + 1) you might try to simplify using the order of operations first. This would have you do the addition inside the parentheses first. However, x and 1 are not like terms so the addition is impossible. We need to look at this expression differently if we are going to simplify it. What you have is 3(x + 1), or in other words you have three groups of (x + 1), which is (x + 1) + (x + 1) + (x + 1).
Here you can collect like terms. You have three x's and three 1's.
So you started with 3(x + 1) and ended with 3x + 3; that is, 3(x + 1) = 3x + 3.
The last equation might make it easier to see what the distributive property says to do.
You are taking the multiplication by 3 and distributing that operation across the terms being added in the parentheses. You multiply the x by 3 and you multiply the 1 by 3. Then you just have to simplify using the order of operations.
What Is Coming Next
After you learn about the distributive property you will know how to multiply a monomial by a polynomial. Next, you can use this information to understand how to multiply a polynomial by a polynomial. You will probably move on to multiplying a binomial times a binomial. This will show up in something like (x+2)(3x+5). You can think of a problem like this as x(3x+5) + 2(3x+5). Breaking up the first binomial like this allows you to use your knowledge of the distributive property. Once you understand this use of the distributive property you can extend this understanding even further to justify the multiplication of any polynomial with any polynomial.
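If a computer algebra system is available, these expansions can be checked directly; the snippet below uses Python's SymPy library purely as an optional aid (it is not part of the lesson).

```python
from sympy import symbols, expand

x = symbols('x')

print(expand(3 * (x + 1)))                 # 3*x + 3
print(expand((x + 2) * (3*x + 5)))         # 3*x**2 + 11*x + 10
print(expand(x*(3*x + 5) + 2*(3*x + 5)))   # same result as the line above
```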
Sometimes while you are attempting to isolate a variable in an equation or inequality you will need to use the distributive property. You already know that you use inverse operations to isolate your desired variable, but before you do that you need to combine like terms that are on the same side of the equation (or inequality). Now there might be a step even before that. You will need to see if the distributive property needs to be used before you can combine like terms then proceed to use inverse operations to isolate a variable.
Word to the Wise
Remember that you still have the order of operations. If you can evaluate operations in a straightforward manner it is usually in your best interest to do so. The distributive property is like a back door to the order of operations for when you get stuck because you do not have like terms. Of course when you are dealing with only constant terms everything you encounter is like terms. The trouble happens when you introduce variables. This means that some terms cannot be combined. Remember that variables take the place of real numbers (at least in Algebra 1) so the same rules that govern real numbers will also govern the variables that hold their place and vice versa. You can use the distributive property even when you do not need to.
Example Problems
Example Problem #1:
Simplify 2(x + 4).
Solution to Example Problem #1:
Normally, to follow the order of operations you would add the two terms in the parentheses first, then do the multiplication by 2. This does not work for this expression because x and 4 are unlike terms, so you cannot combine them. We use the distributive property to help us find a way around the order of operations while still being sure that we keep the value of the expression.
We distribute the multiplication by 2 across the addition. We will have 2 multiplied by x and 2 multiplied by 4.
Now we just need to finish the multiplication: 2 × 4 is equal to 8, so the result is 2x + 8.
We are done because we just have two terms being added and we cannot add them because they are not like terms.
Example Problem #2:
Solution to Example Problem #2:
Since the terms inside the parentheses are not like terms we cannot combine them. We can use the distributive property to multiply the term outside the parentheses by each term inside.
This is the first example with subtraction in it. You keep this operation between the two terms just like we kept the addition between the two terms in the previous example. The next step is to carry out each multiplication.
In order to complete the previous step you will already need to know how to multiply monomials.
To summarize all the steps...
Example Problem #3:
Solve for x in 2(x + 10) = c, where c is a constant.
Solution to Example Problem #3:
To solve for a variable you must isolate it on one side of the equation. We need to get the x out of the parentheses. Since we cannot go through the order of operations and just add x plus 10 then multiply by 2, we will have to use the distributive property. First, distribute the multiplication by 2 across the addition inside the parentheses.
Now you can multiply 2 × 10, which gives 2x + 20 on the left side.
Now we can work on getting the x on one side by itself. You need to do the order of operations backwards so we can "undo" what is "being done to" x. To get rid of adding 20 you need to subtract 20. And remember that an equation sets up a relationship that we need to preserve. If you subtract 20 from one side you need to subtract 20 from the other side as well to keep the balance.
Now we need to "undo" the multiplication by 2, so we divide by 2. Whatever you do to one side must be done to the other. So divide both sides by 2.
This is it. You know you are done when the variable is by itself on one side, and it is.
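The steps of Example Problem #3 can also be checked symbolically; in this optional sketch, c again stands for whatever constant appears on the right-hand side.

```python
from sympy import symbols, Eq, solve

x, c = symbols('x c')

# 2*(x + 10) = c  ->  distribute, subtract 20 from both sides, divide by 2
print(solve(Eq(2 * (x + 10), c), x))   # [c/2 - 10], i.e. x = (c - 20)/2
```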
Practice Games
http://www.phschool.com/atschool/academy123/html/bbapplet_wl-problem-430723.html ( video explanation)
Practice Problems
(Note: solutions are in red)
Use the distributive property to rewrite the expression
Notes for Educators
It is obvious to most educators in the classroom that students must have a good number sense to comprehend mathematics in a useful way. A critical part to have number sense is understanding multiplication of real numbers and variables that stand in the place of real numbers.
Students also need as much practice as possible with counting principles. Explaining multiplication and the distributive property as above helps to solidify some counting principles knowledge in the minds of the students.
In order to teach the distributive property an educator might be interested in how students first perceive knowledge of this kind. The better we understand how the brain obtains knowledge the more responsibly we can guide it.
Piaget's model of cognitive development sets up levels of understanding that students' minds pass through.
According to this chart, the distributive property would sit in sensory-motor or perhaps the pre-operational stages. Piaget's work has been largely criticized, but few doubt that it is a good starting place to think about how the brain acquires mathematical understanding.
Annette Karmiloff-Smith was a student of Piaget, and many believe that she brings his ideas forward. She believes that human brains are born with some preset modules that have the innate ability to learn, and as you have experiences you create more independent modules. Eventually these modules start working together to create a deeper understanding and more applicable knowledge. The person moves from implicit to more explicit knowledge, which helps to create verbal knowledge.
Education, and specifically mathematics education, plays a role during the process of moving from the instinctually implicit stages to the more verbal, explicit understanding. A student acquires procedural methods and then learns the theory behind the procedure. This runs parallel to mathematics education. If you accept this model of how the mind comes to understand a concept, it would be critical to teach the students the procedural methods and mechanics of how the distributive property must be carried out. It would then be just as important to show them why this works out the way it does, or at least provide them with the educational opportunities to explore why it works out.
This exploration should take three stages. First the student needs to master the mechanics of the distributive property. In math ed terms, this might be considered drill and kill. The next step would be asking the students to reflect on why they think the distributive property behaves this way. This could be related to encouraging metacognition with your students. Have them reflect not only on the procedure of the distributive property but also on why they think that. Hopefully the third and final step would be the last two steps coming together in the students' minds as a solid understanding of the distributive property.
Since this knowledge would probably first be linked in the student's mind as a procedure only helpful in a math classroom, it might also be beneficial to encourage the students to stretch this concept across domains. After all, one of the main purposes of a public mathematics education is to encourage logical thinking among the populace.
One of the most common errors for students to make is to just multiply the first number in the parentheses by the number outside. For example, a student might incorrectly write 2(x + 1) = 2x + 1 instead of 2x + 2.
This could initially be remedied by explaining the distributive property as taking 2 groups of (x+1) and adding them, like multiplication means to do.
This might lead to another misunderstanding though. It might be confusing to think about things like 0.5(x + 1), because it is hard to think about 0.5 groups of (x + 1). When a student first learns about multiplication they are told that it is like grouping things together to simplify the addition of the same number multiple times. Once they have mastered this concept, multiplication is extended to all rational numbers. Now multiplication is better thought of as a scaling process. You are taking one number and scaling it by a factor of another. This same mental leap is needed to think about distributing a rational number, because the distributive property is still just multiplication.
An effective method to explain multiplication as a scale factor is to have two number lines, one right above the other. If you are multiplying by 1/2 then the scale factor is 1/2, and you can draw guide lines from the top number line to the bottom number line that scale every number down by one half. So a line will be drawn from 2 on the top number line to 1 on the bottom number line. Another line will be drawn from 3 on the top number line to 1.5 on the bottom number line, and so on. Of course this method is easier to use if you have an interactive applet or program of some kind that allows you to update the scale factor immediately. Without this instant gratification the students may find this explanation too cumbersome to follow. | http://en.wikibooks.org/wiki/Basic_Algebra/Working_with_Numbers/Distributive_Property | 13
15 | For 200 years the FUR TRADE dominated the area known as Rupert's Land. Settlement, particularly from eastern Canada and eastern Europe, eventually created a sound agricultural tradition. Postwar political and economic efforts have enabled the economy to diversify industry and develop primary resources, while maintaining agricultural strength.
Land and Resources
The regions of Manitoba are derived chiefly from its landforms. Since the final retreat of the continental ice sheet some 8000 years ago, many physical forces have shaped its surface into 4 major physiographic regions: the Hudson Bay lowland, Precambrian upland, Lake Agassiz lowland and Western upland.
Manitoba provides a corridor for the Red, Assiniboine, Saskatchewan, Nelson and Churchill rivers. Three large lakes, Winnipeg, Winnipegosis and Manitoba, cover much of the Lake Agassiz lowland. They are the remnants of Lake AGASSIZ, which occupied south-central Manitoba during the last ice age. The prolonged duration of this immense lake accounts for the remarkable flatness of one-fifth of the province, as 18-30 m of sediments were laid on the flat, preglacial surface.
Antecedent streams, such as the Assiniboine, Valley and Swan rivers, carved the southwestern part of the province (Western upland) into low plateaus of variable relief, which with the Agassiz lowland provide most of Manitoba's arable land. The Precambrian upland is composed of hard granite and other crystalline rocks that were subject to severe glacial scouring during the Ice Age; its thin soil, rock outcrop and myriad lakes in rock basins are inhospitable to agriculture but are amenable to hydroelectric power sites, freshwater fishing, metal mines and some forestry.
Flat sedimentary rocks underlie the Hudson Bay lowland, and the climate is extremely cold. Little development or settlement exists other than at CHURCHILL, Manitoba's only saltwater port. A line drawn from southeastern Manitoba to Flin Flon on the western boundary separates the arable and well-populated section to the south and west from the sparsely inhabited wilderness to the north and east. The latter comprises about two-thirds of the area of the province.
The bedrock underlying the province varies from ancient Precambrian (Archean) to young sedimentary rocks of Tertiary age. The former has been identified as 2.7 billion years old, among the oldest on Earth, and forms part of the Canadian Shield, a U-shaped band of Precambrian rocks tributary to Hudson Bay. It consists principally of granites and granite gneisses in contact with volcanic rocks and ancient, metamorphosed sedimentary rocks. Contact zones often contain valuable minerals, including nickel, lead, zinc, copper, gold and silver - all of which are mined in Manitoba.
Along the flanks of and overlying the ancient Precambrian rocks are sedimentary rocks ranging from Palaeozoic to Tertiary age. The Lake Agassiz lowland comprises a surface cover of lacustrine sediments superimposed on early Palaeozoic rocks of Ordovician, Silurian and Devonian age, from which are mined construction limestone, gypsum, clay, bentonite, sand and gravel. In favourable structures petroleum has also been recovered from rocks of Mississippian age.
West of the Agassiz lowland rises an escarpment of Cretaceous rocks, which comprise the surface formations of the Western upland. For long periods the escarpment was the west bank of glacial Lake Agassiz. East-flowing rivers such as the Assiniboine, the Valley and the Swan once carried the meltwaters of retreating glaciers, eroding deep valleys (spillways) that opened into this lake. The former lake bottom and the former valleys of tributary streams were veneered with silts and clays, which today constitute the most fertile land in western Canada.
Both the Western upland and the bed of Lake Agassiz comprise the finest farmlands of Manitoba. In the southwest the geologic structures of the Williston Basin in North Dakota extend into Manitoba and yield small amounts of petroleum. A vast lowland resting on undisturbed Palaeozoic sediments lies between the Precambrian rocks of northern Manitoba and Hudson Bay. Adverse climate, isolation and poorly drained peat bogs make this region unsuitable for agriculture.
Minor terrain features of Manitoba were formed during the retreat of the Wisconsin Glacier at the close of the last ice age. The rocks of the Shield were severely eroded, leaving a marshy, hummocky surface threaded with a myriad of lakes, streams and bogs. Relief is rolling to hilly.
Much of the Agassiz lowland, the largest lacustrine plain in North America (286 000 km2), is suitable for irrigation. Much is so flat that it requires an extensive drainage system. Its margins are identified by beach ridges. The Western upland is now covered by glacial drift. Rolling ground moraine broken in places by hilly end moraines has a relief generally favourable to highly productive cultivated land.
Since southern Manitoba is lower than the regions to the west, east and south, the major rivers of western Canada flow into it. Including their drainage basins, these are the SASKATCHEWAN RIVER (334 100 km2); the Red (138 600 km2), ASSINIBOINE (160 600 km2) and WINNIPEG (106 500 km2) rivers. Lakes Winnipeg, Manitoba and Winnipegosis receive the combined flow of these basins. In turn the water drains into Hudson Bay via the NELSON RIVER. These together with the CHURCHILL, HAYES and other rivers provide a hydroelectric potential of 8360 MW.
Climate, Vegetation and Soil
Situated in the upper middle latitudes (49° N to 60° N) and at the heart of a continental landmass, Manitoba experiences large annual temperature ranges: very cold winters and moderately warm summers. The southward sweep of cold, dry arctic and maritime polar air masses in winter is succeeded by mild, humid maritime tropical air in summer. Nearly two-thirds of the precipitation occurs during the 6 summer months, the remainder appearing mostly as snow. The frost-free period varies greatly according to local conditions, but as a general rule the average 100-day frost-free line extends from Flin Flon southeast to the corner of the province.
Spring comes first to the Red River valley, which has a frost-free period of about 120 days, and spreads to the north and west. As a result, the mean number of growing degree days (above 5° C) varies from 2000 to 3000 within the limits defined. Snowfall tends to be heaviest in the east and diminishes westward. Around Winnipeg the average snowfall is 126 cm per year. Fortunately, 60% of the annual precipitation accompanies the peak growing period for grains: May, June and July. Late August and early September are dry, favouring the harvest of cereal grains.
Subarctic conditions prevail over northern Manitoba. Churchill occupies a position on Hudson Bay where abnormally cold summers are induced by sea temperatures. Manitoba's climate is best understood with reference to air masses. During the winter, low temperatures and humidities are associated with the dominance of continental Arctic and continental Pacific air. During spring abrupt seasonal changes introduce maritime tropical air from the south, which is unstable and warm. The usual sequence of midlatitude "lows" and "highs" brings frequent daily temperature changes. Some Pacific air moves east, moderating at intervals the extreme cold of winter.
Manitoba's natural vegetation ranges from open grassland and aspen in the south to mixed forest in the centre, typical boreal forest in the north and bush-tundra by Hudson Bay. In the south high evaporation rates discourage the growth of trees, which are replaced by prairie. Both tall-grass and mixed-grass species were extensive before settlement. Elm, ash and Manitoba maple grow along stream courses, and oak grows on dry sites. With increase in latitude and reduced evaporation, mixed broadleaf forest replaces parkland.
The northern half of the province is characteristically boreal forest, consisting of white and black spruce, jack pine, larch, aspen and birch.
This pattern continues with decreasing density nearly to the shores of Hudson Bay, where the cold summers and short growing period discourage all but stunted growth of mainly spruce and willow and tundra types of moss, lichens and sedges. Spruce, fir and pine are processed for lumber and pulp and paper products. Large mills are found at Pine Falls (newsprint), The Pas (lumber and pulp and paper) and Swan River (oriented strandboard).
In general the province's soil types correlate closely with the distribution of natural vegetation. The following soil descriptions are in order of decreasing agricultural value. The most productive are the black soils (chernozems), corresponding to the once dominant prairie grassland of the Red River valley and southwestern Manitoba. They differ in texture from fine in the former to medium in the latter. Coarse black soils are found in the old Assiniboine delta and the Souris Valley, the former extending from Portage la Prairie to Brandon. Sand dunes are evident in places.
In areas of transition to mixed forest, degraded black soils and grey-wooded soils are common, notably in the area from Minnedosa to Russell south of Riding Mountain. Large areas of the former Lake Agassiz, where drainage is poor, are termed "degraded renzina" because of high lime accumulation. Soils derived from the hard granites and other rocks of the Shield, typically covered with coniferous forest, are described as grey wooded, podsol and peat; they are rated inferior for agriculture.
Manitoba's principal resource is fresh water. Of the 10 provinces it ranks third, with 101 590 km2 in lakes and rivers, one-sixth its total area. The largest lakes are WINNIPEG (24 387 km2), WINNIPEGOSIS (5374 km2) and MANITOBA (4624 km2). Other freshwater lakes of more than 400 km2 are SOUTHERN INDIAN, Moose, Cedar, Island, Gods, Cross, Playgreen, Dauphin, Granville, Sipiwesk and Oxford. Principal rivers are the Nelson, which drains Lake Winnipeg, and the Red, Assiniboine, Winnipeg, Churchill and Hayes. Lake Winnipeg is the only body of water used today for commercial transportation, but the Hayes, Nelson, Winnipeg, Red and Assiniboine rivers were important during the fur trade and early settlement eras.
The network of streams and lakes today is a source of developed and potential hydroelectric power; its installed generating capacity is 4498 MW. Possessing 70% of the hydroelectric potential of the Prairie region, Manitoba promises to become the principal contributor to an electric grid that will serve Saskatchewan and Alberta as well as neighbouring states of the US.
Flooding along the Red River and its principal tributaries, the Souris and Assiniboine, has affected towns as well as large expanses of agricultural land. Major flood-control programs have been undertaken, beginning with the Red River Floodway and control structures completed in 1968. A 48 km diversion ditch protects Winnipeg from periodic flooding. Upstream from Portage la Prairie a similar diversion was built between the Assiniboine River and Lake Manitoba. Associated control structures include the Shellmouth Dam and Fairford Dam. Towns along the Red River are protected by dikes.
Agricultural land is the province's second major resource, with over 4 million ha in field crops in addition to land used for grazing and wild hay production. Based on "census value added," agriculture leads by far all other resource industries; mining follows in third place after hydroelectric power generation. Nickel, copper, zinc and gold account for about three-quarters by value of all minerals produced. The fuels, mainly crude petroleum, are next, followed by cement, sand, gravel and construction stone. Of the nonmetallics, peat and gypsum are important.
Most of Manitoba's productive forestland belongs to the Crown. The volume of wood cut averages 1 600 000 m3 annually, from which lumber, plywood, pulp and paper are produced. Manitoba's freshwater lakes yield large quantities of fish; the leading species by value are pickerel, whitefish, perch and sauger. Hunting and trapping support many native people.
Conservation of resources has been directed mainly to wildlife. Fur-bearing animals are managed through trapping seasons, licensing of trappers and registered traplines. Hunting is managed through the Wildlife Act, which has gone through a series of revisions since 1870. The Endangered Species Act (1990) enables protection of a wider variety of species.
In 1961 a system of wildlife management areas was established and now consists of 73 tracts of crown land encompassing some 32 000 km2 to provide protection and management of Manitoba's biodiversity. Manitoba is on the staging route of the North American Flyway and these wildlife areas protect land which many migratory birds use.
Hunting of all species of game is closely managed and special management areas have been established to provide increased protection for some game, nongame and endangered species and habitats. Hunting and fishing are also closely managed in provincial parks and forest reserves.
Forest conservation includes fire protection, insect control, controlled cutting and reforestation programs. Surveillance of forest land by aircraft and from numerous widely dispersed fire towers significantly reduces the incidence and spread of forest fires. Insects and disease are controlled by aerial spraying, tree removal and regulated burning. Among the more virulent pests are jack pine budworm, spruce budworm, aspen tortrix, forest tent caterpillar and birch beetle. Winnipeg is fighting desperately to contain Dutch elm disease.
Each year millions of seedlings, mainly jack pine, red pine and white spruce, are planted for REFORESTATION. To ensure future supplies of commercial timber, operators must make annual cuttings by management units on a sustained yield basis.
RIDING MOUNTAIN NATIONAL PARK, on the Manitoba escarpment, was the province's only national park until 1996 when Wapusk National Park near Churchill was established. Manitoba has over 100 provincial parks of various types. The natural and recreational parks are the most commonly used and include WHITESHELL PROVINCIAL PARK in the west and Duck Mountain in the east. The province's first wilderness park, Atikaki, was opened in 1985 and is Manitoba's largest park.
The Manitoba Fisheries Enhancement Initiative was announced in 1993 to fund projects that protect or improve fish stocks or enhance the areas where fish live. Projects have included rock riffles for fish spawning, artificial walleye spawning shoals, stream bank protection and habitat enhancement and a fish way. The FEI encourages cooperation with other government and nongovernment agencies. This ensures that fisheries values are incorporated in other sectors; eg, agriculture, forestry and highways.
Between 1682, when YORK FACTORY at the mouth of the Hayes River was established, and 1812, when the first Selkirk settlers came to Red River, settlement consisted of fur-trading posts established by the HUDSON'S BAY COMPANY (HBC), the NORTH WEST COMPANY and numerous independent traders. As agriculture spread along the banks of the Red and Assiniboine rivers, radiating from their junction, the RED RIVER COLONY was formed. In 1870 the British government paid the HBC $1.5 million for control of the vast territory of RUPERT'S LAND and opened the way for the newly formed Dominion of Canada to create the first of 3 Prairie provinces. Manitoba in 1870 was little larger than the Red River valley, but by 1912 its current boundaries were set. Settlement of the new province followed the Dominion Lands Survey and the projected route of the national railway. The lands of the original province of Manitoba were granted to settlers in quarter-section parcels for homesteading purposes under the Dominion Lands Act of 1872.
The remainder of what is now Manitoba was still the North-West Territories at the time. After 1878 settlers could obtain grants of quarter-section parcels of land in those areas provided they managed to improve the land. By 1910 most of southern Manitoba and the Interlake and Westlake areas were settled. Railway branch lines brought most settlers within 48 km (30 mi) of a loading point from which grain could be shipped to world markets. Rural population peaked in 1941, followed by a steady decline resulting from consolidation of small holdings into larger farm units, retreat from the submarginal lands of the frontier because of long, cold winters and poor soils, and the attraction of the larger cities, especially Winnipeg.
Overpopulation of submarginal lands in the Interlake and the Westlake districts and along the contact zone with the Shield in the southeast caused a substantial shift from the farm to the city. Hamlets and small towns have shrunk or disappeared; large supply centres are more easily reached with modern motor vehicles, and children are bused to schools in larger towns and cities. Elimination of uneconomic railway branch lines also has left many communities without services.
Manitoba's population is disproportionately distributed between the "North" and the "South." A line drawn from lat 54° N (north of The Pas) to the southeast corner of the province sharply divides the continuous settled area, containing 95% of the people, from the sparsely populated north. Settlement of the north is confined to isolated fishing stations and mining towns, scattered native reserves and Churchill, a far north transshipment centre on the shores of Hudson Bay.
Until 1941 the rural population component exceeded the urban. The rural population subsequently declined in absolute and relative terms until 2001, when it was 28% of the total. "Rural" includes farm and nonfarm residents and people living in towns and hamlets that have populations under 1000.
Centres designated as "urban" (more than 1000) now comprise 72% of the total. Almost 77% of the urban total live in Winnipeg, which together with its satellite, Selkirk, accounts for nearly 60% of the total provincial population.
WINNIPEG began in the shadow of Upper Fort Garry. In the 1860s free traders, in defiance of the HBC monopoly, located there and competed for furs. After 1870 the tiny village rapidly became a commercial centre for the Red River colony. Located at "the forks" of the Red and Assiniboine rivers, it commanded water and land travel from the west, south and north and became the northern terminus of the railway from St Paul, Minn, in 1878.
Following the decision to have the CANADIAN PACIFIC RAILWAY cross the Red River at Winnipeg (1881), the centre became the apex of a triangular network of rail lines that drew commerce from Alberta eastward, and it eventually became a crossroads for east-west air traffic. Since World War II Winnipeg has experienced modest growth and commercial consolidation in a reduced hinterland. It is the provincial centre of the arts, education, commerce, finance, transportation and government.
Although Winnipeg's pre-eminence is unchallenged, certain urban centres dominate local trading areas. BRANDON, Manitoba's second city, is a distribution and manufacturing centre for the southwest, as is the smaller PORTAGE LA PRAIRIE, set in the Portage plains, one of the richest agricultural tracts in the province. In the north, THOMPSON and FLIN FLON service the mining industry.
The major towns of SELKIRK, DAUPHIN and THE PAS were founded as fur-trading forts and today serve as distribution centres for their surrounding communities. LYNN LAKE, LEAF RAPIDS and Bissett are small northern mining centres.
A network of smaller towns in southwestern Manitoba fits the "central place theory" modified by the linear pattern of rail lines emanating from Winnipeg. Grain elevators approximately every 48 km (30 mi) became the nuclei of hamlets and towns. Eventually, with the advent of motor transport, branch lines were eliminated, and with them many place names that once stood for thriving communities. The present pattern is a hierarchy of central places, from hamlets to regional centres, competing to supply a dwindling farm population.
Since 1961 Manitoba's population growth has been slow but steady, rising from 921 686 in 1961 to 1 119 583 in 2001, despite a fairly constant amount of natural increase of 6000 to 7000 per year. The significant factor in population growth during this period has been migration. During periods of economic health, Manitobans have been less likely to move away, and in fact often return home from other provinces. When the economy is in decline, Manitobans tend to migrate, primarily to Ontario and to the other western provinces.
These cyclical periods, normally 3 to 5 years, either negated or enhanced the natural increase, so the population has experienced short periods of growth followed by short periods of decline, resulting in very slow overall growth.
The labour participation rate (5-year average 2000-04) is higher for men (74.9%) than for women (62.2%), although the figure for women has increased steadily since the latter part of the 20th century. The unemployment rate was also higher for men (5.3%) than for women (4.8%). When Winnipeg is considered separately, its unemployment rate (5.3%) was slightly higher than that of the rural areas (4.6%) and the provincial average (5.1%). Compared with other provinces, Manitoba has had one of the lowest unemployment rates over the last 25 years.
Manitoba's largest employers of labour by industry are trade (85 200), manufacturing (69 100) and health care and social assistance (78 000). The average annual income for individuals in Manitoba in 2001 was $28 400, about 90% of the national average of $31 900.
The dominant "mother tongue"in 2001 was English (73.9%). Other prevalent languages are German, French and Ukrainian and Aboriginal languages. The concentration of those reporting their "mother tongue" as English is higher in urban centres than in rural areas. The reverse is true for French, Ukrainian and German, the latter mainly because of the large MENNONITE farming population. In 1870 the Manitoba Act gave French and English equal status before the courts and in the legislature. In 1890 a provincial act made English the only official language of Manitoba. This act was declared ULTRA VIRES in 1979, and since 1984 the provincial government has recognized both English and French as equal in status.
In schools the Français program provides instruction entirely in French for Franco-Manitobans and the French-immersion program gives all instruction in French to students whose mother tongue is not French. Some schools offer instruction in the majority of subjects in a minority tongue, eg, Polish, Ukrainian, German.
The mother tongues of native peoples are Ojibway, Cree, Dene and Dakota. The native people of the north speak mainly Cree; Ojibway is the mother tongue of most bands in the south, although English is most often spoken.
Manitoba's population encompasses a large diversity of ethnic origins. Most Manitobans trace their ancestries to one or more of the following ethnic groups: British, Canadian, GERMAN, Aboriginal, UKRAINIAN and FRENCH. British descendants have decreased proportionately since 1921; numerically they are strongest in urban areas, whereas the minorities are relatively more numerous in rural areas. The distribution of the larger ethnic groups, especially in rural areas, is related to the history of settlement. In the 2001 census, about 8% of the population listed their sole ethnic origin as Aboriginal. There are also significant populations of those who listed Polish, DUTCH, FILIPINO, RUSSIAN and Icelandic ancestries.
The Mennonites (German and Dutch) are concentrated in the southern Red River valley around ALTONA, STEINBACH and WINKLER; Ukrainians and POLES live in the Interlake district and along the frontier. Many French live south of Winnipeg close to the Red River. Those of ICELANDIC origin are found around the southwestern shore of Lake Winnipeg. The Filipino population is concentrated in Winnipeg. FIRST NATIONS live mainly on scattered reserves, primarily in central and northern Manitoba, although some have moved to a very different lifestyle in Winnipeg.
To some extent religious denominations reflect the pattern of ethnicity. Three groups comprise about half of the population (2001c): UNITED CHURCH (16%), Roman CATHOLIC (26.5%) and ANGLICAN (7.8%). Most Ukrainians are members of the Ukrainian Catholic (2.7%) and Orthodox (1.0%) churches. Those of German and Scandinavian backgrounds support mainly the Lutheran faith (4.6%), and 4.7% are Mennonite. Nearly 19% of the population claimed to have no affiliation with any religion.
Hunting and trapping constitute Manitoba's oldest and today's smallest industry. For 200 years the HBC dominated trade in furs across western Canada as far as the Rocky Mountains. Alongside the fur trade, buffalo hunting developed into the first commercial return of the plains; native people, Métis and voyageurs traded meat, hides and PEMMICAN, which became the staple food of the region.
Until 1875 the fur trade was the main business of Winnipeg, which was by then an incorporated city of 5000 and the centre of western commerce. In the city the retail/wholesale and real estate business grew in response to a new pattern of settlement and the development of agriculture. Red Fife wheat became the export staple that replaced the beaver pelt.
After the westward extension of the main CPR line in the 1880s, farmers and grain traders could expand into world markets and an east-west flow of trade began, with Winnipeg the "gateway" city. Over the next 20 years, this basically agricultural economy consolidated. Lumbering, necessary to early settlement, declined and flour mills multiplied.
During the boom years, 1897 to 1910, there was great commercial and industrial expansion, particularly in Winnipeg, and agriculture began to diversify. The following decades of depression, drought, labour unrest and 2 world wars sharpened the realization that the economy must diversify further to survive, and since WWII there has been modest growth and commercial consolidation.
Today, manufacturing leads all industrial groups, followed by agriculture, the production of hydroelectric power and mining. The primary industries (including electric power generation) represent about half of the total revenue derived from all goods-producing industries. Manufacturing and construction account for the rest.
Agriculture plays a prominent role in the provincial economy. There are diverse sources of income from agriculture. In 1997 farm cash receipts for crops amounted to $1.7 billion compared with livestock at $1.2 billion. Wheat cash receipts are 4 times those from barley and oats combined. Hay crops are important because of a secondary emphasis on livestock production.
Cash receipts from livestock were highest from hogs ($478 million), followed by cattle ($301 million), dairy products, poultry and eggs. Wheat is grown throughout southern Manitoba, primarily where there are medium- to fine-textured black soils, especially in the southwest. Barley, used as prime cattle feed, is tolerant of a range of climatic conditions but is intensively grown south and north of Riding Mountain and in the Swan River valley. CANOLA, used as a vegetable oil and as high-protein cattle feed, is also grown throughout the province; in the late 1990s its importance rivalled that of wheat. Prime malting barley prefers the parkland soils and cooler summer temperatures. Cultivation of oats is general but concentrated in areas of livestock farming; oats also tolerate less productive soils. Flax is grown mostly in the southwest on black soil, and canola is significant on the cooler lands near the outer margin of cultivation.
Specialized crops, including sugar beets, sunflowers, corn (for both grain and silage) and canning vegetables, are concentrated in the southern Red River valley, where heating degree days are at a maximum and soil texture is medium. Beef cattle are raised on most farms in western Manitoba but are less important in the Red River valley.
Dairy cattle are raised mainly in the cooler marginal lands, which extend in a broad arc from the southeast to the Swan River valley. Poultry is heavily concentrated in the Red River valley, but hogs have a much wider distribution, influenced by a surplus of barley and fresh milk. Market gardening occupies good alluvial soil around Winnipeg and the Red River, from which water is obtained for irrigation during dry periods.
Neighbouring farmers set up cooperatives, which vary in scope and purpose from the common purchase of land and machinery to processing and marketing members' products. Two large cooperatives, Manitoba Pool Elevators and United Grain Growers, were founded to handle and market grain, and now deal in livestock and oilseeds and provide members with reasonably priced farm supplies. Manitoba's 8 marketing boards are producer bodies that control stages in the marketing of specific commodities. Wheat, oats and barley for export must be sold to the national CANADIAN WHEAT BOARD.
Agriculture is never likely to expand beyond the limits imposed by shortness of growing season (less than 90 days frost-free) and the poor podsolic soils associated with the Shield. Plans for irrigating the southwestern Red River valley, known as the Pembina Triangle, are under study. Periodic flooding of the upper Red River (south of Winnipeg) has damaged capital structures and reduced income. Approximately 880 000 ha of farmland are under drainage, mostly in the Red River Valley and the Interlake and Westlake districts. The Prairie Farm Rehabilitation Act encourages conservation of water through check dams and dugouts.
Mining contributed $1 billion to the provincial economy in 1996. Of Manitoba's income from all minerals, over 80% is derived from metals, chiefly nickel, copper, zinc, cobalt and gold, with minor amounts of other precious metals. All metals are found in the vast expanse of the Canadian Shield.
Diminishing amounts of petroleum are recovered from sedimentary rocks of Mississippian age in the southwest corner of the province near Virden and Tilston. Industrial minerals, principally quarried stone, gravel and sand, account for 8%. The famous Tyndall stone is a mottled dolomitic limestone quarried near Winnipeg and distributed across Canada. Gypsum is mined in the Interlake district near Gypsumville and in the Westlake area near Amaranth. Silica sand comes from Black Island in Lake Winnipeg.
Manitoba's most productive metal mines are at Thompson. Reputed to be the largest integrated (mining, smelting and refining) operation in North America, Thompson accounts for all of Manitoba's nickel production. The province's oldest mining area, dating from 1930, is at Flin Flon; along with its satellite property at Snow Lake, it is a major producer of copper and zinc and small amounts of gold and silver. Other major centres include Lynn Lake, where until 1989 copper and nickel were mined and now gold has taken their place, and Leaf Rapids, where nickel and copper are mined.
Other than a small amount of petroleum, the province's resources in energy are derived from hydroelectric power. Thermal plants depend mostly on low-grade coal imported from Estevan, Sask, and on diesel fuel. Manitoba Hydro, a crown corporation, is the principal authority for the generation, development and distribution of electric power, except for Winnipeg's inner core, which is served by Winnipeg Hydro, a civic corporation. Hydro power plants were first built along the Winnipeg River and 6 of these plants still operate.
The availability of cheap power within 100 km of Winnipeg has made the city attractive to industry for many years. Since 1955 hydroelectric development has been expanding in the north. In 1960 a plant was commissioned at Kelsey on the Nelson River, and in 1968 the Grand Rapids plant was built near the mouth of the Saskatchewan River. Increased demand led to the construction of 3 additional plants on the Nelson: Jenpeg, Kettle Rapids and Long Spruce. Downstream, another plant at Limestone, with a 1330 MW capacity the largest in Manitoba, was completed by 1992. In addition, 2 thermal plants powered by coal from Estevan are located at Brandon and Selkirk; they supplement hydro sources at peak load times.
Installed generating capacity in 1994 was 4912 MW, with a further hydro potential of 5260 MW. Manitoba sells surplus power, mostly during the summer period, to Ontario, Saskatchewan, Minnesota and North Dakota. Its transmission and distribution system exceeds 76 000 km. Manitoba Hydro serves some 400 000 customers and Winnipeg Hydro another 90 000, who together consumed 27 102 GWh in 1993. Natural gas from Alberta, which is used mainly for industrial and commercial heating, supplies one-third of Manitoba's energy requirements.
In its primary stage (logging), FORESTRY accounts for very little of the value of goods-producing industries. The most productive forestlands extend north from the agricultural zone to lat 57° N; north and east of this line timber stands are sparse and the trees are stunted, gradually merging with tundra vegetation along the shores of Hudson Bay. The southern limit is determined by the northward advance of commercial agriculture. On the basis of productivity for forestry, 40% of the total provincial land area is classified as "productive," 29% as nonproductive and over 30% as nonforested land.
Of the total productive forestland of 152 000 km2, 94% is owned by the provincial government. From 1870 to 1930 lands and forests were controlled by the federal government; after the transfer of natural resources in 1930, the province assumed full responsibility. In 1930 there were 5 forest reserves; today there are 15 provincial forests totalling more than 22 000 km2.
In order of decreasing volume, the most common commercial tree species are black spruce, jack pine, trembling aspen (poplar), white spruce, balsam poplar and white birch. Other species common to Manitoba include balsam fir, larch, cedar, bur oak, white elm, green ash, Manitoba maple and red and white pine.
Timber-cutting practices are restricted around roads, lakes and rivers. The government proposes annual cuts for each management unit on a sustained yield basis. In addition to its reforestation program, the government provides planting stock to private landowners for shelterbelts and Christmas trees.
The commercial inland fishery has been active in Manitoba for over 100 years. Water covers nearly 16% of Manitoba, of which an estimated 57 000 km2 is commercially fished. Two-thirds of the total catch comes from the 3 major lakes - Winnipeg, Manitoba and Winnipegosis - and the balance is taken from the numerous smaller northern lakes. The total value of the 1997-98 catch was $15 million. The catch is delivered to 70 lakeside receiving stations located throughout the province and then transported to the Freshwater Fish Marketing Corporation's central processing plant in Winnipeg. All the commercial catch is processed at this plant. The US and Europe account for most of the corporation's annual sales.
Thirteen commercial species, dressed and filleted, include whitefish, pike, walleye and sauger. Sauger, pike, walleye, trout and catfish are principal sport fish. The Manitoba Department of Natural Resources maintains hatcheries for pickerel, whitefish and trout.
Today, Manitoba has a firm base in its processing and manufacturing industries, as shown by the value of production: over 61 000 people were employed in producing nearly $11 billion (1998) worth of goods. About two-thirds of the value of industrial production comes from the following industries: food processing, distilling, machinery (especially agricultural); irrigation and pumps; primary metals, including smelting of nickel and copper ores, metal fabricating and foundries; airplane parts, motor buses, wheels and rolling-stock maintenance; electrical equipment; computers and fibre optics.
There are also the traditional industries: meat packing, flour milling, petroleum refining, vegetable processing, lumber, pulp and paper, printing and clothing. Winnipeg accounts for 75% of the manufacturing shipments. Half of all manufactured goods are exported, one-third to foreign countries.
Winnipeg's strongest asset has always been its location. In the heart of Canada and at the apex of the western population-transportation triangle, this city historically has been a vital link in all forms of east-west transportation.
The YORK BOATS of the fur trade and the RED RIVER CARTS of early settlers gave way first to steamboats on the Red River, then to the great railways of the 19th and early 20th centuries. Subsequently, Winnipeg provided facilities for servicing all land and air carriers connecting east and west. Today, rail and road join the principal mining centres of northern Manitoba. During the long, cold winter, the myriad of interconnected lakes creates a network of winter roads. Major northern centres are linked to the south via trunk highways. The Department of Highways manages over 73 000 km of trunk highways and 10 700 km of provincial roads (mainly gravel).
Since 1926 BUSH FLYING has made remote communities accessible; several small carriers serve the majority of northern communities. Transcontinental routes of Air Canada and Canadian Airlines International pass through Winnipeg, and Greyhound Air began flying between Ottawa and Vancouver with a stop in Winnipeg in the summer of 1996. NWT Air connects Winnipeg with Yellowknife and Rankin Inlet, Nunavut. Canadian Airlines International serves northern Manitoba with its partner CALM Air. Perimeter Airlines also serves northern points.
Air Canada operates daily flights south to Chicago, Ill, connected with the United Airlines network; and Northwest Airlines provides service to Minneapolis, Minn. Canadian Airlines International, Air Canada and charter airlines, Canada 3000 and Royal, provide direct flights from Winnipeg to Europe and various winter sunspot vacation destinations.
Because Winnipeg is Canada's principal midcontinent rail centre, both CNR and CPR have extensive maintenance facilities and marshalling yards in and around the city. Wheat has the largest freight volume, but diverse products from petroleum and chemicals to motor cars and lumber are transported by rail. The CNR owns Symington Yards, one of the largest and most modern marshalling yards in the world. At Transcona it maintains repair and servicing shops for rolling stock and locomotives, and at GIMLI, a national employee training centre. In addition to repair shops and marshalling yards, the CPR has a large piggyback terminal; Weston shops, one of 3 in its trans-Canada system, employs some 2500 people.
Via Rail operates Canada's passenger train service, which uses the lines of the 2 major railways and provides direct service from Vancouver to Halifax and Saint John.
In 1929 the HUDSON BAY RAILWAY, now part of the CNR system, was completed to the port of Churchill, where today major transshipment facilities handle on average annually some 290 000 t of grain between July 20 and October 31. Formerly an army base, Churchill is also a research centre and a supply base for eastern arctic communities.
Government and Politics
On 15 March 1871 the first legislature of Manitoba met; it consisted of an elected legislative assembly with members from 12 English and 12 French electoral districts, an appointed legislative council and an appointed executive council that advised the government head, Lieutenant-Governor Adams G. ARCHIBALD. By the time the assembly prorogued, systems of courts, education and statutory law had been established, based on British, Ontarian and Nova Scotian models. The legislative council was abolished 5 years later.
Since 1871 the province has moved from communal representation to representation by population and from nonpartisan to party political government. Today the LIEUTENANT-GOVERNOR is still formal head of the provincial legislature and represents the Crown in Manitoba. The government is led by the PREMIER, who chooses a CABINET, whose members are sworn in as ministers of the Crown. Her Majesty's Loyal Opposition is customarily headed by the leader of the party winning the second-largest number of seats in a given election. Laws are passed by the unicameral legislative assembly, consisting of 57 elected members. See MANITOBA PREMIERS: TABLE; MANITOBA LIEUTENANT-GOVERNORS: TABLE.
The judiciary consists of the superior courts, where judges are federally appointed, and many lesser courts that are presided over by provincial judges. The RCMP is contracted to provide provincial police services and municipal services in some centres; provincial law requires cities and towns to employ enough police to maintain law and order. Manitoba is federally represented by 14 MPs and 6 senators.
Local government is provided by a system of municipalities. Manitoba has 5 incorporated cities (Winnipeg, Brandon, Selkirk, Portage la Prairie and Thompson), 35 incorporated towns and 40 incorporated villages. (An incorporated municipality has a greater degree of autonomy, especially in taxing and borrowing power.) There are over 100 rural municipalities ranging in size from 4 to 22 TOWNSHIPS, many of which contain unincorporated towns and villages. Locally elected councils are responsible for maintaining services and administering bylaws.
In remote areas where population is sparse, the government has established 17 local government districts with an appointed administrator and an elected advisory council. The Department of Northern Affairs has jurisdiction over remote areas in northern Manitoba and uses the community council as an advisory body. Community councils are elected bodies, mostly in Métis settlements, through which the government makes grants. Each has a local government "coordinator" to represent the government.
For the fiscal year ending 31 March 1998, the province had revenues of $5.8 billion and expenditures of $5.7 billion. Income taxes garnered $1.6 billion and other taxes, including a 7% sales tax and gasoline and resources taxes, totalled another $1.6 billion. Unconditional transfer payments and shared-cost receipts from federal sources covering education, health and economic development were $1.7 billion. More than 50% of government expenditures go toward education, health and social services.
Health and Welfare
The Manitoba Health Services Commission, with generous support from Ottawa, provides nonpremium medical care for all its citizens. A pharmacare program pays 80% of the cost of all prescription drugs above $75 ($50 for senior citizens). The province and Winnipeg each have a free dental care program for all elementary-school children.
The departments of Health and of Community Services and Corrections provide services in public and mental health, social services, probations and corrections. The government is responsible for provincial correction and detention facilities and through the Alcoholism Foundation administers drug and alcohol rehabilitation facilities.
Manitoba has over 80 provincially supported hospitals, including 10 in Winnipeg, and over 100 personal care homes in addition to elderly persons' housing. Winnipeg is an important centre for medical research; its Health Sciences Centre includes Manitoba's chief referral hospitals and a number of specialist institutions, among them the Children's Centre and the Manitoba Cancer Treatment and Research Foundation.
While Manitoba's system of RESPONSIBLE GOVERNMENT was maturing during the 1870s, communal loyalties rather than party politics dominated public representation. As the 1880s advanced, however, a strong Liberal opposition to John NORQUAY's nonpartisan government developed under Thomas GREENWAY. After the election of 1888, Greenway's Liberals formed Manitoba's first declared partisan government, which held office until defeated in 1899 (on issues of extravagance and a weak railway policy) by an invigorated Conservative Party under Hugh John MACDONALD. When Macdonald resigned in 1900, hoping to return to federal politics, R.P. ROBLIN became premier, a position he held until 1915, when a scandal over the contracting of the new legislative buildings brought down the government in its fifth term.
In 1920, against the incumbent Liberal government of T.C. NORRIS, the United Farmers of Manitoba first entered provincial politics and returned 12 members to the legislative assembly, heralding a new era of nonpartisan politics. The promise was fulfilled in the election of 1922, when the UFM won a modest majority and formed the new government. Manitoba was returning to its roots, reaffirming rural virtues of thrift, sobriety and labour to counter rapid change, depression and the aftereffects of war.
The farmers chose John BRACKEN as their leader, and he remained premier until 1943 despite the UFM withdrawal from politics in 1928. Bracken then formed a coalition party, the Liberal-Progressives, which won a majority in the assembly in 1932, but only gained a plurality in the 1936 election, surviving with Social Credit support. He continued as premier in 1940 over a wartime government of Conservative, Liberal-Progressive, CCF and Social Credit members.
Bracken became leader of the federal Conservatives in 1943 and was replaced by Stuart S. Garson. In 1945 the CCF left the coalition, the Conservatives left it in 1950 and the Social Credit Party simply faded. From 1948 the coalition was led by Premier Douglas CAMPBELL, although after 1950 it was predominantly a Liberal government.
From 1958 the Conservatives under Duff ROBLIN governed the province until Edward SCHREYER's NDP took over in 1969 with a bare majority. His government survived 2 terms; during its years in office, many social reforms were introduced and government activity in the private sector was expanded.
In 1977 Sterling LYON led the Conservative Party to victory on a platform of reducing the provincial debt and returning to free enterprise, but his government lasted only one term. In 1981 the NDP returned to power under Howard PAWLEY. They were re-elected in 1985. The Lyon government, in fact, was the only one-term government in Manitoba's history to that time, as the political tradition of the province has been notable for its long-term stability, particularly during the era of the UFM and later coalition governments.
Pawley's NDP was ousted in 1988 when Gary Filmon led the Conservatives to an upset minority victory. Filmon's government was precarious, and the Liberal opposition was extremely vocal in its criticism of the MEECH LAKE ACCORD (see MEECH LAKE ACCORD: DOCUMENT). Debate over the accord dominated the provincial agenda until the accord was finally killed by procedural tactics led by NDP native MLA Elijah HARPER. Filmon went to the polls immediately following the death of the accord in 1990 and eked out a slim majority victory. This majority enabled Filmon to finally dictate the legislative agenda, and he began concentrating his government's efforts on bringing the province's rising financial debt under control. His government's success in this endeavour won Filmon an increased majority in April 1995.
The denominational school system was guaranteed by the Manitoba Act of 1870 and established by the provincial School Act of 1871: local schools, Protestant or Roman Catholic, might be set up on local initiative and administered by local trustees under the superintendence of the Protestant or Roman Catholic section of a provincial board of education. The board was independent of the government but received grants from it, which the sections divided among their schools. Until 1875 the grants were equal; disparity in the population and the ensuing Protestant attack on dualism in 1876 made it necessary to divide the grants on the basis of enrolment in each section.
After 1876 the British (predominantly Protestant) and French (Roman Catholic) coexisted peaceably and separately, until agitation against the perceived growing political power of the Catholic clergy spread west from Québec in 1889. A popular movement to abolish the dual system and the official use of French culminated in 1890 in the passage of 2 provincial bills. English became the only official language and the Public Schools Act was altered. Roman Catholics could have private schools supported by gifts and fees, but a new department of education, over local boards of trustees, was to administer nondenominational schools.
French Catholic objections to violations of their constitutional rights were ignored by the Protestant Ontarian majority, who saw a national school system as the crucible wherein an essentially British Manitoba would be formed. Intervention by the courts and the federal government eventually produced the compromise of 1897: where there were 40 (urban) or 10 (rural) Catholic pupils, Catholic teachers were to be hired; where at least 10 pupils spoke a language other than English, instruction was to be given in that language; school attendance was not compulsory, since Catholics were still outside the provincial system.
After 20 years of decreasing standards and linguistic chaos, the Public Schools Act was amended in 1916; the bilingual clause was removed and the new School Attendance Act made schooling compulsory for Catholics and Protestants alike, whether publicly or privately educated.
Since 1970 Franco-Manitobans have been able to receive instruction entirely in French through the Français program; as well, non-French students in French immersion are taught all subjects in French. Instruction in a minority tongue in the majority of subjects is possible in some schools. Both English- and French-medium schools are organized in 48 school divisions, each administered by an elected school board, under the Department of Education.
In order to meet Manitoba's constitutional obligations and the linguistic and cultural needs of the Franco-Manitoban community, a new Francophone School Division was established and was in place for the 1994-95 school year.
There are 14 school districts, of which 6 are financed mainly from sources other than provincial grants and taxes; these include private schools sponsored by church organizations and by the federal government. School boards are responsible for maintaining and equipping schools, hiring teachers and support staff and negotiating salaries. The Manitoba Teachers Federation negotiates with the boards.
In 1994 enrolment in the elementary/secondary schools of the province totalled 221 610, and 14 500 teachers were employed, of whom 12 675 were full-time. Elementary schools consist of kindergarten and grades 1 to 8. Secondary schools, grades 9 to 12, offer a varied curriculum with core subjects and several options.
Special, practically oriented programs are available at 35 vocational-industrial schools, and vocational-business training is given in 106 schools. There are also special services for the disabled, the blind, the deaf and those with learning disabilities.
COMMUNITY COLLEGES provide a wide variety of career-oriented adult educational and vocational programs, and day, evening and extension programs - full-time and part-time - are offered in more than 120 communities. Assiniboine Community College operates in and outside Brandon. Responsible for all community college agricultural training in the province, it offers 16 certificate courses and 11 diploma courses. Keewatin College offers 16 certificate courses of one year or less, and 4 diploma courses, mostly in northern Manitoba. Red River College, located in Winnipeg, provides 33 diploma courses as well as 28 certificate courses, including courses in applied arts, business administration, health services, industrial arts and technology.
During 1993-94 there were 3900 full-time and 1646 part-time students enrolled in community colleges in Manitoba. The community colleges, previously operated by the province, were incorporated under appointed boards of governors in April 1993. The community colleges are now funded by an annual grant from the province. Manitoba spent over $54 million on community colleges in 1993-94.
In 1877 St Boniface (French, Roman Catholic), St John's (Anglican) and Manitoba (Presbyterian) colleges united as the UNIVERSITY OF MANITOBA. Later they were joined by other colleges, but in 1967 a realignment of the constituents resulted in 3 distinct universities. The University of Manitoba is one of the largest universities in Canada, with numerous faculties and 4 affiliated colleges: St John's (Anglican), St Paul's (Roman Catholic), St Andrew's (Ukrainian Orthodox) and St Boniface, the only college providing instruction entirely in French. In 1994-95, 17 905 full-time and 6062 part-time students were enrolled at the U of Man.
BRANDON UNIVERSITY offers undergraduate programs in arts, science, education and music and masters degrees in education and music, with an enrolment of 1541 full-time and 1956 part-time students (1994-95). The UNIVERSITY OF WINNIPEG, located in central Winnipeg, provides primarily undergraduate instruction, teacher training and theological studies for 2679 full-time and 7387 part-time students (1994-95). Teachers are trained at all 3 universities and at Red River College.
To a large degree, Manitoba's cultural activities and historical institutions reflect the varied ethnic groups that comprise its fabric. The provincial government, through its Department of Culture, Heritage and Citizenship, subsidizes a wide range of cultural activities. Many annual FESTIVALS celebrate ethnic customs and history: the Icelandic Festival at Gimli, the Winnipeg Folk Festival, National Ukrainian Festival at Dauphin, Opasquia Indian Days and the Northern Manitoba Trappers' Festival at The Pas, Pioneer Days at Steinbach, Fête Franco-Manitobaine at La Broquerie, the midwinter Festival du voyageur in St Boniface, and Folklorama sponsored by the Community Folk Art Council in Winnipeg.
Manitoba's historic past is preserved by the Museum of Man and Nature (Winnipeg), considered one of the finest interpretive museums in Canada; by the Living Prairie Museum, a 12 ha natural reserve; by the St Boniface Museum, rich in artifacts from the Red River colony; and the Provincial Archives and Hudson's Bay Company Archives, all located in Winnipeg. Also in Winnipeg is the Planetarium, one of the finest in North America, and Assiniboine Park Zoo, which has a collection of more than 1000 animals.
The Manitoba Arts Council promotes the study, enjoyment, production and performance of works in the arts. It assists organizations involved in cultural development; offers grants, scholarships and loans to Manitobans for study and research; and makes awards to individuals. The Winnipeg Symphony Orchestra, ROYAL WINNIPEG BALLET, Manitoba Theatre Centre, Le Cercle Molière, Manitoba Opera Association, Manitoba Contemporary Dancers and Rainbow Stage all contribute to Winnipeg's position as a national centre of the performing arts.
Among well-known and respected Manitoban writers are the novelists Margaret LAURENCE and Gabrielle ROY, essayist, historian and poet George WOODCOCK and popular historian Barry Broadfoot. The Winnipeg Art Gallery, in addition to traditional and contemporary works, houses the largest collection of Inuit art in the world.
Among the fine historic sites associated with the settlement of the West is the HBC's Lower Fort Garry (see FORT GARRY, LOWER). Situated on the Red River 32 km northeast of Winnipeg, this oldest intact stone fort in western Canada was built in 1832 and preserves much of the atmosphere of the Red River colony. The Forks, a waterfront redevelopment and national HISTORIC SITE, is the birthplace of Winnipeg. Located at the junction of the Red and Assiniboine rivers, this site has been used as a trade and meeting place for over 6000 years. Today, it is again a place where recreational, cultural, commercial and historical activities bring people together. Upper Fort Garry Gate, the only remnant of another HBC fort (see FORT GARRY, UPPER), is nearby.
Among a number of historic houses is Riel House, home of the Riel family. York Factory, located at the mouth of the Hayes River and dating from 1682, was a transshipment point for furs. The partially restored PRINCE OF WALES FORT (1731-82) at the mouth of the Churchill River was built by the HBC and destroyed by the French. Other points of historical significance are St Boniface Basilica, the oldest cathedral in western Canada and the site of Louis RIEL's grave; Macdonald House, home of Sir H. J. MACDONALD; Fort Douglas; Ross House; Seven Oaks House; and the Living Prairie Museum.
Manitoba has 5 daily newspapers: the Winnipeg Free Press, the Winnipeg Sun, the Brandon Sun, the Portage la Prairie Daily Graphic and the Flin Flon daily Reminder. Sixty-two weekly and biweekly papers serve suburban Winnipeg and rural areas, with emphasis on farming, and several trade and business journals are published. The French-language weekly La Liberté is published in St Boniface, and Winnipeg produces more foreign-language newspapers than any other centre in Canada.
The province has 20 AM radio stations (all but 4 are independent), including the French-language station CKSB, and 7 FM radio stations. As well, the CBC has 28 English- and French-language rebroadcasters. Four television stations operate from Winnipeg and one from Brandon, and CABLE TELEVISION is available in most centres. The Manitoba Telephone System, a crown corporation, provides telecommunications facilities for all of Manitoba. North America's first publicly owned system, it was established in 1908 after the provincial government expropriated Bell Telephone's operations because of high rates and inefficiency.
Trading posts were soon established along the shores of Hudson Bay: Fort Hayes (1682), Fort York (1684), Fort Churchill (1717-18), Prince of Wales Fort (1731). Henry KELSEY, an HBC employee, penetrated southwest across the prairies 1690-92. The LA VÉRENDRYE family travelled west via the Great Lakes, building Fort Maurepas on the Red River (1734), then 4 other posts within the present area of Manitoba. The subsequent invasion by independent traders of lands granted to the HBC stimulated an intense rivalry for pelts, which ended only with amalgamation of the HBC and the North West Co in 1821. About 20 forts existed at various times south of lat 54° N, but the early explorers left little permanent impression on the landscape.
Agricultural settlement began in 1812 with the arrival of Lord SELKIRK's settlers at Point Douglas, now within the boundaries of Winnipeg. Over the next 45 years, the Red River Colony at Assiniboia survived hail, frost, floods, grasshoppers, skirmishes with the Nor'Westers and an HBC monopoly. Expansionist sentiment from both Minnesota and Upper Canada challenged the HBC's control over the northwest and the Red River Colony.
In 1857 the British government sponsored an expedition to assess the potential of Rupert's Land for agricultural settlement; the PALLISER EXPEDITION reported a fertile crescent of land suitable for agriculture extending northwest from the Red River valley. That same year the Canadian government sent Henry Youle Hind to make a similar assessment. The conflict between agricultural expansion and the rights of the Métis broke out in 2 periods of unrest (see RED RIVER REBELLION; NORTH-WEST REBELLION).
Eventually the HBC charter was terminated and the lands of the North-West were transferred to the new Dominion of Canada by the Manitoba Act of 1870; quarter sections of land were then opened to settlement. It was soon evident that the diminutive province needed to expand. Settlers were rapidly moving to the northwest and spilling over the established boundaries.
In 1881, after years of political wrangling with the federal government, the boundaries were extended to their present western position, as well as being extended farther east, and to lat 53° N. Between 1876 and 1881, 40 000 immigrants, mainly Ontario British, were drawn west by the prospect of profitable wheat farming enhanced by new machinery and milling processes.
Mennonites and Icelandic immigrants arrived in the 1870s, the former settling around Steinbach and Winkler, the latter near Gimli and Hecla. Immigration then slowed until the late 1890s and it was limited mostly to small groups of Europeans.
Between 1897 and 1910, years of great prosperity and development, settlers from eastern Canada, the UK, the US and eastern Europe - especially Ukraine - inundated the province and the neighbouring lands. Subsequent immigration was never on this scale.
From 1897 to 1910 Manitoba enjoyed unprecedented prosperity. Transportation rates fell and wheat prices rose. Grain farming still predominated, but mixed farms prospered and breeders of quality livestock and plants became famous.
Winnipeg swiftly rose to metropolitan stature, accounting for 50% of the increase in population. In the premier city of the West, a vigorous business centre developed, radiating from the corner of Portage Avenue and Main Street: department stores, real estate and insurance companies, legal firms and banks thrived. Abattoirs and flour mills directly serviced the agricultural economy; service industries, railway shops, foundries and food industries expanded.
Both the CPR and the Canadian Northern Railway (later CNR) built marshalling yards in the city which became the hub of a vast network of rail lines spreading east, west, north and south. In 1906 hydroelectricity was first generated at PINAWA on the Winnipeg River, and the establishment of Winnipeg Hydro 28 June 1906 guaranteed the availability of cheap power for domestic and industrial use.
The general prosperity ended with the depression of 1913; freight rates rose, land and wheat prices plummeted and the supply of foreign capital dried up. The opening of the Panama Canal in 1914 ended Winnipeg's transportation supremacy, since goods could move more cheaply between east and west by sea than overland.
During WWI, recruitment, war industry demands, and cessation of immigration sent wages and prices soaring; by 1918 inflation seemed unchecked and unemployment was prevalent. Real wages dropped, working conditions deteriorated and new radical movements grew among farmers and urban workers, culminating in the WINNIPEG GENERAL STRIKE of May 1919.
Ensuing depression followed by an industrial boom in the late 1920s tilted the economic seesaw again. By 1928 the value of industrial production exceeded that of agricultural production; the long agricultural depression continued into the 1930s, aggravated by drought, pests and low world wheat prices, and the movement from farm to city and town accelerated. Cities were little better off: industry flagged and unemployment was high.
To eliminate the traditional boom/bust pattern, attempts have been made to diversify the economy. The continuing expansion of mining since 1911 has underlined the desirability of broadening the basis of the economy. The demands of WWII reinforced Manitoba's dependency on agriculture and primary production, but the postwar boom gave the province the opportunity to capitalize on its established industries and to broaden the economic base.
Since WWII, the Manitoba economy has been marked by rapid growth in the province's north. The development of rich nickel deposits in northern Manitoba by Inco Ltd led to the founding of the City of Thompson, whose fluctuating fortunes have mirrored swings in world commodity prices. The region has been the site of several "megaprojects," including the Manitoba Forest Resources operation at The Pas and the huge Limestone hydroelectric generating plant on the Nelson River. The economic future of Manitoba is thus a mixed one - a continuing agricultural slump, offset by growth in light industry, publishing, the garment industry and the export of power to the US.
The 20 years from 1970 to 1990 saw a dramatic realignment of provincial politics, with the virtual disappearance of the provincial Liberal Party and the rise to power of the New Democratic Party under Edward Schreyer and Howard Pawley. Typical of the social democratic initiatives of the NDP were the introduction of a government-run automobile insurance plan and the 1987 plan to purchase Inter-City Gas Co. The government's attempt to increase bilingual services within the province aroused old passions, however, and was abandoned. The Conservative government of Filmon in the 1990s faced the same problems of public debt and economic recovery as the rest of Canada.
Author T.R. WEIR
J. Brown, Strangers in Blood (1980); K. Coates and F. McGuinness, Manitoba, The Province & The People (1987); W.L. Morton, Manitoba: A History (2nd ed, 1967); G. Friesen, Prairie West (1984); M. McWilliams, Manitoba Milestones (1928); Alan Artibise, Winnipeg: An Illustrated History (1977).
Links to Other Sites
The website for the Historica-Dominion Institute, parent organization of The Canadian Encyclopedia and the Encyclopedia of Music in Canada. Check out their extensive online feature about the War of 1812, the "Heritage Minutes" video collection, and many other interactive resources concerning Canadian history, culture, and heritage.
Government of Manitoba
The official website for the Government of Manitoba. Click on "About Manitoba" for information about Manitoba's geography, history, climate, and more.
Symbols of Canada
An illustrated guide to national and provincial symbols of Canada, our national anthem, national and provincial holidays, and more. Click on "Historical Flags of Canada" and then "Posters of Historical Flags of Canada" for additional images. From the Canadian Heritage website.
Manitoba Parks and Natural Areas
The website for Manitoba Parks and Natural Areas.
Manitoba Heritage Network
Explore Manitoba's history at this website for the Manitoba Heritage Network.
Library and Archives Canada
The website for Library and Archives Canada. Offers searchable online collections of textual documents, photographs, audio recordings, and other digitized resources. Also includes virtual exhibits about Canadian history and culture, and research aids that assist in locating material in the physical collections.
Festivities of the Living and the Dead in the Americas
A multimedia tour of major festivals across Canada and throughout the Americas. Describes the origins and unique features of each event. From the Virtual Museum of Canada.
A well-illustrated online guide to natural geological processes related to plate tectonics, earthquakes, and related events. From Natural Resources Canada.
Maps of provinces and territories from "The Atlas of Canada," Natural Resources Canada.
Hudson's Bay Company Archives
A comprehensive information source about the history of the Hudson’s Bay Company and the fur trade in Canada. A Manitoba Government website.
Geographical Names of Canada
Search the "Canadian Geographical Names Data Base" for the official name of a city, town, lake (or any other geographical feature) in any province or territory in Canada. See also the real story of how Toronto got its name. A Natural Resources Canada website.
Manitoba Agricultural Hall of Fame
Check out the life stories of people who have contributed to agriculture and the historical overview of agriculture in Manitoba.
An overview of the major issues and events leading up to Manitoba's entry into Confederation. Includes biographies of prominent personalities, old photos and related archival material. From Library and Archives Canada.
The Société Historique de Saint-Boniface
The Heritage Centre conserves and promotes resources of cultural, heritage, judicial and historical value, the product of more than 250 years of Francophone presence in Western Canada and Manitoba. Their website is a great source for information about Louis Riel, Le "Voyageur," and other Manitoba history topics.
The website for Travel Manitoba highlights popular tourist destinations and events throughout the province.
A history of the "census" in Canada. Check the menu on the left for data on small groups (such as lone-parent families, ethnic groups, industrial and occupational categories and immigrants) and for information about areas as small as a city neighbourhood or as large as the entire country. From the website for Statistics Canada.
The Historic Resources Branch of Manitoba Culture, Heritage and Tourism. A reference source for genealogists, historians, archaeologists, students and interested laypersons.
The Rat Portage War
A fascinating account of the 19th Century border dispute involving Manitoba and Ontario. From the Winnipeg Police Service website.
OurVoices - Stories of Canadian People and Culture
A superb online audio collection of traditional stories about the Omushkego (Swampy Cree) people of northern Manitoba and Ontario. Presented in Cree and in English by Louis Bird, storyteller and elder. Also features printed transcripts and other resources. From the Centre for Rupert's Land Studies at the University of Winnipeg.
Manitoba Historical Society
An extensive online resource devoted to the history of Manitoba. Features biographies of noteworthy residents, articles from the journal “Manitoba History,” and much more.
Aboriginal Place Names
This site highlights Aboriginal place names found across Canada. From the Department of Aboriginal Affairs and Northern Development.
Find out about the intriguing origins of some of Manitoba’s historic place names. From the Manitoba Historical Society.
Mining in Manitoba
Scroll down to “The Flin Flon Mine” section to learn about the origin of the name “Flin Flon” and the accidental geological discovery that led to the establishment of the Flin Flon mine. This article also digs into the history of other Manitoba mining sites. From the Manitoba Historical Society.
An extensive biography of Edgar Dewdney, civil engineer, contractor, politician, office holder, and lieutenant governor. Provides details about his involvement with Indian and Métis communities in the North-West Territories, the settlement of the West, the construction of the transcontinental railway, and related events. From the “Dictionary of Canadian Biography Online.”
Archives Canada is a gateway to archival resources found in over 800 repositories across Canada. Features searchable access to virtual exhibits and photo databases residing on the websites of individual archives or Provincial/Territorial Councils. Includes documentary records, maps, photographs, sound recordings, videos, and more.
Four Directions Teachings
Elders and traditional teachers representing the Blackfoot, Cree, Ojibwe, Mohawk, and Mi’kmaq share teachings about their history and culture. Animated graphics visualize each of the oral teachings. This website also provides biographies of participants, transcripts, and an extensive array of learning resources for students and their teachers. In English with French subtitles.
National Inventory of Canadian Military Memorials
A searchable database of over 5,100 Canadian military memorials. Provides photographs, descriptions, and the wording displayed on plaques. Also a glossary of related terms. A website from the Directorate of History and Heritage.
The Société franco-manitobaine supports and promotes programs that preserve and enhance French language and culture in Manitoba.
Manitoba Association of Architects
The website for the Manitoba Association of Architects.
The Flour Milling Industry in Manitoba Since 1870
An illustrated article about the history of the flour milling industry in Manitoba. From the Manitoba Historical Society.
The Manitoba Museum is the province’s largest heritage centre renowned for its combined human and natural heritage themes. The institution shares knowledge about Manitoba, the world and the universe through its collections, exhibitions, publications, on-site and outreach programs, Planetarium shows and Science Gallery exhibits.
Names of the provinces and territories
Abbreviations and symbols for the names of the provinces and territories. From the website for Natural Resources Canada.
University of Manitoba : Archives & Special Collections
The website for Archives & Special Collections at the University of Manitoba.
North Eastman Region of Manitoba
This site offers profiles of communities in the North Eastman region of Manitoba.
With One Voice: A History of Municipal Governance in Manitoba
A synopsis of a book that covers topics such as daylight saving time, taxes, rural electrification, the impact of gophers and other farm pests, lottery terminals, and more. From the Association of Manitoba Municipalities.
Louis Riel Day
An information page about Manitoba's "Louis Riel Day." Check out the menu on the left side of the page for more on the origins of this holiday. From the website for the Government of Manitoba.
Province loses 'tremendous premier'
An obituary for former Manitoba premier Dufferin Roblin. From the winnipegfreepress.com website.
An Immense Hold in the Public Estimation
A feature article about Manitoba men and women who played hockey in the late 19th and early 20th centuries. From the Manitoba Historical Society.
Field Guide: Native Trees of Manitoba
An online guide to Manitoba’s ecozones and native coniferous and deciduous trees in Manitoba. With photographs showing identifying features of various species and biological keys. A Government of Manitoba website.
Agriculture in French Manitoba
This interactive exhibit features maps, images, and stories about the history of agriculture in French Manitoba. | http://www.thecanadianencyclopedia.com/articles/manitoba | 13 |
Component 1: Ride the Rock Cycle
1. Do Now/Journal Prompt: List the three different types of rocks and describe how each is formed.
2. Students share their responses. The teacher compiles information on a chart to represent the vocabulary visually.
3. Introduce the activity: Students will be a rock going through the rock cycle. They will roll the dice to find out where they go next. Students will fill out the attached organizer (rock cycle worksheet) to collect their data. (A simple computer simulation of this dice activity is sketched at the end of this component's steps.)
4. Break students up into groups (3-4 students per group)
5. Student groups will pick a station slip out of a “hat” to determine where they will begin.
6. Students will roll the dice (Rock cycle dice) at each station and record the data on the rock cycle worksheet. Then they will move to the station the dice directed them to.
7. Once the students have traveled to 12 stations they are done.
8. Students will use the comic strip outline found on the attached worksheet (rock cycle worksheet) to make a comic about their movement through the rock cycle (encourage students to be creative by adding dialogue and thought bubbles to use the domain specific vocabulary)
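For teachers who want to preview the kind of data this activity generates (or to extend it with a computer model), here is a minimal Python sketch of the dice game. The station names and die faces below are illustrative assumptions, not the contents of the actual rock cycle dice, so substitute whatever is printed on the dice you use.

```python
# Hypothetical sketch of the "Ride the Rock Cycle" dice activity.
# Station names and die faces are invented for illustration.
import random

STATION_DICE = {
    "volcano":     ["volcano", "volcano", "soil", "soil", "sediment", "metamorphic"],
    "soil":        ["soil", "river", "river", "sediment", "sediment", "volcano"],
    "river":       ["river", "soil", "sediment", "sediment", "ocean", "ocean"],
    "ocean":       ["ocean", "sediment", "sediment", "sedimentary", "sedimentary", "soil"],
    "sediment":    ["sedimentary", "sedimentary", "sediment", "soil", "river", "ocean"],
    "sedimentary": ["metamorphic", "metamorphic", "sedimentary", "volcano", "soil", "river"],
    "metamorphic": ["volcano", "volcano", "metamorphic", "metamorphic", "sedimentary", "soil"],
}

def ride_the_rock_cycle(n_rolls=12):
    """Simulate one student's journey: start at a random station, roll 12 times."""
    station = random.choice(list(STATION_DICE))
    journey = [station]
    for _ in range(n_rolls):
        station = random.choice(STATION_DICE[station])  # one roll of that station's die
        journey.append(station)
    return journey

if __name__ == "__main__":
    print(" -> ".join(ride_the_rock_cycle()))
```

Running it a few times mirrors the classroom result: no two journeys are alike, yet some stations keep coming up, which feeds directly into the predictability question in Component 2.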
Component 2: Represent Data
1. Do Now/ Journal prompt: Students will need their (rock cycle worksheet) to complete the (rock cycle data collection) sheet that will be used to answer the following questions:
a. Are the life cycles of rocks predictable?
i. State your claim
(example: The life cycles of rocks are/are not predictable).
ii. Use your data to support your claim using locations and environmental factors
(example: Four out of twelve times I was at the volcano station. Tectonic activity was the most frequent.).
iii. Write a conclusion (This shows that...)
2. Students will be assigned the task of representing their data (student choice or teacher choice). You can offer the students multiple graphs to choose from to analyze the effectiveness of each representation, OR students can all be assigned the task of making a pie graph.
3. With that data, students will visit http://nces.ed.gov/nceskids/createagraph/default.aspx to make a pie graph showing the amount of time they spent at each station. Students can also complete the attached extension activity (making a graph with rock cycle data) that guides students to convert the data and make a pie graph using a protractor. (A short code alternative for making the pie graph is sketched after this component's steps.)
4. Students will analyze their own graph and answer one of the following questions (student choice or teacher choice)
a. What are the benefits of using a pie graph to represent this type of data? (possible answer: This data is about the part of a whole)
b. Were you able to obtain more information from the data table or the graph? (possible answer: A data table is more specific. It’s easier to understand a pie graph because it's visual.)
c. Are the life cycles of rocks predictable? (Possible answer: They are not predictable because we all have different data. It is predictable because I kept going back to the volcano station.)
d. What did your data show? (Possible answers: I spent 50 percent of the time in the soil because of deposition)
e. What were some of the environmental factors that you encountered? (I was melted in the core of the Earth.)
5. Students will present their data and their responses to the question. This is a good time to prompt responses that will aid your class in completing the writing component.
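As referenced in step 3 above, here is a code-based alternative to the Create A Graph website: a short matplotlib sketch that turns one student's station tallies into a pie graph. The counts are made-up example data, not results from an actual class.

```python
# Minimal pie-graph sketch for the rock cycle data (example counts only).
import matplotlib.pyplot as plt

station_counts = {   # hypothetical tallies from one rock cycle worksheet (12 stops)
    "Volcano": 4,
    "Soil": 3,
    "River": 2,
    "Ocean": 1,
    "Sediment": 2,
}

plt.pie(list(station_counts.values()), labels=list(station_counts), autopct="%1.0f%%")
plt.title("Time spent at each rock cycle station")
plt.axis("equal")    # keep the pie circular
plt.show()
```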
Component 3: Literacy
6. Students will complete an informational essay
a. See attached task.rubric. (Provide students with transitional phrases and domain specific vocabulary if support is needed)
Are the life cycles of rocks predictable?
i. State your claim (The life cycles of rocks are/are not predictable)
ii. Use your data from Ride the Rock Cycle and the class share.
iii. Write a conclusion that addresses claim and evidence.
Component 4: Design
1. Do Now/ Journal: What are the environmental factors that can affect a rock?
(intended response: erosion/weathering, deposition, cementation/ compaction, heating, pressure, cooling)
2. Students share responses as the teacher records answers to provide a visual.
3. Students are given the task to design their own experiment that will represent the life cycle of a rock.
You must design an experiment that will represent the environmental effects of the life cycle of a rock using a crayon. Identify what you will use to represent each factor. (example: Sharpener, candle, hand friction, cold water).
i. Erosion/ Weathering
iii. Cementation/ Compaction
Students are offered the opportunity to brainstorm this alone, with a partner or in a group.
EXAMPLE: Students will...
break down crayons with a crayon sharpener
deposit the sediments using foil
show heat and pressure with hand friction
show intense heat and pressure with the flame of a candle and a book
cool the “magma” in a cup of water
*visit http://www.geosociety.org/educate/lessonplans/rockcyclelab.pdf to see a model
Component 5: Rock Cycle Simulation
1. Students will be provided the materials that they need to complete their simulation.
2. Students complete the simulation that they designed
3. Students will display their results on the attached Rock cycle simulation display sheet. (have them save a piece of each type of rock before they move on to the next stage)
4. Students will label each sample and will label arrows with the processes (erosion/weathering, deposition, cementation/ compaction, heating, pressure, cooling).
To continue our look at rubrics and models, I offer below a revised version of a dialogue I wrote 20 years ago to attempt to clarify what a rubric is and what differentiates good from not so good rubrics. You can read more in MOD J in Advanced Topics in Unit Design. [I further revised the last third of the dialogue to clear up some fuzziness pointed out to me by a reader]
Just what is a rubric? And why do we call it that?
A rubric is a set of written guidelines for distinguishing between performances or products of different quality. (We would use a checklist if we were looking for something or its absence only, e.g. yes there is a bibliography). A rubric is composed of descriptors for criteria at each level of performance, typically on a four or six point scale. Sometimes bulleted indicators are used under each general descriptor to provide concrete examples or tell-tale signs about what to look for under each descriptor. A good rubric makes possible valid and reliable criterion-referenced judgment about performance.
The word “rubric” derives from the Latin word for “red.” In olden times, a rubric was the set of instructions or gloss on a law or liturgical service — and typically written in red. Thus, a rubric instructs people — in this case on how to proceed in judging a performance “lawfully.”
You said that rubrics are built out of criteria. But some rubrics use words like “traits” or “dimensions.” Is a trait the same as a criterion?
Strictly speaking they are different. Consider writing: “coherence” is a trait; “coherent” is the criterion for that trait. Here’s another pair: we look through the lens of “organization” to determine if the paper is “organized and logically developed.” Do you see the difference? A trait is a place to look; the criterion is what we look for, what we need to see to judge the work successful (or not) at that trait.
Why should I worry about different traits of performance or criteria for them? Why not just use a simple holistic rubric and be done with it?
Because the fairness and feedback may be compromised in the name of efficiency. In complex performance the criteria are often independent of one another: the taste of the meal has little connection to its appearance, and the appearance has little relationship to its nutritional value. These criteria are independent of one another. What this means in practice is that you could easily imagine giving a high score for taste and a low score for appearance in one meal and vice versa in another. Yet, in a holistic scheme you would have to give the two (different) performances the same score. However, it isn’t helpful to say that both meals are of the same general quality.
Another reason to use separate dimensions of performance separately scored is the problem of landing on one holistic score with varied indicators. Consider the oral assessment rubric below. What should we do if the student makes great eye contact but fails to make a clear case for the importance of their subject? Cannot we easily imagine that on the separate performance dimensions of “contact with audience” and “argued-for importance of topic” that a student might be good at one and poor at the other? The rubric would have us believe that these sub-achievements would always go together. But logic and experience suggest otherwise.
Oral Assessment Rubric
- 5 – Excellent: The student clearly describes the question studied and provides strong reasons for its importance. Specific information is given to support the conclusions that are drawn and described. The delivery is engaging and sentence structure is consistently correct. Eye contact is made and sustained throughout the presentation. There is strong evidence of preparation, organization, and enthusiasm for the topic. The visual aid is used to make the presentation more effective. Questions from the audience are clearly answered with specific and appropriate information.
- 4 – Very Good: The student described the question studied and provides reasons for its importance. An adequate amount of information is given to support the conclusions that are drawn and described. The delivery and sentence structure are generally correct. There is evidence of preparation, organization, and enthusiasm for the topic. The visual aid is mentioned and used. Questions from the audience are answered clearly.
- 3 – Good: The student describes the question studied and conclusions are stated, but supporting information is not as strong as a 4 or 5. The delivery and sentence structure are generally correct. There is some indication of preparation and organization. The visual aid is mentioned. Questions from the audience are answered.
- 2 – Limited: The student states the question studied, but fails to fully describe it. No conclusions are given to answer the question. The delivery and sentence structure are understandable, but with some errors. Evidence of preparation and organization is lacking. The visual aid may or may not be mentioned. Questions from the audience are answered with only the most basic response.
- 1 – Poor: The student makes a presentation without stating the question or its importance. The topic is unclear and no adequate conclusions are stated. The delivery is difficult to follow. There is no indication of preparation or organization. Questions from the audience receive only the most basic, or no, response.
- 0 - No oral presentation is attempted.
Couldn’t you just circle the relevant sentences from each level to make the feedback more precise?
Sure, but then you have made it into an analytic-trait rubric, since each sentence refers to a different criterion across all the levels. (Trace each sentence in the top paragraph into the lower levels to see its parallel version and how each paragraph is really made up of separate traits.) It doesn’t matter how you format it – into 1 rubric or many – as long as you keep genuinely different criteria separate.
Given that kind of useful breaking down of performance into independent dimensions, why do teachers and state testers so often do holistic scoring with one rubric?
Because holistic scoring is quicker, easier, and often reliable enough when we are assessing a generic skill quickly like writing on a state test (as opposed, for example, to assessing control of specific genres of writing). It’s a trade-off, a dilemma of efficiency and effectiveness.
What did you mean when you said above that rubrics could affect validity. Why isn’t that a function of the task or question only?
Validity concerns permissible inferences from scores. Tests or tasks are not valid or invalid; inferences about general ability based on specific results are valid or invalid. In other words, from this specific writing prompt I am trying to infer, generally, to your ability as a writer.
Suppose, then, a rubric for judging story-writing places exclusive emphasis on spelling and grammatical accuracy. The scores would likely be highly reliable — since it is easy to count those kinds of errors — but surely it would likely yield invalid inferences about who can truly write wonderful stories. It isn’t likely, in other words, that spelling accuracy correlates with the ability to write in an engaging, vivid, and coherent way about a story (the elements presumably at the heart of story writing.) Many fine spellers can’t construct engaging narratives, and many wonderful story-tellers did poorly in school grammar and spelling tests.
You should consider, therefore, not just the appropriateness of a performance task but of a rubric and its criteria. On many rubrics, for example, the student need only produce “organized” and “mechanically sound” writing. Surely that is not a sufficient description of good writing. (More on this, below).
It’s all about the purpose of the performance: what’s the goal – of writing? of inquiry? of speaking? of science fair projects? Given the goals being assessed, are we then focusing on the most telling criteria? Have we identified the most important and revealing dimensions of performance, given the criteria most appropriate for such an outcome? Does the rubric provide an authentic and effective way of discriminating between performances? Are the descriptors for each level of performance sufficiently grounded in actual samples of performance of different quality? These and other questions lie at the heart of rubric construction.
How do you properly address such design questions?
By focusing on the purpose of performance i.e. the sought-after impact, not just the most obvious features of performers or performances. Too many rubrics focus on surface features that may be incidental to whether the overall result or purpose was achieved. Judges of math problem-solving, for example, tend to focus too much on obvious computational errors; judges of writing tend to focus too much on syntactical or mechanical errors. We should highlight criteria that relate most directly to the desired impact based on the purpose of the task.
I need an example.
Consider joke-telling. The joke could have involved content relevant to the audience, it could have been told with good diction and pace, and the timing of the punch-line could have been solid. But those are just surface features. The bottom-line question relates to purpose: was the joke funny? i.e. did people really laugh?
But how does this relate to academics?
Consider the following impact-focused questions:
- The math solution may have been accurate and thorough, but was the problem solved?
- The history paper may have been well-documented and clearly written, with no mechanical errors, but was the argument convincing? Were the counter-arguments and counter-evidence effectively addressed?
- The poem may have rhymed, but did it conjure up vibrant images and feelings?
- The experiment may have been thoroughly written up, but was the conclusion valid?
It is crucial that students learn that the point of performance is effective/successful results, not just good-faith effort and/or mimicry of format and examples.
So, it’s helpful to consider four different kinds of criteria: impact, process, content, polish. Impact criteria should be primary. Process refers to methods or techniques. Content refers to appropriateness and accuracy of content. Polish refers to how well crafted the product is. Take speaking: many good speakers make eye contact and vary their pitch, in polished ways, as they talk about the right content. But those are not the bottom-line criteria of good speaking, they are merely useful techniques in trying to achieve one’s desired impact (e.g. keeping an audience engaged). Impact criteria relate to the purpose of the speaking — namely, the desired effects of my speech: was I understood? Was I engaging? Was I persuasive? moving? — i.e. whatever my intent, was it realized?
That seems hard on the kid and developmentally suspect!
Not at all. You need to learn early and often that there is a purpose and an audience in all genuine performance. The sooner you learn to think about the key purpose and audience questions – What’s my goal? What counts as success here? What does this audience and situation demand? What am I trying to cause in the end? – the more effective and self-directed you’ll be as a learner. It’s not an accident in Hattie’s research that this kind of metacognitive work yields some of the greatest educational gains.
Are there any simple rules for better distinguishing between valid and invalid criteria?
One simple test is negative: can you imagine someone meeting all the proposed criteria in your draft rubric, but not being able to perform well at the task, given its true purpose or nature? Then you have the wrong criteria. For example, many writing rubrics assess organization, mechanics, accuracy, and appropriateness to topic in judging analytic essays. These are necessary but not sufficient; they don’t get to the heart of the purpose of writing — achieving some effect or impact on the reader. These more surface-related criteria can be met but still yield bland and uninteresting writing. So they cannot be the best basis for a rubric.
But surely formal and mechanical aspects of performance matter!
Of course they do. But they don’t get at the point of writing, merely the means of achieving the purpose — and not necessarily the only means. What is the writer’s intent? What is the purpose of any writing? It should “work” or yield a certain effect on the reader. Huck Finn “works” even though the written speech of the characters is ungrammatical. The writing aims at some result; writers aim to accomplish some response — that’s what we must better assess for. If we are assessing analytic writing we should presumably be assessing something like the insightfulness, novelty, clarity and compelling nature of the analysis. The real criteria will be found from an analysis of the answers to questions about the purpose of the performance.
Notice that these last four dimensions implicitly contain the more formal mechanical dimensions that concern you: a paper is not likely to be compelling and thorough if it lacks organization and clarity. We would in fact expect to see the descriptor for the lower levels of performance addressing those matters in terms of the deficiencies that impede clarity or persuasiveness. So, we don’t want learners to fixate on surface features or specific behaviors; rather, we want them to fixate on good outcomes related to purpose.
Huh? What do you mean by distinguishing between specific behaviors and criteria?
Most current rubrics tend to over-value polish, content, and process while under-valuing the impact of the result, as noted above. That amounts to making the student fixate on surface features rather than purpose. It unwittingly tells the student that obeying instructions is more important than succeeding (and leads some people to wrongly think that all rubrics inhibit creativity and genuine excellence).
Take the issue of eye contact, mentioned above. We can easily imagine or find examples of good speaking in which eye contact wasn’t made: think of the radio! Watch some of the TED talks. And we can find examples of dreary speaking with lots of eye contact being made. Any techniques are best used as “indicators” under the main descriptor in a rubric, i.e. there are a few different examples or techniques that MAY be used that tend to help with “delivery” – but they shouldn’t be mandatory because they are not infallible criteria or the only way to do it well.
Is this why some people think rubrics kill creativity?
Exactly right. BAD rubrics kill creativity because they demand formulaic response. Good rubrics demand great results, and give students the freedom to cause them. Bottom line: if you signal in your rubrics that a powerful result is the goal you FREE up creativity and initiative. If you mandate format, content, and process and ignore the impact, you inhibit creativity and reward safe uncreative work.
But it’s so subjective to judge impact!
Not at all. “Organization” is actually far more subjective and intangible a quality in a presentation than “kept me engaged the whole time” if you think about it. And when you go to a bookstore, what are you looking for in a book? Not primarily “organization” or “mechanics” but some desired impact on you. In fact, I think we do students a grave injustice by allowing them to continually submit (and get high grades!) on boring, dreary papers, presentations, and projects. It teaches a bad lesson: as long as you put the right facts in, I don’t care how well you communicated.
The best teacher I ever saw was a teacher at Portland HS in Portland, Maine, who got his kids to make the most fascinating student oral presentations I have ever heard. How did you do it? I asked. Simple, he said. You got 1 of 2 grades: YES = kept us on the edge of our seats. NO = we lost interest or were bored by it.
Should we not assess techniques, forms, or useful behaviors at all, then?
I didn’t mean to suggest it was a mistake. Giving feedback on ALL the types of criteria is helpful. For example, in archery one might aptly desire to score stance, technique with the bow, and accuracy. Stance matters. On the other hand, the ultimate value of the performance surely relates to its accuracy. In practice that means we can justifiably score for a process or approach, but we should not over-value it so that it appears that results really don’t matter much.
What should you do, then, when using different types of criteria, to signal to the learner what to attend to and why?
You should weight the criteria validly and not arbitrarily. We often, for example, weight equally the varied criteria that we are using (say, persuasiveness, organization, idea development, mechanics) – 25% each. Why? Habit or laziness. Validity demands that we ask: given the purpose and audience, how should the criteria be weighted? A well-written paper with little that is interesting or illuminating should not get really high marks – yet using many current writing rubrics, the paper would, because the criteria are weighted equally and impact is not typically scored.
Beyond this basic point about assigning valid weights to the varied criteria, the weighting can vary over time, to signal that your expectations as a teacher properly change once kids get that writing, speaking, or problem solving is about purposeful effects. E.g. accuracy in archery may be appropriately worth only 25% when scoring a novice, but 100% when scoring archery performance in competition.
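To make the weighting point concrete, here is a small, purely hypothetical scoring sketch; the papers, scores (on a 1-6 scale), and weights are all invented. Two papers scored on the same four criteria swap rank once impact (here, persuasiveness) is weighted more heavily than polish:

```python
# Hypothetical illustration of how criterion weights change outcomes.
papers = {
    "Paper A": {"persuasiveness": 3, "organization": 6, "idea development": 5, "mechanics": 6},
    "Paper B": {"persuasiveness": 6, "organization": 4, "idea development": 5, "mechanics": 4},
}

equal_weights  = {"persuasiveness": 0.25, "organization": 0.25, "idea development": 0.25, "mechanics": 0.25}
impact_weights = {"persuasiveness": 0.50, "organization": 0.20, "idea development": 0.20, "mechanics": 0.10}

def weighted_total(scores, weights):
    return sum(scores[criterion] * weights[criterion] for criterion in weights)

for name, scores in papers.items():
    print(name,
          "| equal weights:", round(weighted_total(scores, equal_weights), 2),
          "| impact-weighted:", round(weighted_total(scores, impact_weights), 2))
# Paper A edges ahead under equal weights; Paper B wins once impact is weighted more heavily.
```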
Given how complex this is, why not just say that the difference between the levels of performance is that if a 6 is thorough or clear or accurate, etc. then a 5 is less thorough, less clear or less accurate than a 6? Most rubrics seem to do that: they rely on a lot of comparative (and evaluative) language.
Alas, you’re right. This is a cop-out – utterly unhelpful to learners. It’s ultimately lazy to just use comparative language; it stems from a failure to provide a clear and precise description of the unique features of performance at each level. And the student is left with pretty weak feedback when rubrics rely heavily on words like “less than a 5” or “a fairly complete performance” — not much different than getting a paper back with a letter grade.
Ideally, a rubric focuses on discernible and useful empirical differences in performance; that way the assessment is educative, not just measurement. Too many such rubrics end up being norm-referenced tests in disguise, in other words, where judges fail to look closely at the more subtle but vital features of performance. Mere reliability is not enough: we want a system that can improve performance through feedback.
Compare the following excerpt from the ACTFL guidelines with a social studies rubric below it to see the point: the ACTFL rubric is rich in descriptive language which provides insight into each level and its uniqueness. The social studies rubric never gets much beyond comparative language in reference to the dimensions to be assessed (note how the only difference between each score point is a change in one adjective or a comparative):
- Novice-High: Able to satisfy immediate needs using learned utterances… can ask questions or make statements with reasonable accuracy only where this involves short memorized utterances or formulae. Most utterances are telegraphic, and errors often occur when word endings and verbs are omitted or confused… Speech is characterized by enumeration, rather than by sentences. There is some concept of the present tense forms of regular verbs particular -ar verbs, and some common irregular verbs… There is some use of articles, indicating a concept of gender, although mistakes are constant and numerous…
- Intermediate-High: Able to satisfy most survival needs and limited social demands. Developing flexibility in language production although fluency is still uneven. Can initiate and sustain a general conversation on factual topics beyond basic survival needs. Can give autobiographical information… Can provide sporadically, although not consistently, simple directions and narration of present, past, and future events, although limited vocabulary range and insufficient control of grammar lead to much hesitation and inaccuracy…. Has basic knowledge of the differences between ser and estar, although errors are frequent…. Can control the present tense of most regular and irregular verbs…. Comprehensible to native speakers used to dealing with foreigners, but still has to repeat utterances frequently to be understood by general public.
Compare those rich descriptors and their specificity to this vagueness in the social studies rubric from a Canadian provincial exam:
- The examples or case studies selected are relevant, accurate, and comprehensively developed, revealing a mature and insightful understanding of social studies content.
- The examples or case studies selected are relevant, accurate, and clearly developed, revealing a solid understanding of social studies content.
- The examples or case studies selected are relevant and adequately developed but may contain some factual errors. The development of the case studies/examples reveals an adequate understanding of social studies content.
- The examples or cases selected, while relevant, are vaguely or incompletely developed, and/or they contain inaccuracies. A restricted understanding of social studies is revealed.
- The examples are relevant, but a minimal attempt has been made to develop them, and/or the examples contain major errors revealing a lack of understanding of content.
What’s the difference between insightful, solid, and adequate understanding? We have no idea from the rubric (which harkens back to the previous post: the only way to find out is to look at the sample papers that anchor the rubrics.)
Even worse, though, is when rubrics turn qualitative differences into arbitrary quantitative differences.
What do you mean?
A “less clear” paper is obviously less desirable than a “clear” paper (even though that doesn’t tell us much about what clarity or its absence look like), but it is almost never valid to say that a good paper has more facts or more footnotes or more arguments than a worse paper. A paper is never worse because it has fewer footnotes; it is worse because the sources cited are somehow less appropriate or illuminating. A paper is not good because it is long but because it has something to say. There is a bad temptation to construct descriptors based on easy to count quantities instead of valid qualities.
The rubric should thus always describe “better” and “worse” in tangible qualitative terms in each descriptor: what specifically make this argument or proof better than another one? So, when using comparative language to differentiate quality, make sure at least that what is being compared is relative quality, not relative arbitrary quantity. | http://grantwiggins.wordpress.com/2013/02/05/on-rubrics-and-models-part-2-a-dialogue/ | 13 |
Origin and Evolutionary Relationships of Giant Galápagos Tortoises
Adalgisa Caccone, James P. Gibbs, Valerio Ketmaier, Elizabeth Suatoni, and Jeffrey R. Powell. Proc Natl Acad Sci USA, 1999 November 9; 96(23):13223-13228
Giant tortoises, up to 5 feet in length, were widespread on all continents except Australia and Antarctica before and during the Pleistocene (3, 4). Now extinct from large landmasses, giant tortoises have persisted through historical times only on remote oceanic islands: the Galápagos, Seychelles, and Mascarenes. The tortoises of the Mascarenes are now extinct; the last animal died in 1804 (5). The tortoises of the Seychelles are represented by a single surviving population on the Aldabra atoll. Only in the Galápagos have distinct populations survived in multiple localities. The Galápagos tortoises remain the largest living tortoises (up to 400 kg) and belong to a pantropical genus of some 21 species (6).
The Galápagos Islands are volcanic in origin; the oldest extant island in the eastern part of the archipelago is less than 5 million years (myr) old (7); volcanic activity is ongoing, especially on the younger western islands. Because the archipelago has never been connected to the mainland, tortoises probably reached the islands by rafting from South America, 1000 km to the east. The Humboldt Current travels up the coast of Chile and Peru before diverting westward at Equatorial latitudes corresponding to the Galápagos Archipelago. Three extant species of Geochelone exist on mainland South America and are therefore the best candidates for the closest living relative of the Galápagos tortoises: Geochelone denticulata, the South American yellow-footed tortoise; Geochelone carbonaria, the South American red-footed tortoise; and Geochelone chilensis, the Chaco tortoise.
Within the archipelago, up to 15 subspecies (or races) of Galápagos tortoises have been recognized, although only 11 survive to the present (2, 8). Six of these are found on separate islands; five occur on the slopes of the five volcanoes on the largest island, Isabela (Fig. 1). Several of the surviving subspecies of Galápagos tortoises are seriously endangered. For example, a single male nicknamed Lonesome George represents G. nigra abingdoni from Pinta Island. The decline of the populations began in the 17th century when buccaneers and whalers collected tortoises as a source of fresh meat; the animals can survive up to six months without food or water. An estimated 200,000 animals were taken (2). More lastingly, these boats also introduced exotic pests such as rats, dogs, and goats. Today, these feral animals, along with continued poaching, represent the greatest threat to the survival of the tortoises.
The designated subspecies differ in a number of morphological characters, such as carapace shape (domed vs. saddle-backed), maximum adult size, and length of the neck and limbs. These differences do not, however, permit clear discrimination between individuals of all subspecies (9). Similarly, an allozyme survey that included seven G. nigra subspecies and the three South American Geochelone failed to reveal patterns of genetic differentiation among the subspecies or to identify any of the mainland species as the closest living relative to the Galápagos tortoises (10). A robust phylogeny of the Galápagos tortoise complex and its relatives is thus unavailable currently, and it is much needed to help resolve the long-term debate over the systematics of this group, as well as to clarify subspecies distinctiveness as a basis for prioritizing conservation efforts.
DNA was extracted from blood stored in 100 mM Tris/100 mM EDTA/2% SDS buffer by using the Easy DNA extraction kit (Invitrogen). Modified primer pair 16Sar+16Sbr (12) was used for PCR amplifications of 568 bp of the 16S rRNA gene. A 386-bp-long fragment of the cytochrome b (cytb) gene was amplified by using the cytb GLU: 5'-TGACATGAAAAAYCAYCGTTG (13) and cytb B2: H15149 (14) primers. The D-loop region was amplified with primers based on conserved sequences of the cytb and 12S rRNA genes, which flank the D loop in tortoises. Primer GT12STR (5'-ATCTTGGCAACTTCAGTGCC-3') is at the 5' end of the 12S ribosomal gene, and primer CYTTOR (5' GCTTAACTAAAGCACCGGTCTTG-3') is at the 3' end of the cytb gene. These primers amplify the D loop from several Geochelone species (unpublished observations). Internal primers specific to the D loop of G. nigra were used to amplify and sequence a 708-bp fragment of the D loop (corresponding to 73.7% of the region). Internal primer sequences are available from the senior author upon request. Double-stranded PCR amplifications and automated sequencing were carried out as described (11). To promote accuracy, strands were sequenced in both directions for each individual.
In addition to blood from live animals, we also obtained samples of skin from three tortoises collected on Pinta Island in 1906 and now in the California Academy of Science, San Francisco (specimen numbers CAS 8110, CAS 8111, and CAS 8113). One-half gram of skin was surface-cleaned with sterile water and subjected to 20 min of UV irradiation. The skin was pulverized in liquid nitrogen and suspended in buffer A of the Easy DNA kit. Proteinase K (100 µg/ml) was added and the sample was incubated for 24 hr at 58°C, following the Easy DNA procedure with the addition of a second chloroform extraction. The samples were washed in a Centricon 30 microconcentrator (Amicon) and suspended in 100 µl of 10 mM Tris/1 mM EDTA, pH 8.0. Only one skin sample was extracted at a time. Several rounds of PCR were performed, finally yielding four fragments of about 150 bp each, representing about 75% of the sequence obtained from blood samples. All procedures on the skin samples (until PCR products were obtained) were done in a room separate from that where all other DNA work was done.
Because of the high sequence similarity, sequences were aligned by eye. The alignment was also checked by using CLUSTAL W (15). Alignments are available from the first author. Phylogenetic analyses were carried out on each gene region and on the combined data set. G. pardalis was used as the outgroup. Phylogenetic inferences were made by using maximum parsimony (MP) (16), maximum likelihood (ML) (17), and neighbor joining (NJ) (18). MP trees were reconstructed by the branch-and-bound search method (19) with ACCTRAN (accelerated transformation) character-state optimization as implemented in PAUP* (20). Various weighting methods were used: all substitutions unweighted, transversions (Tv) weighted 3 times transitions (Ti), or only Tv. For cytb, MP analyses were also performed, excluding Ti from third positions of all codons. ML analyses were carried out using PAUP* with an empirically determined transition/transversion ratio (9.19) and rates were assumed to follow a gamma distribution with an empirically determined shape parameter (α = 0.149). Sequences were added randomly, with 1000 replicates and TBR as the branch-swapping algorithm. For the NJ analysis, ML distances were calculated by PAUP* with the empirically determined gamma parameter. PAUP* was used to obtain NJ trees based on those distance matrices.
The incongruence length difference test (21) was carried out as implemented in PAUP* (in which it is called the partition homogeneity test). As suggested by Cunningham (22), invariant characters were always removed before applying the test. Templeton's (23) test was used to compare competing phylogenetic hypotheses statistically, by using the conservative two-tailed Wilcoxon rank sum test (24). The significance of branch length in NJ trees was tested by using the confidence probability (CP) test as implemented in MEGA (25). The strength of support for individual nodes was tested by the bootstrap method (26) with 1,000 (MP and NJ) or 300 (ML) pseudoreplicates. Rate homogeneity among lineages was tested by Tajima's one-degree-of-freedom method (27).
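As a rough illustration of the distance-based portion of such a workflow (the analyses reported here were run in PAUP* and MEGA, not in the tool shown), a minimal Biopython sketch might look like the following. The alignment filename is a placeholder, and simple p-distances stand in for the ML distances described above.

```python
# Illustrative sketch only; not the authors' pipeline.
# Assumes an aligned FASTA file "tortoise_mtdna.fasta" (placeholder name).
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("tortoise_mtdna.fasta", "fasta")   # e.g. cytb + 16S alignment

calculator = DistanceCalculator("identity")   # simple p-distances, not ML distances
dm = calculator.get_distance(alignment)

constructor = DistanceTreeConstructor()
nj_tree = constructor.nj(dm)                  # neighbor-joining topology

Phylo.draw_ascii(nj_tree)                     # quick text rendering of the tree
```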
Fig. 2 shows the 50% majority rule consensus tree for MP generated from the cytb and 16S rRNA data combined, by using a branch-and-bound search. There are 167 variable sites, of which 66 are parsimony-informative; there were 12 MP trees of equal length (196 steps), with a consistency index of 0.6667 (excluding uninformative characters). We emphasize that all three reconstruction methods, ML, MP, and NJ, produced very similar topologies, as did all weightings of transitions and transversions; all of the lettered nodes in Fig. 2 were found in all cases. When multiple tree reconstruction methods produce nearly the same tree, there is more confidence in the accuracy of the tree (28). Table 2 presents the statistical analysis of the well-supported nodes. We were particularly interested in identifying the closest extant relative of the Galápagos tortoises; we therefore performed other tests to ask whether alternative trees are statistically worse than are those in Fig. 2. Table 3 presents the results of these tests. Constraining one of the other mainland South American species to be the sister taxon to the G. nigra, or using the three mainland species as a trichotomy produced significantly less parsimonious trees by Templeton's (23) test, even with the relatively conservative two-tailed Wilcoxon rank sum test (24). For the NJ tree, the crucial branch separating the chilensis/nigra clade from the other South American species is significant at the 98% level by the confidence probability test in MEGA (25).
Estimates of genetic distances also support the sister taxa status of G. chilensis and G. nigra. Among the subspecies of G. nigra, the maximum likelihood distances range from 0 to 0.0124 with a mean of 0.0066 ± 0.004 (SD). Between subspecies of G. nigra and G. chilensis, the average distance is 0.0788 ± 0.005. Between G. nigra and G. carbonaria or G. denticulata, ML distances are 0.118 ± 0.005 and 0.116 ± 0.003, respectively.
Fig. 2 also reveals some resolution of the relationships among the Galápagos subspecies. One point of interest is that the five named subspecies on Isabela do not form a monophyletic clade. The four southern Isabela subspecies are sister taxa to the subspecies from Santa Cruz, whereas the northernmost subspecies, G. n. becki, is the sister taxon to G. n. darwini on San Salvador. It is a geographically reasonable scenario for southern Isabela to be colonized from Santa Cruz and northern Isabela to be colonized from San Salvador (Fig. 1).
There is virtually no evidence for genetic differentiation among the four southern Isabela subspecies. The cytb sequence is identical in all individuals sampled. There are only three differences in the 16S rRNA sequence among the eight samples of these four named subspecies. We have also sequenced what is generally the fastest evolving region of mtDNA, the D loop, in individuals from these four subspecies to test whether this region gives evidence of genetic differentiation (Fig. 3). Only 17 of the 708 sites varied among the 23 individuals sequenced, and there were seven equally most parsimonious trees. The tree is only 23 steps long for the 23 sequences, with only seven nodes having bootstrap values above 50%. The only subspecies for which there is some evidence of a monophyletic clade is G. n. microphyes, but only two individuals have been studied and the bootstrap for this clade is not strong (Fig. 2). Furthermore, trees with G. n. microphyes constrained to not be monophyletic are two steps longer and not significantly worse than the MP tree by Templeton's (23) test, nor is the branch leading to the two G. n. microphyes statistically significant by the confidence probability test. We conclude that there is little or no evidence for significant genetic differentiation corresponding to the four southernmost named subspecies from Isabela. (Genetic differentiation of the other subspecies is addressed under Discussion.)
One surprise was the very close relationship of Lonesome George, the sole representative of the G. n. abingdoni subspecies from Pinta, to the subspecies from San Cristóbal and Española (Fig. 2). For cytb and 16S rRNA, the samples from Española and Lonesome George are identical, whereas there is one transition difference in the samples from San Cristóbal. To check whether this sole survivor could have been a recent transplant to Pinta, we obtained samples of skin from three animals collected on Pinta in 1906. Although we could obtain only about 75% of the sequence that we had for the other samples, these segments of the cytb and 16S rRNA are identical to those from Lonesome George; this 75% of the sequences contains all the synapomorphies that place Lonesome George in the San Cristóbal/Española clade.
Although G. chilensis is the closest living relative of the Galápagos tortoise, it is unlikely that the direct ancestor of G. nigra was a small-bodied tortoise. Several lines of reasoning (for review, see ref. 2) suggest that gigantism was a preadapted condition for successful colonization of remote oceanic islands, rather than an evolutionary trend triggered by the insular environment. Giant tortoises colonized the Seychelles at least three separate times (29). Fossil giant tortoises are known from mainland South America, and morphological analyses of these and extant species are consistent with a clade containing giant tortoise fossils and G. chilensis (30).
Further evidence that the split between the ancestral lineages that gave rise to G. chilensis and G. nigra occurred on mainland South America comes from time estimates based on a molecular clock. We applied the Tajima (27) test of the clocklike behavior of DNA sequences to pairwise comparisons between G. chilensis and Galápagos subspecies, using in turn G. carbonaria and G. denticulata as the outgroup. The tests were done on transitions and transversions together, and on transversions only. We could not reject the hypothesis of constant substitution rates for the vast majority (94%) of comparisons for both genes. Therefore, we assumed that the 16S rRNA and cytb genes were evolving linearly with time. To calculate approximate divergence times between the lineages, we used published mtDNA rates estimated from turtles and other vertebrate ectotherms (31–33). Depending on which estimate and gene are used, the predicted time of the split between G. nigra and G. chilensis varies, but most put the date between 6 and 12 myr ago. The oldest extant islands (San Cristóbal and Española) date to less than 5 myr (7), although sea mounts now submerged may have formed islands more than 10 myr ago (34). However, given the existence of mainland giant fossils and the argument that gigantism was required for long distance rafting, invoking colonization on now submerged islands would seem less reasonable than a split on the mainland before colonization, with the immediate ancestral lineage now extinct. The oldest split within G. nigra is estimated at no more than 2 myr ago, consistent with diversification on the existing islands.
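The arithmetic behind such clock estimates is simple enough to sketch. The snippet below uses the chilensis/nigra ML distance of 0.0788 reported above; the per-lineage substitution rates are illustrative assumptions spanning the sort of ectotherm mtDNA rates in the literature, not the exact values of refs. 31–33.

```python
# Back-of-the-envelope molecular clock: T = d / (2r), where d is the pairwise
# distance and r is the assumed substitution rate per lineage.
ml_distance = 0.0788   # substitutions per site between G. chilensis and G. nigra

for rate in (0.0035, 0.005, 0.0065):   # assumed rates, substitutions/site/myr per lineage
    divergence_myr = ml_distance / (2 * rate)
    print(f"rate {rate}: split ~{divergence_myr:.1f} myr ago")
# Rates in this range place the split roughly 6-11 myr ago, consistent with
# the 6-12 myr window quoted in the text.
```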
Times of divergence and colonization of other prominent Galápagos organisms have been estimated by molecular data. The diversification of Darwin's finches has been estimated to have occurred within the age of the extant islands (35). On the other hand, the endemic marine (Conolophorus) and land (Amblyrhyncus) iguanas are estimated to have diverged from each other between 10 and 20 myr ago (36, 37). As argued by Rassmann (37), it is likely that the split occurred on the Archipelago; therefore, it must have occurred on now-submerged islands. Similarly the diversification of the lava lizards (Tropidurus) and geckos (Phyllodactylus) was estimated to have begun around 9 myr ago, although in this case, there is some evidence indicating multiple colonizations (38, 39).
Taxonomic Status of Isabela Subspecies. From Fig. 2, it seems clear that the largest and youngest island with tortoise populations, Isabela, was colonized at least twice independently. The four southern subspecies are sister taxa to the Santa Cruz subspecies (G. n. porteri), whereas the subspecies on the northernmost volcano (G. n. becki) is sister to the subspecies (G. n. darwini) on San Salvador. We have found no significant genetic differentiation among the four southern Isabela subspecies (microphyes, vandenburghi, guntheri, and vicina), even for what should be the fastest evolving region of mtDNA (Fig. 3). The lack of genetic differentiation is perhaps not surprising in light of the age of the Isabela volcanoes, estimated to be less than 0.5 myr (7). For colonization by tortoises, most volcanic activity must have ceased and sufficient time must have passed for appropriate vegetation to develop. Given this relatively short time, coupled with long generation time [age of first reproduction is over 20 years (8)], significant genetic differentiation among these populations is unlikely. The genetic distinctness of the population on the northernmost volcano is accounted for by an independent colonization from another island.
The lack of genetic differentiation of these four Isabela subspecies is consistent with the morphological assessment of at least one authority. Pritchard (2) suggested that the four southern Isabela subspecies do not warrant separate subspecific status, but rather the described differences are either attributable to environmental differences (especially of rainfall, food availability, and humidity), or do not show geographic correlation, but are artifacts of age and sex. This, coupled with our results, would seem to warrant a reassessment of the taxonomic status of these subspecies.
The data presented here also indicate little or no genetic differentiation between or among subspecies connected to nodes c, d, and e in Fig. 2. However, faster evolving regions of the mtDNA do reveal diagnostic differences among all subspecies (unpublished data) with the exception of the four southern Isabela populations, for which none of our data indicate geographically structured differentiation. Because a major purpose of the present study was to identify the mainland sister taxon to the Galápagos lineage, we emphasize here relatively slowly evolving regions. The molecular diagnoses of subspecies, based on larger sample sizes than are available now, should be addressed in the near future.
Lonesome George. Perhaps the greatest surprise in our data was the close relationship of the single living representative of the G.n. abingdoni subspecies from Pinta to subspecies on Española and San Cristóbal. Most other relationships make biogeographic sense. The three well supported nodes in Fig. 2 (c, d, and e) all connect subspecies on islands geographically close to one another (Fig. 1). Pinta is the farthest major island from Española and San Cristóbal, being about 300 km distant. One possibility is that Lonesome George actually did originate on Española or San Cristóbal and was transported to Pinta. Morphologically, all three subspecies are considered saddle-backed, although subtle differences among them have been noted (2). Fortunately, we had available to us skin samples from three specimens collected on Pinta in 1906. The DNA sequences we obtained from these skins are identical to those of Lonesome George. Thus, it is reasonable to conclude that Lonesome George is the sole (known) living survivor of this subspecies.
Although based solely on geographic distance, it seems unlikely that the Pinta subspecies should be so closely related to those from Española and San Cristóbal, consideration of oceanic currents makes it plausible. There is a strong current running northwest from the northern coast of San Cristóbal leading directly to the area around Pinta (40). These tortoises are not strong swimmers and thus their direction of rafting in the ocean must have depended largely on currents.
Attempts to breed Lonesome George have been unsuccessful. However, he has been placed with females primarily from northern Isabela because, given its proximity, it was thought to be the most likely origin of the Pinta population (Fig. 1). Now that we see he has close genetic affinities to the Española and San Cristóbal subspecies, perhaps they would be a more appropriate source of a mate for this sole survivor.
Copyright © 1999, The National Academy of Sciences
Matt Strassler [February 25, 2013]
The nucleus of an atom forms its tiny core, with a radius 10,000 to 100,000 times smaller than that of the atom itself. Each nucleus contains a certain number (which we’ll call “Z”) of protons and a certain number (which we’ll call “N”) of neutrons, clumped together into a ball not much larger than the protons and neutrons themselves. Note that protons and neutrons are often collectively called “nucleons”, and Z+N is often called “A”, the total number of nucleons in a nucleus. [Recall that Z, called the ``atomic number'', is also the number of electrons in the atom.]
The typical cartoon drawing of an atom (Figure 1) greatly exaggerates the size of the nucleus, but it does represent the nucleus more or less properly as a loosely combined cluster of protons and neutrons.
The Contents of a Nucleus
How do we know what a nucleus contains? These tiny objects are (and were, historically) simple to characterize because of three facts of nature.
1. a proton and a neutron differ in mass by only about one part in a thousand, so if we’re not being too precise, we can say that all nucleons have essentially the same mass, which we’ll call the nucleon mass, m_nucleon;
- m_proton ≈ m_neutron ≈ m_nucleon
(where the “≈” symbol means “is approximately equal to”).
2. the amount of energy required to keep the protons and neutrons together in the nucleus is relatively small, only about one part in a thousand of the mass-energy (E=mc² energy) of the protons and neutrons, so the mass of a nucleus is almost equal to the sum of the masses of its nucleons;
- M_nucleus ≈ (Z+N) × m_nucleon
3. an electron’s mass is only 1/1835 as big as a proton’s — so an atom’s mass is almost entirely in its nucleus;
- M_atom ≈ M_nucleus
[Implicitly there's an equally important fourth fact: all atoms of a particular isotope of a particular element are identical, as are all of their electrons, their protons, and their neutrons.]
Since the most common isotope of hydrogen has one electron and one proton,
- M_hydrogen ≈ m_proton ≈ m_nucleon
the mass M_atom of an atom of a particular isotope is simply Z+N times the mass of a hydrogen atom,
- M_atom ≈ M_nucleus ≈ (Z+N) × m_nucleon ≈ (Z+N) × M_hydrogen
where these equations are accurate to about 0.1%.
Meanwhile, since neutrons are electrically neutral, the electric charge Q_nucleus on a nucleus is just the number of protons times the electric charge (called “e”) of a proton
- Q_nucleus = Z × Q_proton = Z × e
Unlike the previous equations, this equation is exact, with no corrections.
- Z = Q_nucleus / e
- A = Z + N ≈ M_atom / M_hydrogen
These equations are illustrated in Figure 2.
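As a worked example of how these two relations are used, the short calculation below recovers Z and A for carbon-12 from a measured nuclear charge and atomic mass. The numerical constants are standard values assumed for this sketch; they are not quoted in the article.

```python
# Worked example of Z = Q_nucleus / e and A ≈ M_atom / M_hydrogen, for carbon-12.
e          = 1.602e-19   # proton charge, coulombs (assumed standard value)
M_hydrogen = 1.674e-27   # mass of a hydrogen atom, kilograms (assumed standard value)

Q_nucleus = 9.613e-19    # measured charge of a carbon nucleus, coulombs
M_atom    = 1.993e-26    # measured mass of a carbon-12 atom, kilograms

Z = Q_nucleus / e        # number of protons
A = M_atom / M_hydrogen  # total number of nucleons (approximate)
N = round(A) - round(Z)  # number of neutrons

print(f"Z ≈ {Z:.2f} -> {round(Z)} protons")
print(f"A ≈ {A:.2f} -> {round(A)} nucleons, so N = {N} neutrons")
```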
Experimentally, using discoveries of the late decades of the 19th century and the early decades of the 20th, physicists knew how to measure both of the quantities in red: the charge of a nucleus relative to e, and the mass of any atom relative to that of hydrogen. [Here's one way (article coming soon) to do this and use it to confirm the structure of atoms and their nuclei, in case you'd like to know.] So these quantities were known by the 1910s. However, they were not interpreted properly until 1932, when the neutron (the idea of which had been suggested by Ernest Rutherford in 1920) was identified as an independent particle, by James Chadwick. But once it was understood that neutrons existed, and that the mass of a neutron was almost the same as that of a proton, it instantly became clear how to interpret Z and N as the numbers of protons and neutrons. [It also immediately raised a new puzzle: why do protons and neutrons have almost the same mass? We'll return to this question in later articles.]
Honestly, physicists of that time period were very lucky, from the scientific point of view, that things were so easy to figure out. The patterns of masses and charges are so simple that most lingering confusions were quickly eliminated once the neutron was discovered. If any one of the four facts of nature I listed hadn’t been true, then it might have taken a long time to work out what was going on inside atoms and their nuclei.
Unfortunately, in other ways it would have been much better if things were more complicated. The timing of this scientific breakthrough could hardly have been worse. The discovery of the neutron and the understanding of the structure of nuclei coincided with the international economic crisis often called the Great Depression, and with the rise of several authoritarian and expansionist governments in Europe and Asia. A race among leading scientific countries to understand and obtain energy and weapons from nuclei quickly began; reactors generating nuclear power were obtained within a mere ten years, and within thirteen, nuclear explosives. Today we live with the consequences.
How Do We Know Nuclei are Small?
It’s one thing to convince ourselves that a particular nucleus of a particular isotope contains Z protons and N neutrons; it’s another altogether to convince ourselves that nuclei are tiny and that the protons and neutrons, rather than being crushed together into a pulp or stirred into a stew, more or less retain their integrity, as the cartoon nucleus suggests. How can we confirm these things?
I’ve mentioned before that atoms are mostly empty. There’s an easy way to see this. Think about a piece of aluminum foil; you can’t see through it. Since it is opaque, you might think the atoms of the aluminum are
- so large that there are no gaps between them, and
- so dense, thick and solid that there’s no way for light to sneak through.
Well, you’d be right about there being no gaps; in a solid there’s hardly any space between one atom and the next. You can see this in the pictures of atoms obtained using special microscopes; the atoms are like little spheres (their edges being the edges of their electron clouds), and they are pretty tightly packed together. But you’d be wrong about atoms being impenetrable.
If atoms were impenetrable, then nothing could get through the aluminum foil… not visible light photons, not X-ray photons, not electrons, not protons, not atomic nuclei. Anything you aimed at the foil would get stuck in the foil or bounce off, just the way a thrown object has to bounce off or stick to a plaster wall (Figure 3). But in fact, high-energy electrons can easily go through a piece of aluminum foil; so can X-ray photons, high-energy protons, high-energy neutrons, high-energy nuclei, and so on. The electrons and the other particles — almost all of them, to be precise — can go through the material without losing any energy or momentum in a collision with something inside one of the atoms. Only a very small fraction of them will hit an atomic nucleus or an electron head on, in which case they may lose a lot of their initial motion-energy. But the majority of the electrons, protons, neutrons, X-rays and the like just sail right through (Figure 4). It’s not like throwing pebbles at a plaster wall; it is like throwing pebbles at a chicken-wire fence (Figure 5).
The thicker you make the foil — for instance, if you stack more and more pieces of foil together — the more likely it becomes that the particles you are firing at the aluminum will hit something, lose energy, bounce back, change direction, or perhaps even stop. The same would be true if you had layer upon layer of chicken-wire fencing (Figure 6). And just as you could figure out, from how far an average pebble could make it through the layered fences, how big the gaps are in the fence, scientists can work out, from the distance traveled by electrons or atomic nuclei through matter, how empty an atom is.
Through experiments of this type, physicists in the early 20th century figured out that nothing inside an atom — neither atomic nuclei nor electrons — can be more than a thousandth of a millionth of a millionth of a meter across, 100,000 times smaller in radius than atoms themselves. (The fact that it is the nuclei that are of this size, and that electrons are at least 1000 times smaller, is something that we learn from other experiments, such as those in which high-energy electrons scatter off each other, or off of anti-electrons [positrons].)
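A back-of-the-envelope calculation makes this emptiness vivid. Using rough, assumed numbers (they are not taken from the article) for the size of an aluminum atom, its nucleus, and a sheet of kitchen foil, one can estimate what fraction of the foil's cross-sectional area is actually blocked by nuclei:

```python
# Rough estimate of how "empty" aluminum foil looks to an incoming particle.
# All numbers are order-of-magnitude assumptions for this sketch.
r_atom         = 1.4e-10   # approximate radius of an aluminum atom (m)
r_nucleus      = 4e-15     # approximate radius of an aluminum nucleus (m)
foil_thickness = 16e-6     # typical kitchen foil, about 16 micrometers (m)

layers            = foil_thickness / (2 * r_atom)   # number of atomic layers
blocked_per_layer = (r_nucleus / r_atom) ** 2       # nuclear area / atomic area
blocked_total     = layers * blocked_per_layer      # chance of a head-on nuclear hit

print(f"~{layers:.0f} atomic layers")
print(f"fraction of area blocked by nuclei: ~{blocked_total:.1e}")
# Even through tens of thousands of atomic layers, only about one particle in
# every few tens of thousands scores a direct hit on a nucleus.
```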
[To be even more precise, I should mention that some of the particles will lose a bit of energy in a process called "ionization", in which the electric force between the incoming particle and an electron may strip the electron off its atom. This is a long-distance effect, and is not really a collision. The resulting loss of energy is an important effect for incoming electrons, but not for incoming nuclei.]
You might wonder whether the reason the particles are passing through the foil is the same reason that bullets can pass through paper, by pushing the paper out of the way. Maybe the first few particles are just pushing the atoms aside, leaving big holes for the rest of the particles to pass through? The reason we know that’s not the case is that we can do the experiment where the particles pass into and/or out of a container made of metal or glass, inside of which is a vacuum. If, as the particles passed through the container’s walls, they created holes larger than atoms, air molecules would rush in, and the vacuum would be lost. But when an experiment like this is done, the vacuum remains intact!
It’s also not too hard to tell that a nucleus is a loosely organized clump, inside of which the nucleons retain their identity. We can already guess this from the fact that the mass of a nucleus is very nearly the sum of the masses of the protons and neutrons that it contains. This fact is true of atoms and of molecules too — their masses are almost equal to the sum of the masses of their constituents, except for a small correction due to something called “binding energy” [not an essential concept in this article] — and this is reflected in the fact that it is relatively easy to break molecules into their atoms (for example, by heating them so that they bang into each other more forcefully) and to break electrons off of atoms (again by heating.) In a similar way, it is relatively easy to break nuclei into pieces, a process called “fission”, or to assemble nuclei from smaller nuclei or individual nucleons, a process called “fusion”. For example, relatively slow-moving protons or small nuclei crashing into a larger nucleus can break that nucleus apart; there’s no need for the impinging particles to be moving near the speed of light.
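As a concrete illustration of how nearly the mass of a nucleus equals the sum of its parts, here is a quick check for helium-4, using standard textbook values for the masses; the specific numbers are an added illustration, not something quoted in the discussion above.

```python
# Masses in MeV/c^2 (standard textbook values, rounded to three decimal places)
proton_mass  = 938.272
neutron_mass = 939.565
helium4_mass = 3727.379   # the helium-4 nucleus: 2 protons and 2 neutrons bound together

sum_of_parts   = 2 * proton_mass + 2 * neutron_mass
binding_energy = sum_of_parts - helium4_mass

print(f"sum of constituent masses: {sum_of_parts:.1f} MeV")
print(f"actual helium-4 mass:      {helium4_mass:.1f} MeV")
print(f"binding energy:            {binding_energy:.1f} MeV, "
      f"about {100 * binding_energy / sum_of_parts:.1f}% of the total")
# The ~28 MeV of binding energy is well under 1% of the total, which is why
# "mass of a nucleus ≈ sum of the masses of its protons and neutrons" works so well.
```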
To understand that this isn’t inevitable, let me mention that we’ll see that these properties are not true of protons and neutrons themselves. The proton mass is not roughly equal to the sum of the masses of the objects that it contains; there is no way to break the proton into pieces; and to get the proton to do anything interesting requires energies comparable to the mass-energy of the proton itself. Molecules, atoms and nuclei are comparatively simple; the proton and the neutron are extremely complex.
Now that we know a nucleus is tiny, we have to ask an obvious question: why is it so small? [Coming soon]
The reader is invited to investigate changes (or permutations) in the ringing of church bells, illustrated by braid diagrams showing the order in which the bells are rung.
Imagine you have six different colours of paint. You paint a cube
using a different colour for each of the six faces. How many
different cubes can be painted using the same set of six colours?
How many ways can you write the word EUROMATHS by starting at the
top left hand corner and taking the next letter by stepping one
step down or one step to the right in a 5x5 array?
On a standard die the numbers 1, 2 and 3 are opposite 6, 5 and 4 respectively, so that opposite faces add to 7. If you make standard dice by writing 1, 2, 3, 4, 5, 6 on blank cubes you will find. . . .
Given a 2 by 2 by 2 skeletal cube with one route 'down' the cube, how many routes are there from A to B?
Blue Flibbins are so jealous of their red partners that they will
not leave them on their own with any other blue Flibbin. What is the
quickest way of getting the five pairs of Flibbins safely to. . . .
Find the point whose sum of distances from the vertices (corners)
of a given triangle is a minimum.
Imagine a stack of numbered cards with one on top. Discard the top,
put the next card to the bottom and repeat continuously. Can you
predict the last card?
Can you cross each of the seven bridges that join the north and south of the river to the two islands, once and once only, without retracing your steps?
This article for teachers discusses examples of problems in which
there is no obvious method but in which children can be encouraged
to think deeply about the context and extend their ability to. . . .
A game for 2 people. Take turns joining two dots, until your opponent is unable to move.
This is a simple version of an ancient game played all over the world. It is also called Mancala. What tactics will increase your chances of winning?
If you can copy a network without lifting your pen off the paper and without drawing any line twice, then it is traversable.
Decide which of these diagrams are traversable.
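A systematic way to decide, rather than trying pen-strokes by trial and error, is Euler's classical criterion: a connected network is traversable exactly when it has zero or two vertices where an odd number of lines meet. Here is a minimal Python sketch of that check; the example network is invented for illustration and is not one of the diagrams in the problem.

```python
from collections import defaultdict

def is_traversable(edges):
    """Euler's criterion: a connected network can be drawn in one pen stroke, using
    every line exactly once, iff it has 0 or 2 vertices of odd degree."""
    degree, adjacency = defaultdict(int), defaultdict(set)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
        adjacency[a].add(b)
        adjacency[b].add(a)

    # check the network is connected, using a simple depth-first search
    start = next(iter(adjacency))
    seen, stack = set(), [start]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adjacency[v] - seen)
    if seen != set(adjacency):
        return False

    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# A square with one diagonal: two corners of odd degree, so it is traversable.
print(is_traversable([("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]))  # True
```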
A Hamiltonian circuit is a continuous path in a graph that passes through each of the vertices exactly once and returns to the start.
How many Hamiltonian circuits can you find in these graphs?
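Unlike traversability, there is no simple counting rule for Hamiltonian circuits, but the graphs in problems like this are small enough for a brute-force count. Here is a sketch; the complete graph on four vertices below is just an example, not one of the graphs in the problem.

```python
from itertools import permutations

def count_hamiltonian_circuits(vertices, edges):
    """Count circuits that visit every vertex exactly once and return to the start.
    Fixing the first vertex as the start, each undirected circuit appears exactly
    twice among the orderings (once per direction), so divide the tally by two."""
    edge_set = {frozenset(e) for e in edges}
    first, rest = vertices[0], vertices[1:]
    closed = 0
    for order in permutations(rest):
        cycle = [first, *order]
        if all(frozenset((cycle[i], cycle[(i + 1) % len(cycle)])) in edge_set
               for i in range(len(cycle))):
            closed += 1
    return closed // 2

vertices = ["A", "B", "C", "D"]
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("B", "D"), ("C", "D")]
print(count_hamiltonian_circuits(vertices, edges))  # 3 circuits in the complete graph K4
```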
Draw a pentagon with all the diagonals. This is called a pentagram.
How many diagonals are there? How many diagonals are there in a
hexagram, heptagram, ... Does any pattern occur when looking at. . . .
Is it possible to rearrange the numbers 1, 2, ..., 12 around a clock
face in such a way that every two numbers in adjacent positions
differ by any of 3, 4 or 5 hours?
Triangle numbers can be represented by a triangular array of
squares. What do you notice about the sum of identical triangle numbers?
Square numbers can be represented as the sum of consecutive odd
numbers. What is the sum of 1 + 3 + ... + 149 + 151 + 153?
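The identity behind that representation, given here as a worked hint with the last step left to the reader, is

$$1 + 3 + 5 + \cdots + (2n - 1) = n^2, \qquad \text{and } 2n - 1 = 153 \text{ gives } n = 77.$$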
Can you maximise the area available to a grazing goat?
The triangle ABC is equilateral. The arc AB has centre C, the arc
BC has centre A and the arc CA has centre B. Explain how and why
this shape can roll along between two parallel tracks.
In a right angled triangular field, three animals are tethered to posts at the midpoint of each side. Each rope is just long enough to allow the animal to reach two adjacent vertices. Only one animal. . . .
The diagram shows a very heavy kitchen cabinet. It cannot be lifted but it can be pivoted around a corner. The task is to move it, without sliding, in a series of turns about the corners so that it. . . .
Imagine a large cube made from small red cubes being dropped into a
pot of yellow paint. How many of the small cubes will have yellow
paint on their faces?
An irregular tetrahedron is composed of four different triangles.
Can such a tetrahedron be constructed where the side lengths are 4,
5, 6, 7, 8 and 9 units of length?
A game for 2 players
Problem solving is at the heart of the NRICH site. All the problems
give learners opportunities to learn, develop or use mathematical
concepts and skills. Read here for more information.
Euler discussed whether or not it was possible to stroll around Koenigsberg crossing each of its seven bridges exactly once. Experiment with different numbers of islands and bridges.
How many moves does it take to swap over some red and blue frogs? Do you have a method?
A game for 2 players. Can be played online. One player has 1 red
counter, the other has 4 blue. The red counter needs to reach the
other side, and the blue needs to trap the red.
Four rods, two of length a and two of length b, are linked to form
a kite. The linkage is moveable so that the angles change. What is
the maximum area of the kite?
In the game of Noughts and Crosses there are 8 distinct winning
lines. How many distinct winning lines are there in a game played
on a 3 by 3 by 3 board, with 27 cells?
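For anyone who wants to check a hand count, here is a small brute-force sketch that enumerates the straight lines of three cells on a 3 by 3 by 3 board. It is a generic counting tool, not a reproduction of the board in the problem.

```python
from itertools import product

def count_winning_lines(n=3):
    """Brute-force count of the straight lines of n cells in an n x n x n board."""
    # keep one representative of each +/- pair of step directions (13 of them in 3D)
    directions = [d for d in product((-1, 0, 1), repeat=3)
                  if d != (0, 0, 0) and d > tuple(-c for c in d)]
    lines = 0
    for start in product(range(n), repeat=3):
        for d in directions:
            cells = [tuple(start[i] + k * d[i] for i in range(3)) for k in range(n)]
            in_board = all(0 <= c < n for cell in cells for c in cell)
            # count each line only from its first cell along d, so nothing is counted twice
            before = tuple(start[i] - d[i] for i in range(3))
            if in_board and not all(0 <= c < n for c in before):
                lines += 1
    return lines

print(count_winning_lines(3))  # compare with your own count of the distinct winning lines
```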
A half-cube is cut into two pieces by a plane through the long diagonal and at right angles to it. Can you draw a net of these pieces? Are they identical?
The image in this problem is part of a piece of equipment found in the playground of a school. How would you describe it to someone over the phone?
A right-angled isosceles triangle is rotated about the centre point
of a square. What can you say about the area of the part of the
square covered by the triangle as it rotates?
Mathematics is the study of patterns. Studying pattern is an
opportunity to observe, hypothesise, experiment, discover and create.
Show that among the interior angles of a convex polygon there
cannot be more than three acute angles.
Take a line segment of length 1. Remove the middle third. Remove
the middle thirds of what you have left. Repeat infinitely many
times, and you have the Cantor Set. Can you picture it?
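One way to help picture it is to generate the intervals that survive the first few rounds of removing middle thirds; a minimal sketch:

```python
def cantor_intervals(depth):
    """Return the intervals of [0, 1] that remain after `depth` rounds of
    removing the open middle third of every surviving interval."""
    intervals = [(0.0, 1.0)]
    for _ in range(depth):
        next_round = []
        for a, b in intervals:
            third = (b - a) / 3
            next_round.append((a, a + third))    # keep the left third
            next_round.append((b - third, b))    # keep the right third
        intervals = next_round
    return intervals

print(cantor_intervals(2))
# roughly [(0, 1/9), (2/9, 1/3), (2/3, 7/9), (8/9, 1)]: 2**depth tiny pieces remain
```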
In how many ways can you fit all three pieces together to make
shapes with line symmetry?
ABCD is a regular tetrahedron and the points P, Q, R and S are the midpoints of the edges AB, BD, CD and CA. Prove that PQRS is a square.
Seven small rectangular pictures have one inch wide frames. The
frames are removed and the pictures are fitted together like a
jigsaw to make a rectangle of length 12 inches. Find the dimensions
of. . . .
You have 27 small cubes, 3 each of nine colours. Use the small
cubes to make a 3 by 3 by 3 cube so that each face of the bigger
cube contains one of every colour.
A train leaves on time. After it has gone 8 miles (at 33mph) the
driver looks at his watch and sees that the hour hand is exactly
over the minute hand. When did the train leave the station?
How many different symmetrical shapes can you make by shading triangles or squares?
Here are four tiles. They can be arranged in a 2 by 2 square so
that this large square has a green edge. If the tiles are moved
around, we can make a 2 by 2 square with a blue edge... Now try. . . .
Use the interactivity to play two of the bells in a pattern. How do
you know when it is your turn to ring, and how do you know which
bell to ring?
Have a go at this 3D extension to the Pebbles problem.
In this problem, we have created a pattern from smaller and smaller
squares. If we carried on the pattern forever, what proportion of
the image would be coloured blue?
How could Penny, Tom and Matthew work out how many chocolates there
are in different sized boxes?
Use the animation to help you work out how many lines are needed to draw mystic roses of different sizes.
On the graph there are 28 marked points. These points all mark the
vertices (corners) of eight hidden squares. Can you find the eight hidden squares?