Video: How to Write a Scientific Method Worksheet for Elementary School, with Milo De Prieto. The scientific method is a great critical thinking tool in the classroom. Learn how to create and implement a scientific method worksheet for an elementary class in this how-to video from About.com. Transcript: How to Write a Scientific Method Worksheet for Elementary School. Hello, I'm Milo for About.com, and today we are talking about how to write a scientific method worksheet for elementary school. How to Start a Scientific Method Activity There are some specific steps to writing a scientific method worksheet when working with children, as we are trying to get them to develop critical thinking skills. The first step is to come up with a question, a problem that needs to be tested to be proven true or false. Depending on the age group, it could be helpful to assign a topic and engage the class in formulating questions about the topic as a group. Teaching the cognitive skill of questioning is the real goal of this process. Let the students see many examples of the thought processes a skilled thinker uses to form these questions. Be explicit and show your own step-by-step thinking process. Let the students know they are learning how to question. Performing Research and Formulating a Hypothesis The second step is to get the class to do research about the topic. The students can use books, the Internet, ask the teacher, or even interview people knowledgeable about the topic. For this step, make sure to allow extra space on the worksheet to write down this information; or, better yet, have the kids keep a learning journal, writing down their search, both failures and successes, as well as their finds. With this step we are teaching the kids to collect data and to keep track of it through the process of research. The third step is to formulate a hypothesis. During this step, we are getting the class to practice reasoning, using the facts they have collected to reformulate the question into a testable hypothesis. Conducting the Experiment Finally, the experiment. Here you can make up an experiment for them or look through science books or the Internet to find one that is appropriate for your age group. Make sure to allow a space on the worksheet for a list of materials, as we want the kids to understand the importance of organization for this procedure. During the experiment, the students should keep track of what's happening in their journal as accurately as possible. Interpreting Data and Writing the Conclusion When the experiment is finished, it is time for the conclusion. Here the data collected during the experiment is analyzed, and it becomes apparent whether our hypothesis was correct or not. It is a good idea to provide a time in the classroom for the children to share what they learned with the rest of the class to celebrate learning and the process. This way they condense all the steps they have gone through and verbalize them. It is a good way to get kids to gain confidence in themselves by explaining what they have done. As a teacher, you can also encourage them to share it with their family at home! Consider role playing as well. The students can be zoologists, astronauts, medical doctors, or even archeologists. For more excellent and helpful information on practically anything, check us out at About.com.
http://video.about.com/k6educators/How-to-Write-a-Scientific-Method-Worksheet-for-Elementary-School.htm
From earliest times, astronomers assumed that the orbits in which the planets moved were circular; yet the numerous catalogs of measurements compiled especially during the 16th cent. did not fit this theory. At the beginning of the 17th cent., Johannes Kepler stated three laws of planetary motion that explained the observed data: the orbit of each planet is an ellipse with the sun at one focus; the speed of a planet varies in such a way that an imaginary line drawn from the planet to the sun sweeps out equal areas in equal amounts of time; and the ratio of the squares of the periods of revolution of any two planets is equal to the ratio of the cubes of their average distances from the sun. The orbits of the solar planets, while elliptical, are almost circular; on the other hand, the orbits of many of the extrasolar planets discovered during the 1990s are highly elliptical. After the laws of planetary motion were established, astronomers developed the means of determining the size, shape, and relative position in space of a planet's orbit. The size and shape of an orbit are specified by its semimajor axis and by its eccentricity. The semimajor axis is a length equal to half the greatest diameter of the orbit. The eccentricity is the distance of the sun from the center of the orbit divided by the length of the orbit's semimajor axis; this value is a measure of how elliptical the orbit is. The position of the orbit in space, relative to the earth, is determined by three factors: (1) the inclination, or tilt, of the plane of the planet's orbit to the plane of the earth's orbit (the ecliptic); (2) the longitude of the planet's ascending node (the point where the planet cuts the ecliptic moving from south to north); and (3) the longitude of the planet's perihelion point (point at which it is nearest the sun; see apsis). These quantities, which determine the size, shape, and position of a planet's orbit, are known as the orbital elements. If only the sun influenced the planet in its orbit, then by knowing the orbital elements plus its position at some particular time, one could calculate its position at any later time. However, the gravitational attractions of bodies other than the sun cause perturbations in the planet's motions that can make the orbit shift, or precess, in space or can cause the planet to wobble slightly. Once these perturbations have been calculated one can closely determine its position for any future date over long periods of time. Modern methods for computing the orbit of a planet or other body have been refined from methods developed by Newton, Laplace, and Gauss, in which all the needed quantities are acquired from three separate observations of the planet's apparent position. The laws of planetary orbits also apply to the orbits of comets, natural satellites, artificial satellites, and space probes. The orbits of comets are very elongated; some are long ellipses, some are nearly parabolic (see parabola), and some may be hyperbolic. When the orbit of a newly discovered comet is calculated, it is first assumed to be a parabola and then corrected to its actual shape when more measured positions are obtained. Natural satellites that are close to their primaries tend to have nearly circular orbits in the same plane as that of the planet's equator, while more distant satellites may have quite eccentric orbits with large inclinations to the planet's equatorial plane. 
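These definitions translate directly into a short calculation. A minimal Python sketch derives the semimajor axis and eccentricity of an orbit from its perihelion and aphelion distances; the Mars values are approximate standard figures assumed for illustration, not numbers taken from the text.

# Size and shape of an orbit from perihelion and aphelion distances (in AU).
# Semimajor axis a = half the greatest diameter; eccentricity e = (Sun-to-center distance) / a.
r_perihelion, r_aphelion = 1.381, 1.666   # approximate values for Mars (assumed)
a = (r_perihelion + r_aphelion) / 2       # semimajor axis
c = a - r_perihelion                      # distance of the Sun (at one focus) from the orbit's center
e = c / a                                 # eccentricity, about 0.09 for Mars
print(a, e)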
Because of the moon's proximity to the earth and its large relative mass, the earth-moon system is sometimes considered a double planet. It is the center of the earth-moon system, rather than the center of the earth itself, that describes an elliptical orbit around the sun in accordance with Kepler's laws. All of the planets and most of the satellites in the solar system move in the same direction in their orbits, counterclockwise as viewed from the north celestial pole; some satellites, probably captured asteroids, have retrograde motion, i.e., they revolve in a clockwise direction. In physics, an orbit is the gravitationally curved path of one object around a point or another body, for example the gravitational orbit of a planet around a star. Historically, the apparent motion of the planets was first understood in terms of epicycles, which are the sums of numerous circular motions. This predicted the paths of the planets quite well, until Johannes Kepler was able to show that the motion of the planets was in fact elliptical. Sir Isaac Newton was able to prove that this was equivalent to an inverse square, instantaneously propagating force he called gravitation. Albert Einstein later was able to show that gravity is due to curvature of space-time, and that orbits lie upon geodesics; this is the current understanding. The basis for the modern understanding of orbits was first formulated by Johannes Kepler, whose results are summarized in his three laws of planetary motion. First, he found that the orbits of the planets in our solar system are elliptical, not circular (or epicyclic), as had previously been believed, and that the sun is not located at the center of the orbits, but rather at one focus. Second, he found that the orbital speed of each planet is not constant, as had previously been thought, but rather that the speed of the planet depends on the planet's distance from the sun. And third, Kepler found a universal relationship between the orbital properties of all the planets orbiting the sun. For each planet, the cube of the planet's distance from the sun, measured in astronomical units (AU), is equal to the square of the planet's orbital period, measured in Earth years. Jupiter, for example, is approximately 5.2 AU from the sun and its orbital period is 11.86 Earth years. So 5.2 cubed approximately equals 11.86 squared, as predicted. Isaac Newton demonstrated that Kepler's laws were derivable from his theory of gravitation and that, in general, the orbits of bodies responding to an instantaneously propagating force of gravity were conic sections. Newton showed that a pair of bodies follow orbits of dimensions that are in inverse proportion to their masses about their common center of mass. Where one body is much more massive than the other, it is a convenient approximation to take the center of mass as coinciding with the center of the more massive body. Albert Einstein was able to show that gravity was due to curvature of space-time and was able to remove the assumption of Newton that changes propagate instantaneously. In relativity theory orbits follow geodesic trajectories which approximate the Newtonian predictions very well. However there are differences, and these can be used to determine which theory agrees better with observation. Essentially all experimental evidence agrees with relativity theory to within experimental measurement accuracy. Owing to mutual gravitational perturbations, the eccentricities of the orbits of the planets in our solar system vary over time.
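The Jupiter check is easy to reproduce in a couple of lines of Python, using the approximate values quoted above:

a_jupiter = 5.2      # semi-major axis in astronomical units (value quoted above)
T_jupiter = 11.86    # orbital period in Earth years (value quoted above)
print(a_jupiter ** 3)   # ~140.6
print(T_jupiter ** 2)   # ~140.7, so a**3 is approximately equal to T**2, as Kepler's third law predicts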
Mercury, the smallest planet in the Solar System, has the most eccentric orbit. At the present epoch, Mars has the next largest eccentricity, while the smallest eccentricities are those of the orbits of Venus and Neptune. As two objects orbit each other, the periapsis is the point at which the two objects are closest to each other and the apoapsis is the point at which they are the farthest from each other. (More specific terms are used for specific bodies. For example, perigee and apogee are the lowest and highest parts of an Earth orbit, respectively.) In an elliptical orbit, the center of mass of the orbiting-orbited system sits at one focus of both orbits, with nothing present at the other focus. As a planet approaches periapsis, it increases in speed; as it approaches apoapsis, it decreases in speed. As an illustration of an orbit around a planet, Newton's cannonball model may prove useful. Imagine a cannon sitting on top of a tall mountain, which fires a cannonball horizontally. The mountain needs to be very tall, so that the cannon will be above the Earth's atmosphere and the effects of air friction on the cannonball can be ignored. If the cannon fires its ball with a low initial velocity, the trajectory of the ball curves downward and hits the ground (A). As the firing velocity is increased, the cannonball hits the ground farther away (B) from the cannon, because while the ball is still falling towards the ground, the ground is increasingly curving away from it. All these motions are actually "orbits" in a technical sense — they describe a portion of an elliptical path around the center of gravity — but the orbits are interrupted by striking the Earth. If the cannonball is fired with sufficient velocity, the ground curves away from the ball at least as much as the ball falls — so the ball never strikes the ground. It is now in what could be called a non-interrupted, or circumnavigating, orbit. For any specific combination of height above the center of gravity and mass of the planet, there is one specific firing velocity that produces a circular orbit, as shown in (C). As the firing velocity is increased beyond this, a range of elliptic orbits are produced; one is shown in (D). If the initial firing is above the surface of the Earth as shown, there will also be elliptical orbits at slower velocities; these will come closest to the Earth at the point half an orbit beyond, and directly opposite, the firing point. At a specific velocity called escape velocity, again dependent on the firing height and mass of the planet, an infinite orbit such as (E) is produced — a parabolic trajectory. At even faster velocities the object will follow a range of hyperbolic trajectories. In a practical sense, both of these trajectory types mean the object is "breaking free" of the planet's gravity and "going off into space". The velocity relationship of two objects with mass can thus be considered in four practical classes, with subtypes. Energy is associated with gravitational fields. A stationary body far from another can do external work if it is pulled towards it, and therefore has gravitational potential energy. Since work is required to separate two massive bodies against the pull of gravity, their gravitational potential energy increases as they are separated, and decreases as they approach one another.
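A minimal Python sketch puts numbers on the cannonball picture above: the one firing speed that yields a circular orbit, and the escape speed, depend only on the planet's gravitational parameter and the firing radius. The Earth constants and the 200 km firing height are assumed standard values, not figures from the text.

import math

GM_EARTH = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2 (assumed standard value)
R_EARTH = 6.371e6               # Earth's mean radius, m (assumed standard value)

r = R_EARTH + 200e3             # firing point 200 km up, the "very tall mountain"
v_circular = math.sqrt(GM_EARTH / r)        # case (C): circular orbit, roughly 7.8 km/s
v_escape = math.sqrt(2 * GM_EARTH / r)      # case (E): parabolic escape trajectory, roughly 11 km/s
print(v_circular, v_escape)
# Slower speeds give interrupted or elliptical orbits (A, B, D); faster speeds give hyperbolas.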
For point masses the gravitational energy decreases without limit as they approach zero separation, and it is convenient and conventional to take the potential energy as zero when they are an infinite distance apart, and then negative (since it decreases from zero) for smaller finite distances. With two bodies, an orbit is a conic section. The orbit can be open (so the object never returns) or closed (returning), depending on the total kinetic + potential energy of the system. In the case of an open orbit, the speed at any position of the orbit is at least the escape velocity for that position; in the case of a closed orbit, it is always less. Since the kinetic energy is never negative, if the common convention is adopted of taking the potential energy as zero at infinite separation, the bound orbits have negative total energy, parabolic trajectories have zero total energy, and hyperbolic orbits have positive total energy. An open orbit has the shape of a hyperbola (when the velocity is greater than the escape velocity), or a parabola (when the velocity is exactly the escape velocity). The bodies approach each other for a while, curve around each other around the time of their closest approach, and then separate again forever. This may be the case with some comets if they come from outside the solar system. A closed orbit has the shape of an ellipse. In the special case that the orbiting body is always the same distance from the center, it is also the shape of a circle. Otherwise, the point where the orbiting body is closest to Earth is the perigee, called periapsis (less properly, "perifocus" or "pericentron") when the orbit is around a body other than Earth. The point where the satellite is farthest from Earth is called apogee, apoapsis, or sometimes apifocus or apocentron. A line drawn from periapsis to apoapsis is the line-of-apsides. This is the major axis of the ellipse, the line through its longest part. Orbiting bodies in closed orbits repeat their path after a constant period of time. This motion is described by the empirical laws of Kepler, which can be mathematically derived from Newton's laws. Note that while the bound orbits around a point mass, or a spherical body with an ideal Newtonian gravitational field, are all closed ellipses, which repeat the same path exactly and indefinitely, any non-spherical or non-Newtonian effects (as caused, for example, by the slight oblateness of the Earth, or by relativistic effects, changing the gravitational field's behavior with distance) will cause the orbit's shape to depart to a greater or lesser extent from the closed ellipses characteristic of Newtonian two-body motion. The 2-body solutions were published by Newton in Principia in 1687. In 1912, Karl Fritiof Sundman developed a converging infinite series that solves the 3-body problem; however, it converges too slowly to be of much use. Except for special cases like the Lagrangian points, no method is known to solve the equations of motion for a system with four or more bodies. Instead, orbits with many bodies can be approximated with arbitrarily high accuracy. These approximations take two forms. One form takes the pure elliptic motion as a basis, and adds perturbation terms to account for the gravitational influence of multiple bodies. This is convenient for calculating the positions of astronomical bodies.
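The open/closed distinction described earlier depends only on the sign of the total specific energy. A minimal Python sketch, using Earth's gravitational parameter as an assumed standard value:

import math

GM = 3.986004418e14     # Earth's gravitational parameter, m^3/s^2 (assumed standard value)

def orbit_type(r, v):
    """Classify an orbit from distance r (m) and speed v (m/s) via the specific energy."""
    energy = v ** 2 / 2 - GM / r            # kinetic + potential energy per unit mass
    if abs(energy) < 1e-9 * GM / r:         # effectively zero: exactly escape velocity
        return "parabolic trajectory (zero total energy)"
    return "closed elliptical orbit (negative energy)" if energy < 0 else "hyperbolic trajectory (positive energy)"

r = 7.0e6                                    # 7000 km from Earth's center
print(orbit_type(r, 7500.0))                 # below escape speed: bound, elliptical
print(orbit_type(r, math.sqrt(2 * GM / r)))  # exactly escape speed: parabolic
print(orbit_type(r, 12000.0))                # above escape speed: hyperbolic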
The equations of motion of the moon, planets and other bodies are known with great accuracy, and are used to generate tables for celestial navigation. Still there are secular phenomena that have to be dealt with by post-Newtonian methods. The differential equation form is used for scientific or mission-planning purposes. According to Newton's laws, the sum of all the forces will equal the mass times its acceleration (F = ma); therefore accelerations can be expressed in terms of positions. The perturbation terms are much easier to describe in this form. Predicting subsequent positions and velocities from initial ones corresponds to solving an initial value problem. Numerical methods calculate the positions and velocities of the objects a tiny time in the future, then repeat this. However, tiny arithmetic errors from the limited accuracy of a computer's math accumulate, limiting the accuracy of this approach. Differential simulations with large numbers of objects perform the calculations in a hierarchical pairwise fashion between centers of mass. Using this scheme, galaxies, star clusters and other large objects have been simulated. Please note that the following is a classical (Newtonian) analysis of orbital mechanics, which assumes the more subtle effects of general relativity (like frame dragging and gravitational time dilation) are negligible. General relativity does, however, need to be considered for some applications such as analysis of extremely massive heavenly bodies, precise prediction of a system's state after a long period of time, and in the case of interplanetary travel, where fuel economy, and thus precision, is paramount. To analyze the motion of a body moving under the influence of a force which is always directed towards a fixed point, it is convenient to use polar coordinates with the origin coinciding with the center of force. In such coordinates the radial and transverse components of the acceleration are, respectively, $a_r = \ddot{r} - r\dot{\theta}^2$ and $a_\theta = r\ddot{\theta} + 2\dot{r}\dot{\theta}$. Since the force is entirely radial, and since acceleration is proportional to force, it follows that the transverse acceleration is zero. As a result, $\frac{1}{r}\frac{d}{dt}\left(r^2\dot{\theta}\right) = 0$. After integrating, we have $r^2\dot{\theta} = h$, which is actually the theoretical proof of Kepler's 2nd law (a line joining a planet and the sun sweeps out equal areas during equal intervals of time). The constant of integration, h, is the angular momentum per unit mass. It then follows that $\ddot{r} - r\dot{\theta}^2 = -\frac{G(M+m)}{r^2}$, where G is the constant of universal gravitation, m is the mass of the orbiting body (planet), and M is the mass of the central body (the Sun). Substituting $\dot{\theta} = h/r^2$ and the new variable $u = 1/r$ into the prior equation, we have $\frac{d^2u}{d\theta^2} + u = \frac{G(M+m)}{h^2}$. So for the gravitational force – or, more generally, for any inverse square force law – the right-hand side of the equation becomes a constant and the equation is seen to be the harmonic equation (up to a shift of origin of the dependent variable). The solution is $u(\theta) = \frac{G(M+m)}{h^2} + A\cos(\theta - \theta_0)$, with A and $\theta_0$ arbitrary constants of integration. The equation of the orbit described by the particle is thus $r(\theta) = \frac{h^2/G(M+m)}{1 + e\cos(\theta - \theta_0)}$, a conic section with eccentricity $e = \frac{Ah^2}{G(M+m)}$. Orienting this orbit in three dimensions requires three additional numbers to specify uniquely; traditionally these are expressed as three angles. In principle, once the orbital elements are known for a body, its position can be calculated forward and backwards indefinitely in time. However, in practice, orbits are affected, or perturbed, by forces other than gravity due to the central body, and thus the orbital elements change over time. For a prograde or retrograde impulse (i.e.
an impulse applied along the orbital motion), this changes both the eccentricity and the orbital period, but any closed orbit will still intersect the perturbation point. Notably, a prograde impulse given at periapsis raises the altitude at apoapsis, and vice versa, and a retrograde impulse does the opposite. A transverse force out of the orbital plane causes rotation of the orbital plane. An orbit about a body with an appreciable atmosphere can also decay because of drag. The bounds of an atmosphere vary wildly: during solar maxima, the Earth's atmosphere causes drag up to a hundred kilometres higher than during solar minima. Some satellites with long conductive tethers can also decay because of electromagnetic drag from the Earth's magnetic field. Basically, the wire cuts the magnetic field, and acts as a generator. The wire moves electrons from the near-vacuum on one end to the near-vacuum on the other end. The orbital energy is converted to heat in the wire. Orbits can be artificially influenced through the use of rocket motors which change the kinetic energy of the body at some point in its path. This is the conversion of chemical or electrical energy to kinetic energy. In this way changes in the orbit shape or orientation can be facilitated. Another method of artificially influencing an orbit is through the use of solar sails or magnetic sails. These forms of propulsion require no propellant or energy input other than that of the sun, and so can be used indefinitely. See statite for one such proposed use. Orbital decay can also occur due to tidal forces for objects below the synchronous orbit for the body they're orbiting. The gravity of the orbiting object raises tidal bulges in the primary, and since below the synchronous orbit the orbiting object is moving faster than the body's surface, the bulges lag a short angle behind it. The gravity of the bulges is slightly off the primary-satellite axis and thus has a component along the satellite's motion. The near bulge slows the object more than the far bulge speeds it up, and as a result the orbit decays. Conversely, the gravity of the satellite on the bulges applies torque on the primary and speeds up its rotation. Artificial satellites are too small to have an appreciable tidal effect on the planets they orbit, but several moons in the solar system are undergoing orbital decay by this mechanism. Mars' innermost moon Phobos is a prime example, and is expected to either impact Mars' surface or break up into a ring within 50 million years. Finally, orbits can decay via the emission of gravitational waves. This mechanism is extremely weak for most stellar objects, only becoming significant in cases where there is a combination of extreme mass and extreme acceleration, such as with black holes or neutron stars that are orbiting each other closely. However, in the real world, many bodies rotate, and this introduces oblateness and distorts the gravity field, and gives a quadrupole moment to the gravitational field which is significant at distances comparable to the radius of the body. The general effect of this is to change the orbital parameters over time; predominantly it gives a rotation of the orbital plane around the rotational pole of the central body and a slow rotation of the perigee within the plane (it perturbs the ascending node and the argument of perigee), in a way that is dependent on the angle of the orbital plane to the equator as well as the altitude at perigee. The gravitational constant G has the dimension of density⁻¹ × time⁻²; this corresponds to the following scaling properties.
Scaling of distances (including sizes of bodies, while keeping the densities the same) gives similar orbits without scaling the time: if, for example, distances are halved, masses are divided by 8, gravitational forces by 16 and gravitational accelerations by 2. Hence orbital periods remain the same. Similarly, when an object is dropped from a tower, the time it takes to fall to the ground remains the same with a scale model of the tower on a scale model of the earth. When all densities are multiplied by four, orbits are the same, but with orbital velocities doubled. When all densities are multiplied by four, and all sizes are halved, orbits are similar, with the same orbital velocities. These properties are illustrated in the formula (known as Kepler's 3rd Law) $T = \sqrt{\frac{3\pi}{G\sigma}\left(\frac{a}{r}\right)^3}$ for an elliptical orbit with semi-major axis a, of a small body around a spherical body with radius r and average density σ, where T is the orbital period.
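A small Python check of this density form of Kepler's third law, using standard Earth values (assumed, not quoted in the text), reproduces the familiar result that a grazing orbit around a body with Earth's density takes roughly 84 minutes, regardless of the body's size:

import math

G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
sigma = 5514.0             # Earth's mean density, kg/m^3 (assumed standard value)
r = 6.371e6                # Earth's radius, m (assumed standard value)
a = r                      # grazing circular orbit at the surface, so a/r = 1

T = math.sqrt(3 * math.pi / (G * sigma) * (a / r) ** 3)
print(T / 60)              # about 84 minutes
# Doubling r while keeping sigma fixed leaves a/r and sigma unchanged, so T is unchanged,
# which is exactly the scaling property described above.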
http://www.reference.com/browse/Orbit
Feb. 20, 2003 Images from the visible light camera on NASA's Mars Odyssey spacecraft, combined with images from NASA's Mars Global Surveyor, suggest melting snow is the likely cause of the numerous eroded gullies first documented on Mars in 2000 by Global Surveyor. The now-famous martian gullies were created by trickling water from melting snow packs, not underground springs or pressurized flows, as had been previously suggested, argues Dr. Philip Christensen, the principal investigator for Odyssey's camera system and a professor from Arizona State University in Tempe. He proposes gullies are carved by water melting and flowing beneath snow packs, where it is sheltered from rapid evaporation in the planet's thin atmosphere. His paper is in the electronic February 19 issue of Nature. Looking at an image of an impact crater in the southern mid-latitudes of Mars, Christensen noted eroded gullies on the crater's cold, pole-facing northern wall and immediately next to them a section of what he calls "pasted-on terrain." Such unique terrain represents a smooth deposit of material that Mars researchers have concluded is "volatile" (composed of materials that evaporate in the thin Mars atmosphere), because it characteristically occurs only in the coldest, most sheltered areas. The most likely composition of this slowly evaporating material is snow. Christensen suspected a special relationship between the gullies and the snow. "The Odyssey image shows a crater on the pole-facing side has this 'pasted-on' terrain, and as you come around to the west there are all these gullies," said Christensen. "I saw it and said 'Ah-ha!' It looks for all the world like these gullies are being exposed as this terrain is being removed through melting and evaporation." Eroded gullies on martian crater walls and cliff sides were first observed in images taken by Mars Global Surveyor in 2000. There have been other scientific theories offered to explain gully formation on Mars, including seeps of ground water, pressurized flows of ground water (or carbon dioxide), and mudflows caused by collapsing permafrost deposits, but no explanation to date has been universally accepted. The scientific community has remained puzzled, yet has been eagerly pursuing various possibilities. "The gullies are very young," Christensen said. "That's always bothered me, because how is it that Mars has groundwater close enough to the surface to form these gullies, and yet the water has stuck around for billions of years? Second, you have craters with rims that are raised, and the gullies go almost to the crest of the rim. If it's a leaking subsurface aquifer, there's not much subsurface up there. And, finally, why do they occur preferentially on the cold face of the slope at mid-latitudes? If it's melting groundwater causing the flow, that's the coldest place, and the least likely place for that to happen." Christensen points out that finding water erosion under melting snow deposits answers many of these problems, "Snow on Mars is most likely to accumulate on the pole-facing slopes, the coldest areas. It accumulates and drapes the landscape in these areas during one climate period, and then it melts during a warmer one. Melting begins first in the most exposed area right at the crest of the ridge. This explains why gullies start so high up." 
Once he started to think about snow, Christensen began finding a large number of other images showing a similar relationship between "pasted on" snow deposits and gullies in the high resolution images taken by the camera on Global Surveyor. Yet it was the unique mid-range resolution of the visible light camera in Mars Odyssey's thermal emission imaging system that was critical for the insight, because of its wide field of view. "It was almost like finding a Rosetta Stone. The basic idea comes out of having a regional view, which Odyssey's camera system gives. It's a kind of you-can't-see-the-forest-for-the-trees problem. An Odyssey image made it all suddenly click, because the resolution was high enough to identify these features and yet low enough to show their relationship to each other in the landscape," he said. "Christensen's new hypothesis was made possible by NASA's tandem of science orbiters currently laying the groundwork for locating the most interesting areas for future surface exploration by roving laboratories, such as the Mars Exploration Rovers, scheduled for launch in May and June of this year," said Dr. Jim Garvin, NASA's lead scientist for Mars Exploration in Washington, D.C. The Jet Propulsion Laboratory manages the Mars Exploration Program for NASA's Office of Space Science in Washington, D.C. The new images are available online at http://photojournal.jpl.nasa.gov/catalog/PIA04408 and http://photojournal.jpl.nasa.gov/catalog/PIA04409. More information about the 2001 Mars Odyssey mission is available on the Internet at http://mars.jpl.nasa.gov/odyssey/ .
http://www.sciencedaily.com/releases/2003/02/030220082349.htm
Square kilometre array Understanding the evolution of the Universe, galaxies and stars requires looking back in time as far as possible. But the radiation from distant objects is incredibly weak and its detection needs huge collecting areas. The increased sensitivity provided by the collecting area will reveal new classes of cosmic objects, distant and nearby, which are too faint or too short-lived to have been detected so far. One of these huge telescopes is the Square Kilometre Array (SKA), operating in the radio part of the electromagnetic spectrum and planned for dual-site construction between 2017 and 2023 in Southern Africa and Australia/New Zealand. Radio waves carry signals emitted from gas clouds even before the formation of the first stars. The SKA will also constrain fundamental physics on gravitation and magnetism. It will conduct astro-biological observations, potentially including the detection of life elsewhere in the Universe via radio signals. Radio waves provide a number of advantages: unlike optical waves, they are not absorbed by interstellar dust and they mostly do not suffer from distortions in the atmosphere, except for the shortest wavelengths of a few mm and below. The radio window for ground-based observations spans frequencies from about 10 MHz (30 m wavelength), below which the Earth's ionosphere blocks cosmic radio waves, to frequencies between 10 GHz (3 cm) and 1 THz (0.3 mm), depending on height above sea level and water content of the troposphere. Radio waves emerge from objects widely different from the well-known sources of light. Observations at radio wavelengths led to the modern view of the Universe: discovery of the cosmic microwave background (CMB, see below), the first notion of non-thermal emission from charged particles in magnetic fields, discovery of quasars, pulsars, masers and extrasolar planets. Some of the most spectacular objects in the Universe are radio sources whose radiation is emitted from hot gas and charged particles around black holes (quasars) and in the magnetospheres around neutron stars (pulsars), the remnants of supernova explosions. Cold gas in galaxies, invisible in the optical range, can be radio-bright when emitting in specific radio spectral lines. Radio waves tell us that the Universe does not only consist of stars, gas and dark matter, but is also permeated by superfast "cosmic ray" particles and magnetic fields which emit synchrotron emission over a wide (continuous) frequency range in the radio, while they escape detection in most other spectral ranges. Radio astronomy is another window to the Universe where known objects look different and new objects shine. The radio window allows us to look deep into space and hence deep into the past, and we can observe how the gas, fast particles and magnetic fields have developed over time. Scientists worldwide are extremely excited about the possibilities offered by the SKA. Key Science Projects The large investment in the SKA requires convincing justification. Apart from the expected technological spin-offs, five main science questions ("Key Science Projects") drive the SKA (see the SKA homepage and Further Reading below for details): - Probing the dark ages The SKA will use the emission of neutral hydrogen to observe the most distant objects in the Universe.
The strongest line emission of hydrogen is in the radio range at a frequency of 1.4 GHz (21 cm wavelength), which corresponds to the energy difference of the hyperfine transition when the spin of the electron flips with respect to that of the proton. According to present-day cosmological models, the Universe became transparent about 380,000 years after the big bang (at a redshift of about 1100). The radiation released at that time is now prominent in the radio range as the Cosmic Microwave Background (CMB) (Durrer 2008), measured in great detail by NASA’s WMAP satellite and since 2009 by ESA’s PLANCK satellite. Matter (mostly hydrogen) remained neutral and smoothly distributed over the next billion years, called the dark ages, until the first stars and black holes formed, followed by the formation of galaxies. The energy output from the first energetic stars and the jets launched near young black holes (quasars) started to heat the neutral gas, forming bubbles of ionized gas as structure emerged. This is called the Epoch of Reionization (see Fan et al. 2006 for a review). The signatures from this exciting transition phase should still be observable with help of the radio line of hydrogen, though extremely redshifted by a factor of about 10 when arriving at our telescopes today (Fig. 1). The lowest SKA frequency will allow us to detect hydrogen at redshifts of up to 20, well into the dark ages, to search for the transition from a neutral to an ionized Universe, and hence provide a critical test of our present-day cosmological model. - Galaxy evolution, cosmology, and dark energy The expansion of the Universe is currently accelerating, a poorly understood phenomenon, for which a multitude of possible explanations have been proposed: Einstein's cosmological constant, a time-dependent energy called quintessence, topological defects, the effects of "other" Universes and many more. Since the correct answer is not known, physicists and astronomers named the phenomenon dark energy (see also Frieman et al. 2008 for a review). One important method of distinguishing between these various explanations is to compare the distribution of galaxies at different epochs in the evolution of the Universe to the distribution of matter at the time when the Cosmic Microwave Background (CMB, see above) was formed, about 380,000 years after the Big Bang. Small distortions ("ripples") in the distribution of matter, called baryon acoustic oscillations, should persist from the era of CMB formation until today. Tracking whether and how these ripples change in size and spacing over cosmic time can then tell us whether one of the existing models for dark energy is correct or whether a new idea is needed. The SKA will use the hydrogen emission from galaxies to measure the properties of dark energy. The strongest line emission of hydrogen is in the radio range at a frequency of 1.4 GHz (21 cm wavelength), but redshifted to lower frequencies/longer wavelengths for distant galaxies. A deep all-sky SKA survey will detect hydrogen emission from galaxies out to redshifts of about 1.5, at a distance of about 9 billion light years, or at a time when the Universe was about 4.7 billion years old. The galaxy observations will be "sliced" into different redshift (time) intervals and hence reveal a comprehensive picture of the Universe's history. The same data set will give us unique new information about the evolution of galaxies.
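The relation between redshift and observed frequency used throughout these surveys is simply f_obs = f_rest / (1 + z). A minimal Python sketch with the redshifts quoted above:

F_REST_MHZ = 1420.4        # rest frequency of the 21 cm hydrogen line, MHz

for z in (1.5, 10, 20):    # galaxy survey limit, Epoch of Reionization, deep into the dark ages
    f_obs = F_REST_MHZ / (1 + z)
    print(z, round(f_obs, 1))   # ~568 MHz, ~129 MHz, ~68 MHz; the last is just above the 50 MHz limit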
How was the hydrogen gas concentrated to form galaxies, how fast was it transformed into stars, and how much gas did galaxies acquire during their lifetime from intergalactic space and by merging with other galaxies? Present-day telescopes have difficulty in detecting intergalactic hydrogen clouds with no star formation activity and distant dwarf galaxies, but these sorts of radio sources will be easily detectable by the SKA. The hydrogen survey will simultaneously give us the synchrotron radiation intensity of all galaxies, which is a measure of their star-formation rate and magnetic field strength. - Tests of General Relativity and detection of gravitational waves with pulsars and black holes The radio-astronomical discovery of pulsars and the indirect detection of gravitational waves from a pulsar-star binary system were rewarded with two Nobel prizes for physics. Pulsars are precise clocks and can be used for further experiments in fundamental physics and astrophysics. Einstein’s Theory of General Relativity has precisely predicted the outcome of every test experiment so far. However, no tests in the strong gravitational field around black holes have yet been made. The SKA will search for a radio pulsar orbiting around a black hole (Fig. 2), the remnants from the supernova explosions of two massive stars in a binary system, measure time delays in extremely curved space with much higher precision than with laboratory experiments and hence probe the limits of General Relativity (Lorimer & Kramer 2004). Regular high-precision observations with the SKA of a network of pulsars with periods of milliseconds open the way to detect gravitational waves with wavelengths of many light years, as expected for example from two massive black holes orbiting each other with a period of a few years resulting from galaxy mergers in the early Universe. When such a gravitational wave passes by the Earth, the nearby space-time changes slightly at a frequency of a few nHz (about 1 oscillation per 30 years). The wave can be detected as apparent systematic delays and advances of the pulsar clocks in particular directions relative to the wave propagation on the sky. We expect that more than 20,000 new pulsars will be detected with the SKA, compared to about 2000 known today. Almost all pulsars in the Milky Way (Fig. 3) and several hundred bright pulsars in nearby galaxies will become observable. - Origin and evolution of cosmic magnetism Electromagnetism is one of the fundamental forces, but little is known about its role in the Universe. Large-scale electric fields induce electric currents and are unstable, whereas magnetic fields can exist over long times because, mysteriously, single magnetic charges (monopoles) are missing in the Universe. Data suggest that all interstellar and probably intergalactic space is permeated by magnetic fields, but these are extremely hard to observe. Radio waves provide two tools: synchrotron radiation emitted by cosmic-ray electrons spiraling around magnetic field lines with almost the speed of light, and Faraday rotation of the polarization plane when a polarized (synchrotron) radio wave passes through a medium with magnetic fields and thermal electrons. Both methods have been applied to reveal the large-scale magnetic fields in our Milky Way, nearby spiral galaxies (Fig. 4), and in galaxy clusters, which are probably amplified and maintained by dynamo action, but little is known about magnetic fields in the intergalactic medium (Wielebinski & Beck 2005).
Furthermore, the origin and evolution of magnetic fields is still unknown. The first "seed" fields may originate in the very young Universe or may have been ejected from the first quasars, stars, or supernovae. The SKA will measure the Faraday rotation towards several tens of million polarized background sources (mostly quasars), allowing us to derive the magnetic field structures and strengths of the intervening objects, such as the Milky Way, distant spiral galaxies, clusters of galaxies, and intergalactic space. - The cradle of life The presence of life on other planets is a fundamental issue for astronomy and biology. The SKA will contribute to this question in several ways. Firstly, it will be able to detect the thermal radio emission from centimeter-sized "pebbles" in protoplanetary systems (Fig. 5) which are thought to be the first step in assembling Earth-like planets. The SKA will allow us to detect a protoplanet separated from the central star by spacings of the order of the Sun-Earth separation out to distances of about 3000 light years. Biomolecules are observable in the radio range, for example, "cold sugar" glycolaldehyde (CH2OHCHO), which has several lines between 13 and 22 GHz. Prebiotic chemistry - the formation of the molecular building blocks necessary for the creation of life - occurs in interstellar clouds long before that cloud collapses to form a new solar system with planets. Finally, the SETI (Search for Extra Terrestrial Intelligence) project (see Tarter 2001 for a review) will use the SKA to find hints of technological activities. Ionospheric radar experiments similar to those on Earth will be detectable out to several thousand light years, and Arecibo-type radar beams, like those that we use to map our neighbor planets in the solar system, out to as far as a few tens of thousands of light years. SETI will also search for such artificial signals superimposed onto natural signals from other objects. Core science drivers From the five Key Science Projects (see above) two major science goals have been identified that drive the technical specifications for the first phase (SKA1): - Origins: Understanding the history and role of neutral hydrogen in the Universe from the dark ages to the present day - Fundamental Physics: Detecting and timing binary pulsars and spin-stable millisecond pulsars in order to test theories of gravity. Exploration of the Unknown While the experiments described above are exciting science, the history of science tells us that many of the greatest discoveries happen unexpectedly and reveal objects which are completely different from those which had been envisaged during the planning phase of a new-generation telescope. For example, the serendipitous discovery of pulsars was made with a low-frequency telescope at Cambridge/UK that had been designed to measure the effects of the ionized interplanetary medium on radio waves. The unique sensitivity of the SKA will certainly reveal new classes of cosmic objects which are totally beyond our present imagination. We are looking forward to such surprises. Similar to present-day radio interferometers, like the Very Large Array (USA), the Westerbork Synthesis Radio Telescope (Netherlands), the Australia Telescope Compact Array and the Allen Telescope Array (USA), the SKA will consist of many antennas which are spread over a large area. The resolving power is proportional to the frequency and to the largest baseline between the outermost antennas and hence is much higher than for single dish telescopes.
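A rough Python sketch of this diffraction limit, theta ≈ lambda / B_max, reproduces the resolution figures quoted further below; the 6000 km maximum baseline is an assumed overall array extent (remote stations out to at least 3000 km from the core), not a number from the text.

import math

C = 3.0e8          # speed of light, m/s
B_MAX = 6.0e6      # assumed maximum baseline, m (about 6000 km overall extent)

for f_hz in (100e6, 10e9):                    # 100 MHz and 10 GHz
    wavelength = C / f_hz
    theta_arcsec = math.degrees(wavelength / B_MAX) * 3600
    print(f_hz, round(theta_arcsec, 4))       # ~0.1 arcsec at 100 MHz, ~0.001 arcsec at 10 GHz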
The signals are combined in a central computer (correlator). While the radio images from present-day interferometric telescopes are generally produced offline at the observer's institute, the enormous data rates of the SKA will demand online image production with automatic software pipelines. With a collecting area of about one square kilometer, the SKA will be about ten times more sensitive than the largest single dish telescope (305 m diameter) at Arecibo (Puerto Rico), and fifty times more sensitive than the currently most powerful interferometer, the Jansky Very Large Array (JVLA, at Socorro/USA). The SKA will continuously cover most of the frequency range accessible from the ground, from 50 MHz to 10 GHz (corresponding to wavelengths of 6 m down to 3 cm) in the first and second phases, later to be extended to at least 25 GHz (1.2 cm). The third major improvement is the enormously wide field of view, ranging from at least 20 square degrees at 70 MHz to about 18 square degrees at 1.4 GHz. The speed with which a large part of the sky can be surveyed, particularly at the lower frequencies, will hence be ten thousand to a million times faster than what is possible today. The SKA central region will contain about 50% of the total collecting area and comprise (1) separate core stations of 5 km diameter each for the dish antennas and the two types of aperture arrays (Fig. 6), (2) the mid-region out to about 180 km radius from the core with dish and aperture array antennas aggregated into "stations" distributed on a spiral arm pattern, and (3) "remote" stations with about 20 dish antennas each out to distances of at least 3000 km and located on continuations of the spiral arm pattern. The overall extent of the array determines the angular resolution, which will be about 0.1 seconds of arc at 100 MHz and 0.001 seconds of arc at 10 GHz. To meet these ambitious specifications and keep the cost to a level the international community can support, planning and construction of the SKA require many technological innovations such as lightweight, low-cost antennas, detector arrays with a wide field of view, low-noise amplifiers, high-capacity data transfer, high-speed parallel-processing computers and high-capacity data storage units. The realization needs multifold innovative solutions which will soon find their way into general communication technology. The frequency range, spanning more than two decades, cannot be realized with one single antenna design, so this will be achieved with a combination of different types of antennas. Under investigation are the following designs for the low and mid-frequency ranges: 1. An aperture array of simple dipole antennas with wide spacings (a "sparse aperture array") for the low-frequency range (about 50-350 MHz) (Fig. 7). This is a software telescope with no moving parts, steered solely by electronic phase delays. It has a large field of view and can observe towards several directions simultaneously. 2. An array of several thousand parabolic dishes of 15 meters diameter each for the medium frequency range (about 350 MHz - 3 GHz), each equipped with wide-bandwidth single-pixel "feeds" (Fig. 8). The surface accuracy of these dishes will allow a later receiver upgrade to higher frequencies. As an "Advanced Instrumentation Programme" for the full SKA, two additional technologies for substantially enhancing the field of view in the 1-2 GHz range are under rapid development: aperture arrays for medium frequencies with dense spacings (Fig.
9) and phased-array feeds for the parabolic dishes (see below). The technologies for a wide field of view are currently less mature than the dishes and the low-frequency dipole array but have the promise of significant scientific benefit in further increasing the survey speed once they prove feasible and cost effective. The detailed design for low and mid frequencies will be ready by 2016. The development of technologies for the high-frequency band (about 3-25 GHz) will start in 2016. Technical developments around the world are being coordinated by the SKA Science and Engineering Committee and its executive arm, the SKA Project Office. The technical work itself is funded from national and regional sources, and is being carried out via a series of verification programs. The global coordination was supported by funds from the European Commission under a program called PrepSKA, the Preparatory Phase for SKA, whose primary goals were to provide a costed system design and an implementation plan for the telescope by 2012. A number of telescopes provide examples of low frequency arrays, such as the European LOFAR (Low Frequency Array) telescope, with its core in the Netherlands, the MWA (Murchison Widefield Array) in Australia, PAPER (Precision Array to Probe the Epoch of Reionization), also in Australia, and the LWA (Long Wavelength Array) in the USA. All these long wavelength telescopes are software telescopes steered by electronic phase delays ("phased aperture array"). The first LOFAR stations saw "first light" in 2007 in the frequency band 10-80 MHz and in 2009 in the frequency band 110-240 MHz (Fig. 10). Full operation of LOFAR with 40 Dutch stations and 9 stations in other European countries is expected in 2013. Examples of dishes with a single-pixel feed are already operating in the USA (Allen Telescope Array, ATA) and are under development in South Africa (MeerKAT). The first 12 m prototype dish of the MeerKAT array was completed in 2009. Dense aperture arrays comprise up to millions of receiving elements in planar arrays on the ground which can be phased together to point in any direction on the sky. Due to the large reception pattern of the basic elements, the field of view can be up to 250 square degrees. Dense aperture arrays have been the subject of a European Commission-funded design study named SKA Design Study (SKADS) which has resulted in a prototype array of 140 square meters area (EMBRACE). The technology of dense phased-array feeds (PAF) can also be adapted to the focal plane of parabolic dishes. Such a "radio camera" is composed of many elements (pixels) which are controlled and combined electronically. This allows the dishes to observe over a far wider field of view than when using a classical single-pixel feed. Prototypes of such wide-field cameras are presently being constructed in Australia (ASKAP), the Netherlands (APERTIF) and Canada (AFAD). The first of the 36 dish antennas (12 meters in size) of ASKAP in Western Australia have already been equipped with PAF prototypes. To summarize the various international activities: ASKAP, MWA and MeerKAT are SKA Precursor telescopes and are located on the two candidate sites (ASKAP and MWA in Australia, MeerKAT in South Africa). SKA Pathfinder telescopes develop technology or science projects related to the SKA, such as LOFAR, EMBRACE, APERTIF, ATA, LWA, the Arecibo dish and the EVLA dish array. SKA Design Studies include the SKA Design Study (SKADS, Europe), the SKA Program (Canada) and the Technology Development Project (TDP, USA).
To obtain radio images, the data from all stations have to be transmitted to a central computer and processed online. Compared to LOFAR, with a data rate of about 300 Gigabits per second and a central processing power of 27 Tflops, the SKA will produce much more data and need much more processing power - by a factor of at least one hundred. Following "Moore’s law" of increasing computing power, a processor with sufficient power should be available by the end of this decade. The energy consumption for the computers and cooling will be tens of megawatts. Timeline and site Construction of the SKA is planned to start in 2017. In the first phase (until 2020) about 10% of the SKA will be erected (SKA Phase 1, SKA1), with completion of construction (SKA Phase 2, SKA2) at the low and mid frequency bands by about 2025, followed by construction at the high band. The total costs of the SKA are 400 million € for SKA1 plus about 1,500 million € for SKA2 (estimate from 2007), to be shared among the countries of the worldwide collaboration. In 2011, the SKA Organisation was founded, with presently ten members (Australia, Canada, China, Germany, Italy, the Netherlands, New Zealand, South Africa, Sweden and the United Kingdom) and one associated member (India). On 25 May 2012, the Members of the SKA Organisation agreed on a dual site solution for the SKA with two candidate sites fulfilling the scientific and logistical requirements: Southern Africa, extending from South Africa, with a core in the Karoo desert, eastward to Madagascar and Mauritius and northward into the continent, and Australia, with the core in Western Australia. Building of SKA1: 190 15-meter dishes of SKA1 will be built in South Africa, combined with the 64 MeerKAT dishes and equipped with three single-pixel receivers for the frequency range 350-3050 MHz ("SKA_mid"). 60 15-meter dishes will be added to the 36 dishes of the ASKAP array in Australia and equipped with phased-array feeds for the frequency range 650-1670 MHz ("SKA_survey"). The low-frequency sparse aperture array of about 250,000 dipole antennas for the frequency range 50-350 MHz will be built in Australia ("SKA_low"). Further reading on the SKA The Square Kilometre Array, download from: http://www.skatelescope.org/wp-content/uploads/2011/03/SKA-Brochure_June2011_web_small.pdf C. Carilli and S. Rawlings: Science with the Square Kilometre Array, New Astronomy Reviews, vol. 48, Elsevier, Amsterdam (2004) P.E. Dewdney, P.J. Hall, R.T. Schilizzi and T.J.L.W. Lazio: The Square Kilometre Array, Proceedings of the IEEE, 97, 1482-1496 (2009) P. Hall: The SKA: an Engineering Perspective, Experimental Astronomy, vol. 17, Springer, Berlin (2005) J. Lazio, M. Kramer and B. Gaensler: Tuning in to the Universe, Sky & Telescope 7/2008, p.20 B.F. Burke, F. Graham-Smith: An Introduction to Radio Astronomy, 3rd ed., Cambridge University Press (2009) R. Durrer: The Cosmic Microwave Background, Cambridge University Press (2008) X. Fan, C.L. Carilli and B. Keating: Observational Constraints on Cosmic Reionization, Annual Reviews in Astronomy & Astrophysics, 44, 415-462 (2006) J.A. Frieman, M.S. Turner and D. Huterer: Dark Energy and the Accelerating Universe, Annual Reviews in Astronomy & Astrophysics, 46, 385-432 (2008) D.R. Lorimer and M. Kramer: Handbook of Pulsar Astronomy, Cambridge University Press (2004) J. Tarter: The Search for Extraterrestrial Intelligence (SETI), Annual Reviews in Astronomy & Astrophysics, 39, 511-548 (2001) R. Wielebinski and R.
Beck (eds.): Cosmic Magnetic Fields, Springer, Berlin (2005) T.L. Wilson, K. Rohlfs and S. Hüttemeister: Tools of Radio Astronomy, 5th ed., Springer, Berlin (2009)
http://www.scholarpedia.org/article/Square_kilometre_array
When it comes to orbits, Johannes Kepler knew his stuff. He’s the one who, in the early 1600s, realized that planets orbit in ellipses rather than circles, which became the first of his Three Planetary Laws. But no one is perfect, and these were not his first attempts at describing the motions of the Heavens. In 1596 he published Mysterium Cosmographicum (The Mystery of the Cosmos), in which he proposed the following model for the solar system: In this model, the six known planets were envisioned as traveling in circles, along the equators of six giant spheres. The six giant spheres were separated by the five platonic solids. Saturn and Jupiter were separated by a giant cube, and Jupiter and Mars by a giant tetrahedron. It’s harder to see the interior planets in the drawing above, so here’s a close up: Mars and Earth were separated by a giant dodecahedron, Earth and Venus by a giant icosahedron, and, finally, Venus and Mercury by a giant octahedron. And then, in the center of all the orbits, was the Sun. Let’s see how accurate this model is. If you start with a giant platonic solid, like a cube, you can circumscribe a sphere on the outside and inscribe a sphere on the inside, and then compare the ratio of the radii of the two spheres. It turns out to be √3≈1.73. And lo, if you look at the average radius of Saturn’s orbit (9.021 Astronomical Units) and divide it by the average radius of Jupiter’s orbit (5.20336 AU), it rounds to 1.73. Let’s see how the other ratios match up:
|Planet pair|Giant polyhedron|Ratio of spheres in model|Ratio of actual planet orbits|
|Saturn to Jupiter|cube|1.73|1.73|
|Jupiter to Mars|tetrahedron|3.00|3.42|
|Mars to Earth|dodecahedron|1.26|1.52|
|Earth to Venus|icosahedron|1.26|1.38|
|Venus to Mercury|octahedron|1.73|1.87|
Not too shabby! Plus, as a bonus, you can see that the cube and the octahedron, which are dual polyhedra, have the same ratio of the radii of the circumscribed and inscribed spheres (√3≈1.73); likewise, the dodecahedron and the icosahedron (which are also duals of each other) have the same ratio of the radii of the circumscribed and inscribed spheres (≈1.26). And unlike the Titius-Bode law, the big gap between Jupiter and Mars didn’t really cause any problems, since the tetrahedron fit nicely in there. But a few years later Kepler realized it was wrong, and Uranus’s discovery later would have sealed the deal in any case. Poor Kepler. But it’s still an impressive idea, and was deemed important enough even recently to put on a 2002 commemorative 10-Euro coin in Austria (designed by Thomas Pesendorfer). The planet data came from NASA; the data on the radii of circumscribed and inscribed spheres came from Wolfram MathWorld. It’s not clear if the coin is copyrighted or even copyrightable; it seems to fall under fair-use guidelines, however. You can find the coin at the Austrian Mint.
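The sphere ratios in the table are easy to reproduce. A small Python sketch, using the standard circumradius and inradius formulas for unit-edge Platonic solids (the formulas are standard geometry, not taken from the post):

import math

# (circumradius R, inradius r) for each Platonic solid with edge length 1
solids = {
    "tetrahedron":  (math.sqrt(6) / 4, 1 / (2 * math.sqrt(6))),
    "cube":         (math.sqrt(3) / 2, 1 / 2),
    "octahedron":   (1 / math.sqrt(2), 1 / math.sqrt(6)),
    "dodecahedron": (math.sqrt(3) * (1 + math.sqrt(5)) / 4,
                     math.sqrt((25 + 11 * math.sqrt(5)) / 10) / 2),
    "icosahedron":  (math.sqrt(10 + 2 * math.sqrt(5)) / 4,
                     math.sqrt(3) * (3 + math.sqrt(5)) / 12),
}

for name, (R, r) in solids.items():
    print(f"{name:12s} R/r = {R / r:.2f}")
# tetrahedron 3.00; cube and octahedron 1.73; dodecahedron and icosahedron 1.26,
# matching the "ratio of spheres in model" column of the table above.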
http://threesixty360.wordpress.com/tag/kepler/
This page describes a programming technique called Recursive Programming, in which a procedure calls itself repeatedly until some escape condition is met. Recursive programming is a powerful technique that can greatly simplify some programming tasks. In summary, recursive programming is the situation in which a procedure calls itself, passing in a modified value of the parameter(s) that was passed in to the current iteration of the procedure. Typically, a recursive programming environment contains (at least) two procedures: first, a procedure to set up the initial environment and make the initial call to the recursive procedure, and second, the recursive procedure itself that calls itself one or more times.

Let's begin with a simple example. The Factorial of a number N is the product of all the integers between 1 and N. The factorial of 5 is equal to 5 * 4 * 3 * 2 * 1 = 120. In the real world you would not likely use a recursive procedure for this, but it will serve as a simple yet illustrative example.

The first procedure, named DoFact, sets things up, calls the Fact function, and displays the result.

    Sub DoFact()
        Dim L As Long
        Dim N As Long
        N = 3
        L = Fact(N)
        Debug.Print "The Factorial of " & CStr(N) & " is " & Format(L, "#,##0")
    End Sub

The Fact function does the real work of calculating the factorial.

    Function Fact(N As Long) As Long
        ' Escape condition: stop recursing once N is down to 1
        If N <= 1 Then
            Fact = 1
        Else
            ' Recursive call with a smaller value of N
            Fact = N * Fact(N - 1)
        End If
    End Function

In this code, the value of the input N is tested. If it is 1 (or less), the function simply returns 1. If N is greater than 1, Fact calls itself passing itself the value N-1. The function returns as its result the input value N times the value of itself evaluated for N-1.

While recursive programming is a powerful technique, you must be careful to structure the code so that it will terminate properly when some condition is met. In the Fact procedure, we ended the recursive calls when N was less than or equal to 1. Your recursive code must have some sort of escape logic that terminates the recursive calls. Without such escape logic, the code would loop continuously until the VBA runtime aborts the processing with an Out Of Stack Space error. Note that you cannot trap an Out Of Stack Space error with conventional error trapping. It is called an untrappable error and will terminate all VBA execution immediately. You cannot recover from an untrappable error. For example, consider the following poorly written recursive procedure:

    Function AddUp(N As Long)
        Static R As Long
        If N <= 0 Then
            R = 0
        End If
        ' This call is made unconditionally, so the recursion never stops
        R = AddUp(N + 1)
        AddUp = R
    End Function

In this code, there is no condition that prevents AddUp from calling itself. Every call results in another call to AddUp. The function will continue to call itself without restriction until the VBA runtime aborts the procedure execution sequence.

See also Recursion And The File System Object for additional recursive code examples.

This page last updated: 14-September-2007
http://www.cpearson.com/excel/RECURSIVEPROGRAMMING.ASPX
Sirius is the brightest star in the night sky. With a visual apparent magnitude of -1.46, it is almost twice as bright as Canopus, the next brightest star. The name "Sirius" is derived from the Ancient Greek Seirios ("glowing" or "scorcher"). The star has the Bayer designation Alpha Canis Majoris. What the naked eye perceives as a single star is actually a binary star system, consisting of a white main sequence star of spectral type A1V, termed Sirius A, and a faint white dwarf companion of spectral type DA2, termed Sirius B. The distance separating Sirius A from its companion varies between 8.1 and 31.5 AU.

Sirius appears bright because of both its intrinsic luminosity and its proximity to Earth. At a distance of 2.6 parsecs (about 8.6 light-years), the Sirius system is one of Earth's near neighbors. Sirius A is about twice as massive as the Sun and has an absolute visual magnitude of 1.42. It is 25 times more luminous than the Sun but has a significantly lower luminosity than other bright stars such as Canopus or Rigel. The system is between 200 and 300 million years old. It was originally composed of two bright bluish stars. The more massive of these, Sirius B, consumed its resources and became a red giant before shedding its outer layers and collapsing into its current state as a white dwarf around 120 million years ago.

Sirius can be seen from almost every inhabited region of the Earth's surface (those living north of 73.284 degrees cannot see it) and, in the Northern Hemisphere, is known as a vertex of the Winter Triangle. The best time of year to view it is around January 1, when it reaches the meridian at midnight. Under the right conditions, Sirius can be observed in daylight with the naked eye. Ideally the sky must be very clear, with the observer at a high altitude, the star passing overhead, and the sun low down on the horizon. Sirius is also known colloquially as the "Dog Star", reflecting its prominence in its constellation, Canis Major (Big Dog). The heliacal rising of Sirius marked the flooding of the Nile in Ancient Egypt and the "dog days" of summer for the ancient Greeks, while to the Polynesians it marked winter.

A Binary Star is a star system consisting of two stars orbiting around their common center of mass. The brighter star is called the primary and the other is its companion star, comes, or secondary. Research between the early 19th century and today suggests that many stars are part of either binary star systems or star systems with more than two stars, called multiple star systems. The term double star may be used synonymously with binary star, but more generally, a double star may be either a binary star or an optical double star which consists of two stars with no physical connection but which appear close together in the sky as seen from the Earth. A double star may be determined to be optical if its components have sufficiently different proper motions or radial velocities, or if parallax measurements reveal its two components to be at sufficiently different distances from the Earth. Most known double stars have not yet been determined to be either bound binary star systems or optical doubles. If components in binary star systems are close enough they can gravitationally distort their mutual outer stellar atmospheres. In some cases, these close binary systems can exchange mass, which may bring their evolution to stages that single stars cannot attain. Examples of binaries are Algol (an eclipsing binary), Sirius, and Cygnus X-1 (of which one member is probably a black hole).
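Returning briefly to the figures quoted above for Sirius A, the apparent magnitude, distance and absolute magnitude are mutually consistent, which is easy to verify with the standard distance-modulus relation. The snippet below is only an illustrative check; the 2.64 pc figure is the commonly quoted distance that the text rounds to 2.6.

    import math

    m = -1.46      # apparent visual magnitude of Sirius A
    d_pc = 2.64    # distance in parsecs (rounded to 2.6 in the text)

    # Distance modulus: M = m - 5 * log10(d / 10 pc)
    M = m - 5 * math.log10(d_pc / 10.0)
    print(round(M, 2))   # about 1.43, matching the quoted absolute magnitude of 1.42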
Binary stars are also common as the nuclei of many planetary nebulae, and are the progenitors of both novae and type Ia supernovae.

What appears as a single star is actually a large binary star system, consisting of a bright white main sequence star of spectral type A1V, named Sirius A, and a faint white dwarf companion of spectral type DA named Sirius B. Sirius B is invisible to the naked eye but packs almost the entire mass of our sun into a globe only 4 times as large as the Earth. Sirius B's surface is 300 times harder than diamonds, while its interior has a density 3,000 times that of diamonds. Spinning on its axis about 23 times a minute, it generates huge magnetic fields around it. The two stars, Sirius A and Sirius B, move around each other, constantly exchanging particles. Because of its greater density and magnetic field, Sirius B takes the lion's share, taking gases and materials off of its larger host body. Sirius B is a super-heavy, gravitationally powerful star made of concentrated super-dense matter (essence) with the number 50 associated with it (describing its orbital period). Every 49.9 years, Sirius A and B come as close together as their orbits allow, creating huge magnetic storms between them. As they approach each other, the stars both begin to spin faster as tidal forces become stronger, finally flip-flopping over, actually trading places with each other. This energy is eventually released to flow on magnetic field lines to the Sun, which transmits it like a lens to all the planets.

When a star like our sun gets to be very old, after another seven billion years or so, it will no longer be able to sustain burning its nuclear fuel. With only about half of its mass remaining, it will shrink to a fraction of its radius and become a white dwarf star. White dwarfs are common, the most famous one being the companion to the brightest star in the sky, Sirius. Although they are common and represent the final stage of our own sun, astronomers still do not understand their full range of character, or the parameters that determine what they ultimately become. One reason is that many white dwarfs are, like the companion of Sirius, located in binary systems in which the companion stars influence the details of how they age.

Around 150 AD, the Hellenistic astronomer Claudius Ptolemy described Sirius as reddish, along with five other stars, Betelgeuse, Antares, Aldebaran, Arcturus and Pollux, all of which are clearly of orange or red hue. The discrepancy was first noted by amateur astronomer Thomas Barker, squire of Lyndon Hall in Rutland, who prepared a paper and spoke at a meeting of the Royal Society in London in 1760. The existence of other stars changing in brightness gave credence to the idea that some may change in color too; Sir John Herschel noted this in 1839, possibly influenced by witnessing Eta Carinae two years earlier. The astronomer Thomas Jefferson Jackson See cited not only Ptolemy but also the poet Aratus, the orator Cicero, and general Germanicus as coloring the star red, though acknowledging that none of the latter three authors were astronomers, the last two merely translating Aratus' poem Phaenomena. Seneca, too, had described Sirius as being of a deeper red color than Mars. However, not all ancient observers saw Sirius as red. The 1st century AD poet Marcus Manilius described it as "sea-blue", as did the 4th century Avienus.
It is the standard star for the color white in ancient China, and multiple records from the 2nd century BC up to the 7th century AD all describe Sirius as white in hue.

In 1985, German astronomers Wolfhard Schlosser and Werner Bergmann published an account of an 8th century Lombardic manuscript, which contains De cursu stellarum ratio by St. Gregory of Tours. The Latin text taught readers how to determine the times of nighttime prayers from positions of the stars, and Sirius is described within as rubeola - "reddish". The authors proposed this was further evidence Sirius B had been a red giant at the time. However, other scholars replied that it was likely St. Gregory had been referring to Arcturus instead.

The possibility that stellar evolution of either Sirius A or Sirius B could be responsible for this discrepancy has been rejected by astronomers on the grounds that the timescale of thousands of years is too short and that there is no sign of the nebulosity in the system that would be expected had such a change taken place. An interaction with a third star, to date undiscovered, has also been proposed as a possibility for a red appearance. Alternative explanations are either that the description as red is a poetic metaphor for ill fortune, or that the dramatic scintillations of the star when it was observed rising left the viewer with the impression that it was red. To the naked eye, it often appears to be flashing with red, white and blue hues when near the horizon.

Some ancient observations of Sirius describe it as a red star. To the Romans this meant an angry god, and they are known to have sacrificed red dogs to this star. Today, Sirius A is bluish white.

Historically, many cultures have attached special significance to Sirius. Sirius, known in ancient Egypt as Sopdet or Sothis, is recorded in the earliest astronomical records. The hieroglyph for Sothis features a star and a triangle. During the era of the Middle Kingdom, Egyptians based their calendar on the heliacal rising of Sirius, namely the day it becomes visible just before sunrise after moving far enough away from the glare of the Sun. This occurred just before the annual flooding of the Nile and the summer solstice, after a 70-day absence from the skies. Sothis was identified with (the embodiment of) Isis, wife and consort of Osiris who appeared in the sky as Orion. Together they formed a trinity with their son Horus. The 70-day period symbolized the passing of Isis and Osiris through the duat (Egyptian underworld).
Belt Stars of Orion and the Great Pyramid
Sirius, Queen's Chamber (Feminine), Pleiades (Sister Stars)
Orion, King's Chamber, Thuban

Thuban was the pole star when the pyramids allegedly were built and the program began. Seamen called it 'The Dragon's Tail' (Reptilian, DNA References).

Sothis (Isis) and her husband, the god named Sah (Orion), came to be viewed as manifestations of Isis and Osiris. She was not only represented as a woman with a star on top of her headdress, but as a seated cow with a plant between her horns (just as Seshat's hieroglyph might have been a flower or a star) as depicted on an ivory tablet of King Djer. The plant may have been symbolic of the year, thus linking her to the yearly rising of Sirius and the New Year. She was very occasionally depicted as a large dog, or in Roman times, as the goddess Isis-Sopdet, she was shown riding side-saddle on a large dog.

Sirius was both the most important star of ancient Egyptian astronomy, and one of the Decans (star groups into which the night sky was divided, with each group appearing for ten days annually). The heliacal rising (the first night that Sirius is seen, just before dawn) was noticed every year during July. Early Egyptians used this to mark the start of the New Year ('The Opening of the Year'). It was celebrated with a festival known as 'The Coming of Sopdet'.

As early as the 1st Dynasty, Sopdet was known as 'the bringer of the new year and the Nile flood'. When Sirius appeared in the sky each year, the Nile generally started to flood and bring fertility to the land. The ancient Egyptians connected the two events, and so Sopdet took on the aspects of a goddess of not only the star and of the inundation, but of the fertility that came to the land of Egypt with the flood. The flood and the rising of Sirius also marked the ancient Egyptian New Year, and so she also was thought of as a goddess of the New Year.

Her aspect of being a fertility goddess was not just linked to the Nile. By the Middle Kingdom, she was believed to be a mother goddess, and a nurse goddess, changing her from a goddess of agriculture to a goddess of motherhood. This probably was due to her strong connection with the mother-goddess Isis.

Not just a goddess of the waters of the inundation, Sopdet had another link with water - she was believed to cleanse the pharaoh in the afterlife. It is interesting to note that the embalming of the dead took seventy days - the same amount of time that Sirius was not seen in the sky, before its yearly rising. She was a goddess of fertility to both the living and the dead.

In the Pyramid Texts, she is the goddess who prepares yearly sustenance for the pharaoh, 'in this her name of "Year"'. She is also thought to be a guide in the afterlife for the pharaoh, letting him fly into the sky to join the gods, showing him 'goodly roads' in the Field of Reeds and helping him become one of the imperishable stars. She was thought to be living on the horizon, encircled by the Duat. Paralleling the story of Osiris and Isis, the pharaoh was believed to have had a child with Sopdet.

The Dogon are a West African tribe who have known about, and worshipped, Sirius A and its twin, the invisible star Sirius B, for the past 5,000 years. They describe this 'star' specifically as having a circle of reddish rays around it, and this circle of rays is 'like a spot spreading' but remaining the same size.
They have also been aware that the planets circle the Sun in elliptical orbits, and of the four moons of Jupiter and the rings of Saturn. They say that Sirius B is immensely heavy, invisible, very small, yet extremely powerful. Their understanding of the two stars' orbits coincides exactly with modern astronomical findings, yet was arrived at thousands of years before it was scientifically proven. They also claim that a third star, Emme Ya - Sorghum Female - exists in the Sirius system. Larger and lighter than Sirius B, this star revolves around Sirius A as well.

The Dogon also believe that approximately 5,000 years ago, Amphibious Gods, called Nommo, came to Earth in three legged space ships from the Sirius Star System. They have described perfectly the DNA pattern made by the elliptical orbit created by the two stars as they rotate around each other. They believe Sirius to be the axis of the universe, and from it all matter and all souls are produced in a great spiral motion.

The ancient Greeks observed that the appearance of Sirius heralded the hot and dry summer, and feared that it caused plants to wilt, men to weaken, and women to become aroused. Due to its brightness, Sirius would have been noted to twinkle more in the unsettled weather conditions of early summer. To Greek observers, this signified certain emanations which caused its malignant influence. People suffering its effects were said to be astroboletos, or "star-struck". It was described as "burning" or "flaming" in literature. The season following the star's appearance came to be known as the Dog Days of summer. The inhabitants of the island of Ceos in the Aegean Sea would offer sacrifices to Sirius and Zeus to bring cooling breezes, and would await the reappearance of the star in summer. If it rose clear, it would portend good fortune; if it was misty or faint then it foretold (or emanated) pestilence. Coins retrieved from the island from the 3rd century BC feature dogs or stars with emanating rays, highlighting Sirius' importance.

The Romans celebrated the heliacal setting of Sirius around April 25, sacrificing a dog, along with incense, wine, and a sheep, to the goddess Robigo so that the star's emanations would not cause wheat rust on the crops that year.

Ptolemy of Alexandria mapped the stars in Books VII and VIII of his Almagest, in which he used Sirius as the location for the globe's central meridian. He curiously depicted it as one of six red-colored stars. The other five are class M and K stars, such as Arcturus and Betelgeuse.

In Chinese astronomy the star is known as the star of the "celestial wolf". Several cultures also associated the star with a bow and arrows. The Ancient Chinese visualized a large bow and arrow across the southern sky, formed by the constellations of Puppis and Canis Major. In this, the arrow tip is pointed at the wolf Sirius. A similar association is depicted at the Temple of Hathor in Dendera, where the goddess Satet has drawn her arrow at Hathor (Sirius). Known as "Tir", the star was portrayed as the arrow itself in later Persian culture.

In the Sumerian civilization, predating the Egyptians, the Epic of Gilgamesh describes a dream of Gilgamesh in which the hero is drawn irresistibly to a heavy star that cannot be lifted despite immense effort. This star descends from heaven to him and is described as having a very 'potent essence' and being "the God of heaven".
Gilgamesh had for his companions 50 oarsmen in the great ship Argo, a constellation bordering Canis Major, where Sirius is found.

The Quran mentions Sirius in Surah 53, An-Najm ("The Star"), where it is given the name al-shi'raa. The verse is: "That He is the Lord of Sirius (the Mighty Star)." (53:49)

Just as the appearance of Sirius in the morning sky marked summer in Greece, so it marked the chilly onset of winter for the Maori, whose name Takurua described both the star and the season. Its culmination at the winter solstice was marked by celebration in Hawaii, where it was known as Ka'ulua, "Queen of Heaven". Many other Polynesian names have been recorded, including Tau-ua in the Marquesas Islands, Rehua in New Zealand, and Aa and Hoku-Kauopae in Hawaii. Bright stars were important to the ancient Polynesians for navigation between the many islands and atolls of the Pacific Ocean. Low on the horizon, they acted as stellar compasses to assist mariners in charting courses to particular destinations. They also served as latitude markers; the declination of Sirius matches the latitude of the archipelago of Fiji at 17°S and thus passes directly over the islands each night. Sirius served as the body of a "Great Bird" constellation called Manu, with Canopus as the southern wingtip and Procyon the northern wingtip, which divided the Polynesian night sky into two hemispheres.

Many nations among the indigenous peoples of North America also associated Sirius with canines; the Seri and Tohono O'odham of the southwest note the star as a dog that follows mountain sheep, while the Blackfoot called it "Dog-face". The Cherokee paired Sirius with Antares as a dog-star guardian of either end of the "Path of Souls". The Pawnee of Nebraska had several associations; the Wolf (Skidi) tribe knew it as the "Wolf Star", while other branches knew it as the "Coyote Star". Hopi Prophecy states that when the Blue Star Kachina (Sirius) makes its appearance in the heavens, the Fifth World will emerge. Further north, the Alaskan Inuit of the Bering Strait called it "Moon Dog".

Based on changes in its proper motion, in 1844 the German astronomer Friedrich Wilhelm Bessel deduced that Sirius had an unseen companion. Nearly two decades later, on January 31, 1862, American telescope-maker and astronomer Alvan Graham Clark first observed the faint companion, which is now called Sirius B, or affectionately "the Pup". This happened during testing of an 18.5-inch aperture great refractor telescope for Dearborn Observatory, which was the largest refracting telescope lens in existence at the time, and the largest telescope in America. The visible star is now sometimes known as Sirius A.

Since 1894, some apparent orbital irregularities in the Sirius system have been observed, suggesting a third very small companion star, but this has never been definitely confirmed. The best fit to the data indicates a six-year orbit around Sirius A and a mass of only 0.06 solar masses. This star would be five to ten magnitudes fainter than the white dwarf Sirius B, which would account for the difficulty of observing it. Observations published in 2008 were unable to detect either a third star or a planet. An apparent "third star" observed in the 1920s is now confirmed as a background object.
In 1909 Ejnar Hertzsprung suggested that Sirius was a member of the Ursa Major Moving Group, based on the system's movements across the sky. However, more recent research by Jeremy King et al. at Clemson University in 2003 questions whether that is true, since the two components of Sirius appear to be too young. Sirius is roughly half the age of the other members of the stream, so their common motion is most likely a coincidence.

In 1915, Walter Sydney Adams, using a 60-inch (1.5 m) reflector at Mount Wilson Observatory, observed the spectrum of Sirius B and determined that it was a faint whitish star. This led astronomers to conclude that it was a white dwarf, the second to be discovered. This means that Sirius B must have originally been by far the more massive of the two, since it has already evolved off the main sequence. In 1920 the first spectrum of Sirius B was obtained at Mount Wilson Observatory. Sirius B, although small and faint and about 10,000 times dimmer than Sirius A, is extremely dense and heavy enough to exert influence on Sirius A. The pull of its gravity caused Sirius' wavy movement.

The diameter of Sirius A was first measured by Robert Hanbury Brown and Richard Q. Twiss in 1959 at Jodrell Bank using their stellar intensity interferometer. In 1970 the first photograph of Sirius B was taken by Dr. Irving W. Lindenblad of the US Naval Observatory. In 2005, using the Hubble Space Telescope, astronomers determined that Sirius B has nearly the diameter of the Earth, 12,000 kilometers (7,500 miles), with a mass that is 98% of the Sun's.

The Voyager 2 spacecraft, launched in 1977 to study the four Jovian planets in the Solar System, is expected to pass within 4.3 light years of Sirius in approximately 296,000 years' time.

Could there be a Sirius C?

In 1995 two French researchers, Daniel Benest and J.L. Duvent, authored an article in the prestigious journal Astronomy and Astrophysics with the title Is Sirius a Triple Star? and suggested (based on observations of motions in the Sirius system) that there is a small third star there. They thought the star was probably of a type known as a brown dwarf and only had about 0.05 of the mass of Sirius B.

In visible light Sirius A (Alpha Canis Majoris) is the brightest star in the night sky, a closely watched celestial beacon throughout recorded history. Part of a binary star system only 8 light-years away, it was known in modern times to have a small companion star, Sirius B. Sirius B is much dimmer and appears so close to the brilliant Sirius A that it was not actually sighted until 1862, during Alvan Clark's testing of a large, well made optical refracting telescope.

For orbiting x-ray telescopes, the Sirius situation is exactly reversed, though. A smaller but hotter Sirius B appears as the overwhelmingly intense x-ray source in a Chandra Observatory x-ray image (lines radiating from Sirius B are image artifacts). The fainter source seen at the position of Sirius A may be largely due to ultraviolet light from the star leaking into the x-ray detector. With a surface temperature of 25,000 kelvins, the mass of the Sun, and a radius just less than Earth's, Sirius B is the closest known white dwarf star. Can you guess what makes Sirius B like Neptune, the Sun's most distant gas giant planet? While still unseen, the presence of both celestial bodies was detected based on their gravitational influence alone ... making them early examples of dark matter.
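The roughly 50-year orbital period mentioned earlier is consistent with the other numbers quoted in the article. Taking the quoted separation range of 8.1 to 31.5 AU as defining the semi-major axis of the relative orbit, and masses of about 2 and 0.98 solar masses for Sirius A and B, Kepler's third law gives the period directly. This is only an illustrative consistency check, not the source of any of the figures.

    # Kepler's third law in solar units: P^2 = a^3 / (M1 + M2),
    # with P in years, a in AU and masses in solar masses.
    a_au = (8.1 + 31.5) / 2        # semi-major axis from the quoted min/max separation
    m_total = 2.0 + 0.98           # approximate combined mass in solar masses
    period_years = (a_au ** 3 / m_total) ** 0.5
    print(round(period_years, 1))  # about 51 years, close to the quoted 49.9-year period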
In Theosophy, it is believed the Seven Stars of the Pleiades transmit the spiritual energy of the Seven Rays from the Galactic Logos to the Seven Stars of the Great Bear, then to Sirius. From there it is sent via the Sun to the god of Earth (Sanat Kumara), and finally through the seven Masters of the Seven Rays to the human race. In the astrology of the Middle Ages, Sirius was a Behenian fixed star, associated with beryl and juniper. Its kabbalistic symbol was listed by Heinrich Cornelius Agrippa. Sirius is a BLUE-white star - the color of electricity. Reality is created by electromagnetic Consciousness grids. Ancient aliens from Sirius were allegedly BLUE - their descendants thought of as bluebloods or royalty.
http://www.crystalinks.com/sirius.html
Evidence of Design in Mathematics

Galileo, one of the founders of modern science, said, "The book of nature is written by the hand of God in the language of mathematics." Paul Dirac, one of the leading figures in twentieth century physics, said, "God chose to make the world according to very beautiful mathematics." To any perceptive mind, the mathematical structure of the universe is one of the most compelling evidences of design. Actually, mathematics furnishes four independent lines of evidence.

1. Not only are the basic principles of logic, arithmetic, and algebra true in our universe, but also it is impossible to imagine a universe in which they would not be true. How could there be a universe in which both "A is B" and "A is not B" were true (an example from logic), in which 3 + 5 did not equal 8 (an example from arithmetic), or in which a + b did not equal b + a (an example from algebra)? It would appear that there can be no reality which is not obedient to the basic laws of mathematics. Yet these laws are merely ideas; they have and can have no existence except when they are mentally conceived. Therefore, in the very structure of reality we see evidence of a mind at work. Whose mind if not the mind of God?

2. Even within the constraints of these inviolable laws, you could build a universe in many different ways. Yet, as Dirac said, the blueprint of the universe in which we live is drawn according to very beautiful mathematics. It would not be far-fetched to say that our world is the most mathematical of all possible worlds. In geometry we study the characteristics of space and learn that from a few basic properties of this space we can deduce an elaborate system of informative theorems about geometrical figures: for example, the Pythagorean theorem, c^2 = a^2 + b^2. Perhaps we could imagine a world where this theorem was not true. But it is much more convenient to live in our world, since this theorem gives us a handle on many practical problems. Indeed, modern technology would not be possible except for our ability to find mathematical order wherever we look.

The most pervasive and fundamental relations tend to be very simple. Newton's three laws of motion, for example, can be understood by a child. Throughout physics, the basic equations are not difficult: F = ma, W = Fd, λ = v/f, E = F/q, E = mc^2. What does all this mean? It gives us another proof of the anthropic principle - that the world was evidently made for the sake of man. The mathematical structure of the world makes it easy for man to formulate predictions as to what will happen under stated conditions and on the basis of these predictions to control nature for his own benefit.

Perhaps the most convincing evidence that the world was expressly designed to conform to simple laws that man would readily discover is furnished by the universal law of gravitation: F = Gm1m2/r^2. Notice the exponent 2. Why is it not 1.9999999..., or 4.3785264..., or something else hard to use in computations? Yet research has been able to specify the exponent as far as the first six digits, giving 2.00000. Thus, so far as we can tell, the exponent is exactly 2. Coulomb's law of electric force is similar: F = kq1q2/r^2. In this case, research has established that the exponent is no different from exactly 2 as far as the first 17 digits. Would we find such laws in an accidental universe?

3. Mathematics furnishes many examples of elegant relationships based on real-world properties, but having no physical meaning or practical value in themselves.
The only plausible explanation for such relationships is that God created the world so that its mathematical structure would be a passageway to a much larger structure of abstract mathematics. Why did He adjoin this larger structure to the mathematics of the real world? Because it is His nature to express Himself in things of beauty, and abstract mathematics is a grand symphony, an epic poem, a rich tapestry intelligible to those who are most diligent in thinking God's thoughts after Him. Abstract mathematics is a most puzzling feature of reality if we do not see it as the handiwork of an infinitely clever mind. Let me give you an example of a relationship discoverable only by abstract math. Never could this be derived from study of the physical universe.

In math, three numbers are so important that they are named by letters:

π: the ratio of the circumference of a circle to its diameter.

e: the number such that ∫(e^x)dx = e^x + c. In other words, if we were to graph the exponential function e^x, the difference between the values of the function at two points x1 and x2 would equal the area under the curve between those two points.

√-1: since -1 has no real square root, its square root is called i, which means "imaginary."

Now watch. When any budding mathematician comes to this equation in the course of his mathematical education, his mouth drops open in sheer wonder and admiration.

e^(iπ) = -1

This equation, a special case of Euler's formula, has been called the most beautiful equation in mathematics. The question raised by this equation is obvious. Though both π and e are concepts well grounded in the real world, their real-world meanings seem totally independent, and i has no real-world meaning whatever. How then can we account for their simple relationship except by invoking a divine mathematician? In this simple equation we perceive the existence of God.

4. Yet we would never perceive the mathematical structure of the universe unless our minds had a knack for mathematics. It is fairly easy for us to grasp the first principles of math and science. These principles are ideas. That is, they are not directly observable in the world about us, nor are they synonymous with any sequence of biochemical events in the brain. Ideas are transphysical. Therefore, the mind which apprehends them cannot be physical in nature. It must belong to another realm, a realm we describe as the realm of the soul. Therefore, man's capacity for mathematics and, more generally, his ability to think are impossible outcomes of organic evolution. His intelligence is the crowning evidence of purpose and design in the universe.

In summary, mathematics furnishes evidence of design in four ways:
- All reality must be obedient to laws which are merely ideas.
- The structure of our universe is mathematical throughout, the most fundamental principles being exceedingly simple.
- The mathematics of the real world is a bridge to a much larger realm of abstract mathematics.
- Human beings are capable of mathematical thought.
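The Euler identity quoted above is easy to confirm numerically. The snippet below is simply an illustrative check using complex arithmetic; the tiny leftover imaginary part is floating-point rounding, not a flaw in the identity.

    import cmath

    z = cmath.exp(1j * cmath.pi)      # e raised to the power i*pi
    print(z)                          # (-1+1.2246467991473532e-16j)
    print(abs(z - (-1)) < 1e-12)      # True: equal to -1 up to rounding error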
Evidence of Design in Natural Law

One remarkable feature of the natural world is that all of its phenomena obey relatively simple laws. The scientific enterprise exists because man has discovered that wherever he probes nature, he finds laws shaping its operation. If all natural events have always been lawful, we must presume that the laws came first. How could it be otherwise? How could the whole world of nature have ever precisely obeyed laws that did not yet exist? But where did they exist?

A law is simply an idea, and an idea exists only in someone's mind. Since there is no mind in nature, nature itself has no intelligence of the laws which govern it. Modern science takes it for granted that the universe has always danced to rhythms it cannot hear, but still assigns power of motion to the dancers themselves. How is that possible? The power to make things happen in obedience to universal laws cannot reside in anything ignorant of these laws. Would it be more reasonable to suppose that this power resides in the laws themselves? Of course not. Ideas have no intrinsic power. They affect events only as they direct the will of a thinking person. Only a thinking person has the power to make things happen. Since natural events were lawful before man ever conceived of natural laws, the thinking person responsible for the orderly operation of the universe must be a higher Being, a Being we know as God.

© 2007, 2012 Stanley Edgar Rickard (Ed Rickard, the author). All rights reserved.
http://www.themoorings.org/apologetics/theisticarg/teleoarg/teleo2.html
Elementary Human Genetics
The Central Asian Gene Pool
The Karakalpak Gene Pool
Discussion and Conclusions

Elementary Human Genetics

Every human is defined by his or her library of genetic material, copies of which are stored in every cell of the body apart from the red blood cells. Cells are classified as somatic, meaning body cells, or gametic, the cells involved in reproduction, namely the sperm and the egg or ovum. The overwhelming majority of human genetic material is located within the small nucleus at the heart of each somatic cell. It is commonly referred to as the human genome. Within the nucleus it is distributed between 46 separate chromosomes, two of which are known as the sex chromosomes. The latter occur in two forms, designated X and Y. Chromosomes are generally arranged in pairs - a female has 22 pairs of autosome chromosomes plus one pair of X chromosomes, while a male has a similar arrangement apart from having a mixed pair of X and Y sex chromosomes.

A neutron crystallography cross-sectional image of a chromosome, showing the double strand of DNA wound around a protein core. Image courtesy of the US Department of Energy Genomics Program

A single chromosome consists of just one DNA macromolecule composed of two separate DNA strands, each of which contains a different but complementary sequence of four different nucleotide bases - adenine (A), thymine (T), cytosine (C), and guanine (G). The two strands are aligned in the form of a double helix held together by hydrogen bonds, adenine always linking with thymine and cytosine always linking with guanine. Each such linkage between strands is known as a base pair. The total human genome contains about 3 billion such base pairs. As such it is an incredibly long molecule that could be from 3 cm to 6 cm long were it possible to straighten it. In reality the double helix is coiled around a core of structural proteins and this is then supercoiled to create the chromosome, 23 pairs of which reside within a cell nucleus with a diameter of just 0.0005 cm.

A gene is a segment of the DNA nucleotide sequence within the chromosome that can be chemically read to make one specific protein. Each gene is located at a certain point along the DNA strand, known as its locus. The 22 autosome chromosome pairs vary in size from 263 million base pairs in chromosome 1 (the longest) down to about 47 million base pairs in chromosome 21 (the shortest - chromosome 22 is the second shortest with 50 million base pairs), equivalent to from 3,000 down to 300 genes. The two sex chromosomes are also very different, X having about 140 million base pairs and expressing 1,100 genes, Y having only 23 million base pairs and expressing a mere 78 genes. The total number of genes in the human genome is around 30,000.

A complete set of 23 human homologous chromosome pairs. Image courtesy of the National Human Genome Research Institute, Maryland

Each specific pair of chromosomes has its own distinct characteristics and can be identified under the microscope after staining with a dye and observing the resulting banding. With one exception the chromosome pairs are called homologous because they have the same length and the same sequence of genes. For example the 9th pair always contains the genes for melanin production and for ABO blood type, while the 14th pair has two genes critical to the body's immune response. Even so the individual chromosomes within each matching pair are not identical since each one is inherited from each parent.
A certain gene at a particular locus in one chromosome may differ from the corresponding gene in the other chromosome, one being dominant and the other recessive. The one exception relates to the male sex chromosomes, a combination of X and Y, which are not the same length and are therefore not homologous.

A set of male human chromosomes showing typical banding

Various forms of the same gene (or of some other DNA sequence within the chromosome) are known as alleles. Differences in DNA sequences at a specific chromosome locus are known as genetic polymorphisms. They can be categorized into various types, the most simple being the difference in just a single nucleotide - a single nucleotide polymorphism.

When a normal somatic cell divides and replicates, the 23 homologous chromosome pairs (the genome) are duplicated through a complex process known as mitosis. The two strands of DNA within each chromosome unravel and unzip themselves in order to replicate, eventually producing a pair of sister chromatids - two brand new copies of the original single chromosome joined together. However because the two chromosomes within each homologous pair are slightly different (one being inherited from each parent) the two sister chromatids are divided in two. The two halves of each sister chromatid are allocated to each daughter cell, thus replicating the original homologous chromosome pair. Such cells are called diploid because they contain two (slightly different) sets of genetic information.

The production of gametic cells involves a quite different process. Sperm and eggs are called haploid cells, meaning single, because they contain only one set of genetic information - 22 single unpaired chromosomes and one sex chromosome. They are formed through another complex process known as meiosis. It involves a deliberate reshuffling of the parental genome in order to increase the genetic diversity within the resulting sperm or egg cells and consequently among any resulting offspring. As before each chromosome pair is replicated in the form of a pair of sister chromatids. This time however, each half of each chromatid embraces its opposite neighbour in a process called synapsis. An average of two or three segments of maternal and paternal DNA are randomly exchanged between chromatids by means of molecular rearrangements called crossover and genetic recombination. The new chromatid halves are not paired with their matching partners but are all separated to create four separate haploid cells, each containing one copy of the full set of 23 chromosomes, and each having its own unique random mix of maternal and paternal DNA. In the male adult this process forms four separate sperm cells, but in the female only one of the four cells becomes an ovum, the other three forming small polar bodies that progressively decay.

During fertilization the two haploid cells - the sperm and the ovum or egg - interact to form a diploid zygote (zyg meaning symmetrically arranged in pairs). In fact the only contribution that the sperm makes to the zygote is its haploid nucleus containing its set of 23 chromosomes. The sex of the offspring is determined by the sex chromosome within the sperm, which can be either X (female) or Y (male). Clearly the sex chromosome within the ovum has to be X. The X and the Y chromosomes are very different, the Y being only one third the size of the X. During meiosis in the male, the X chromosome recombines and exchanges DNA with the Y only at its ends.
Most of the Y chromosome is therefore unaffected by crossover and recombination. This section is known as the non-recombining part of the Y chromosome and it is passed down the male line from father to son relatively unchanged.

Scanning electron micrograph of an X and Y chromosome. Image courtesy of Indigo Instruments, Canada

Not all of the material within the human cell resides inside the nucleus. Both egg and sperm cells contain small energy-producing organelles within the cytoplasm called mitochondria that have their own genetic material for making several essential mitochondrial proteins. However the DNA content is tiny in comparison with that in the cell nucleus - it consists of several rings of DNA totalling about 16,500 base pairs, equivalent to just 13 genes. The genetic material in the nucleus is about 300,000 times larger. When additional mitochondria are produced inside the cell, the mitochondrial DNA is replicated and copies are transferred to the new mitochondria.

The reason why mitochondrial DNA, mtDNA for short, is important is because during fertilization virtually no mitochondria from the male cell enter the egg, and those that do are tagged and destroyed. Consequently the offspring only inherit the female mitochondria. mtDNA is therefore inherited through the female line.

Population genetics is a branch of mathematics that attempts to link changes in the overall history of a population to changes in its genetic structure, a population being a group of interbreeding individuals of the same species sharing a common geographical area. By analysing the nature and diversity of DNA within and between different populations we can gain insights into their separate evolution and the extent to which they are or are not related to each other. We can gain insights into a population's level of reproductive isolation, the minimum time since it was founded, how marriage partners were selected, past geographical expansions, migrations, and mixings.

The science is based upon the property of the DNA molecule to occasionally randomly mutate during replication, creating the possibility that the sequence of nucleotides in the DNA of one generation may differ slightly in the following generation. The consequence of this is that individuals within a homogenous population will in time develop different DNA sequences, the characteristic that we have already identified as genetic polymorphism. Because mutations are random, two identical but isolated populations will tend to change in different directions over time. This property is known as random genetic drift and its effect is greater in smaller populations.

To study genetic polymorphisms, geneticists look for specific genetic markers. These are clearly recognizable mutations in the DNA whose frequency of incidence varies widely across populations from different geographical areas. In reality the vast majority of human genetic sequences are identical, only around 0.1% of them being affected by polymorphisms.

There are several types of genetic marker. The simplest are single nucleotide polymorphisms (SNPs), mentioned above, where just one nucleotide has been replaced with another (for example A replaces T or C replaces G). SNPs in combination along a stretch of DNA are called haplotypes, shorthand for haploid genotypes. These have turned out to be valuable markers because they are genetically relatively stable and are found at differing frequencies in many populations.
Some are obviously evolutionarily related to each other and can be classified into haplogroups (Hg).

Another type of polymorphism is where short strands of DNA have been randomly inserted into the genetic DNA. This results in so-called biallelic polymorphism, since the strand is either present or absent. These are useful markers because the individuals that have the mutant insert can be traced back to a single common ancestor, while those who do not have the insert represent the original ancestral state. Biallelic polymorphisms can be assigned to certain haplotypes.

A final type of marker is based upon microsatellites, very short sequences of nucleotides, such as GATA, that are repeated in tandem numerous times. A polymorphism occurs if the number of repetitions increases or decreases. Microsatellite polymorphisms, sometimes also called short-tandem-repeat polymorphisms, occur more frequently over time, providing a different tool to study the rate of genetic change against time.

Of course the whole purpose of sexual reproduction is to deliberately scramble the DNA from both parents in order to create a brand new set of chromosome pairs for their offspring that are not just copies of the parental chromosomes. Studies show that about 85% of genetic variation in autosomal sequences occurs within rather than between populations. However it is the genetic variation between populations that is of the greatest interest when we wish to study their history. Because of this, population geneticists look for more stable pieces of DNA that are not disrupted by reproduction. These are of two radically different types, namely the non-recombining part of the Y chromosome and the mitochondrial DNA or mtDNA. A much higher 40% of the variations in the Y chromosome and 30% of the variations in mtDNA are found between populations. Each provides a different perspective on the genetic evolution of a particular population.

Y Chromosome Polymorphisms

By definition the Y chromosome is only carried by the male line. Although smaller than the other chromosomes, the Y chromosome is still enormous compared to the mtDNA. The reason that it carries so few genes is because most of it is composed of "junk" DNA. As such it is relatively unaffected by natural selection. The non-recombining part of the Y chromosome is passed on from father to son with little change apart from the introduction of genetic polymorphisms as a result of random mutations. The only problem with using the Y chromosome to study inheritance has been the practical difficulty of identifying a wide range of polymorphisms within it, although the application of special HPLC techniques has overcome some of this limitation in recent years. Y chromosome polymorphisms seem to be more affected by genetic drift and may give a better resolution between closely related populations where the time since their point of divergence has been relatively short.

By contrast the mtDNA is carried by the female line. Although less than one thousandth the size of the DNA in the non-recombinant Y chromosome, polymorphisms are about 10 times more frequent in mtDNA than in autosome chromosomes.

Techniques and Applications

Population genetics is a highly statistical science and different numerical methods can be used to calculate the various properties of one or several populations. Our intention here is to cover the main analytical tools used in the published literature relating to Karakalpak and the other Central Asian populations.
The genetic diversity of a population is the diversity of DNA sequences within its gene pool. It is calculated by a statistical method known as the analysis of molecular variance (AMOVA) in the DNA markers from that population. It is effectively a summation of the frequencies of individual polymorphisms found within the sample, mathematically normalized so that a diversity of 0 implies all the individuals in that population have identical DNA and a diversity of 1 implies that the DNA of every individual is different.

The genetic distance between two populations is a measure of the difference in their polymorphism frequencies. It is calculated statistically by comparing the pairwise differences between the markers identified for each population to the pairwise differences within each of the two populations. This distance is a multi-dimensional, not a linear, measure. However it is normally illustrated graphically in two dimensions. New variables are identified by means of an angular transformation, the first two of which together account for the greatest proportion of the differences between the populations studied.

Another property that can be measured statistically is kinship - the extent to which members of a population are related to each other as a result of a common ancestor. Mathematically, a kinship coefficient is the probability that a randomly sampled sequence of DNA from a randomly selected locus is identical across all members of the same population. A coefficient of 1 implies everyone in the group is related while a coefficient of 0 implies no kinship at all.

By making assumptions about the manner in which genetic mutations occur and their frequency over time it is possible to work backwards and estimate how many generations (and therefore years) have elapsed from the most recent common ancestor, the individual to whom all the current members of the population are related by descent. This individual is not necessarily the founder of the population. For example if we follow the descent of the Y chromosome, this can only be passed down the male line from father to son. If a male has no sons his non-recombining Y chromosome DNA is eliminated from his population for ever more. Over time, therefore, the Y chromosomes of the population's ancestors will be progressively lost. There may well have been ancestors older than the most recent common ancestor, even though we can find no signs for those ancestors in the Y chromosome DNA of the current population. A similar situation arises with mtDNA in the female half of the population because some women do not have daughters.

In 1977 the American anthropologist Gordon T. Bowles published an analysis of the anthropometric characteristics of 519 different populations from across Asia, including the Karakalpaks and two regional groups of Uzbeks. Populations were characterized by 9 standard measurements, including stature and various dimensions of the head and face. A multivariate analysis was used to separate the different populations by their physical features. Bowles categorized the populations across four regions of Asia (West, North, East, and South) into 19 geographical groups. He then analysed the biological distances between the populations within each group to identify clusters of biologically similar peoples. Central Asia was divided into Group XVII encompassing Mongolia, Singkiang, and Kazakhstan and Group XVIII encompassing Turkestan and Tajikistan.
Each Group was found to contain three population clusters:

Anthropological Cluster Analysis of Central Asia

Group XVII
  Cluster 1: Eastern Qazaqs, Alai Valley Kyrgyz
  Cluster 2: Aksu Rayon Uighur, Alma Ata Uighur
  Alma Ata Qazaqs, T'ien Shan Kyrgyz
  Total Turkmen

Within geographical Group XVIII, the Karakalpaks clustered with the Uzbeks of Tashkent and the Uzbeks of Samarkand. The members of this first cluster were much more heterogeneous than the other two clusters of neighbouring peoples. Conversely the Turkmen cluster had the lowest variance of any of the clusters in the North Asia region, showing that different Turkmen populations are closely related.

The results of this study were re-presented by Cavalli-Sforza in a more readily understandable graphical form. The coordinates used are artificial mathematical transformations of the original 9 morphological measurements, designed to identify the distances between different populations in a simple two-dimensional format. The first two principal coordinates identify a clear division between the Uzbek/Karakalpaks, and the Turkmen and Iranians, but show similarities between the Uzbek/Karakalpaks and the Tajiks, and also with the western Siberians. Though not so close, there are some similarities between the Uzbek/Karakalpaks and the Qazaqs, Kyrgyz, and Mongols:

Physical Anthropology of Asia, redrawn by David Richardson after Bowles 1977: First and Second Principal Coordinates

The second and third principal coordinates maintain the similarity between Uzbek/Karakalpaks and Tajiks but emphasize the more eastern features of the Qazaqs, Kyrgyz, and Mongols:

Physical Anthropology of Asia, redrawn by David Richardson after Bowles 1977: Second and Third Principal Coordinates

The basic average morphology of the Uzbeks and Karakalpaks shows them to be of medium stature, with heads that have an average length but an above average breadth compared to the other populations of Asia. Their faces are broad and are of maximum height. Their noses are of average width but have the maximum length found in Asia. Qazaqs have the same stature but have longer and broader heads. Their faces are shorter but broader, having the maximum breadth found in Asia, while their noses too are shorter and slightly broader.

Some of these differences in features were noted by some of the early Russian visitors, such as N. N. Karazin, who observed the differences between the Karakalpaks and the Qazaqs (who at that time were called Kirghiz) when he first entered the northern Aral delta: "In terms of type, the Karakalpak people themselves differ noticeably from the Kirghizs: flattened Mongolian noses are already a rarity here, cheek-bones do not stand out so, beards and eyebrows are considerably thicker - there is a noticeably strong predominance of the Turkish race."

The Central Asian Gene Pool

Western researchers tended to under-represent Central Asian populations in many of the earlier studies of population genetics.

Cavalli-Sforza, Menozzi, and Piazza, 1994

In 1994 Cavalli-Sforza and two of his colleagues published a landmark study of the worldwide geographic distribution of human genes. In order to make global comparisons the study was forced to rely upon the most commonly available genetic markers, and analysed classical polymorphisms based on blood groups, plasma proteins, and red cell enzymes. Sadly no information was included for Karakalpaks or Qazaqs. Results were analysed continent by continent.
The Central Asian Gene Pool

Western researchers tended to under-represent Central Asian populations in many of the earlier studies of population genetics.

Cavalli-Sforza, Menozzi, and Piazza, 1994

In 1994 Cavalli-Sforza and two of his colleagues published a landmark study of the worldwide geographic distribution of human genes. In order to make global comparisons the study was forced to rely upon the most commonly available genetic markers, and analysed classical polymorphisms based on blood groups, plasma proteins, and red cell enzymes. Sadly no information was included for Karakalpaks or Qazaqs. Results were analysed continent by continent. The results for the different populations of Asia grouped the Uzbeks, Turkmen, and western Turks into a central cluster, located on the borderline between the Caucasian populations of the west and south and the populations of Northeast Asia and East Asia:

Principal Component Analysis of Asian Populations, redrawn by David Richardson after Cavalli-Sforza et al, 1994

Comas, Calafell, Pérez-Lezaun et al, 1998

In 1993-94 another European team collected DNA samples from four different populations close to the Altai: Qazaq highlanders living close to Almaty, Uighur lowlanders in the same region, and two Kyrgyz communities - one in the southern highlands, the other in the northern lowlands of Kyrgyzstan. The data was used in two studies, published in 1998 and 1999. In the first, by Comas et al, mtDNA polymorphisms in these four communities were compared with those of other Eurasian populations in the west (Europe, the Middle East, and Turkey), the centre (the Altai), and the east (Mongolia, China, and Korea). The four Central Asian populations all showed high levels of sequence diversity - in some cases the highest in Eurasia. At the same time they were tightly clustered together, almost exactly halfway between the western and the eastern populations, the exception being that the Mongolians occupied a position close to this central cluster. The results suggested that the Central Asian gene pool was an admixture of the western and eastern gene pools, formed after the western and eastern Eurasians had diverged. The authors suggested that this diversity had possibly been enhanced by human interaction along the Silk Road. In the second, by Pérez-Lezaun et al, short-tandem-repeat polymorphisms in the Y chromosome were analysed for the four Central Asian populations alone. Each of the four was found to be highly heterogeneous yet very different from the other three, the latter finding appearing to contradict the mtDNA results. However the two highland groups had less genetic diversity because each had very high frequencies of one specific polymorphism:

Y chromosome haplotype frequencies, with labels given to those shared by more than one population. From Pérez-Lezaun et al, 1999.

The researchers resolved the apparent contradiction between the two studies in terms of different migration patterns for men and women. All four groups practised a combination of exogamy and patrilocal marriage - in other words couples within the same clan could not marry and brides always moved from their own village to the village of the groom. Consequently the males, and their genes, were isolated and localized, while the females were mobile and there were more similarities in their genes. The high incidence of a single marker in each highland community was presumed to be a founder effect, supported by evidence that the highland Qazaq community had only been established by lowland Qazaqs a few hundred years ago.
Following this, microsatellite variations were typed in order to define more detailed haplotypes. Haplogroup frequencies were calculated for each population and were illustrated by means of the following chart:

Haplogroup frequencies across Central Asia. From Zerjal et al, 2002.

Many of the same haplogroups occurred across the 5,000 km expanse of Central Asia, although with large variations in frequency and with no obvious overall pattern. Haplogroups 1, 2, 3, 9, and 26 accounted for about 70% of the total sample. Haplogroups (Hg) 1 and 3 were common in almost all populations, but the highest frequencies of Hg1 were found in Turkmen and Armenians, while the highest frequencies of Hg3 were found in Kyrgyz and Tajiks. Hg3 was more frequent in the eastern populations, but was only present at 3% in the Qazaqs. Hg3 is the equivalent of M17, which seems to originate from Russia and the Ukraine, a region not covered by this survey - see Spencer Wells et al, 2001 below. Hg9 was very frequent in the Middle East and declined in importance across Central Asia from west to east. However some eastern populations had a higher frequency - the Uzbeks, Uighurs, and Dungans. Hg10 and its derivative Hg36 showed the opposite pattern, together accounting for 54% of haplogroups for the Mongolians and 73% for the Qazaqs. Hg26, which is most frequently found in Southeast Asia, occurs with the highest frequencies among the Dungans (26%), Uighurs (15%), Mongolians (13%), and Qazaqs (13%) in eastern Central Asia. Hg12 and Hg16 are widespread in Siberia and northern Eurasia but are rare in Central Asia except for the Turkmen and Mongolians. Hg21 was restricted to the Caucasus region. The most obvious observation is that virtually every population is quite distinct. As an example, the Uzbeks are quite different from the Turkmen, Qazaqs, or Mongolians. Only two populations, the Kyrgyz from central Kyrgyzstan and the Tajiks from Pendjikent, show any close similarity to one another. The researchers measured the genetic diversity of each population using both haplogroup and microsatellite frequencies. Within Central Asia, the Uzbeks, Uighurs, Dungans, and Mongolians exhibited high genetic diversity, while the Qazaqs, Kyrgyz, Tajiks, and Turkmen showed low genetic diversity. These differences were explored by examining the haplotype variation within each haplogroup for each population. Among the Uzbeks, for example, many different haplotypes are widely dispersed across all chromosomes. Among the Qazaqs, however, the majority of the haplotypes are clustered together and many chromosomes share the same or related haplotypes. Low diversity coupled with high frequencies of population-specific haplotype clusters is typical of populations that have experienced a bottleneck or a founder event. The most recent common ancestor of the Tajik population was estimated to date from the early part of the 1st millennium AD, while the most recent common ancestors of the Qazaq and Kyrgyz populations were placed in the period 1200 to 1500 AD. The authors suggested that bottlenecks might be a feature of societies like the Qazaqs and Kyrgyz with small, widely dispersed nomadic groups, especially if they had suffered massacres during the Mongol invasion. Of course these calculations have broad confidence intervals and must be interpreted with caution. Microsatellite haplotype frequencies were used to investigate the genetic distances among the separate populations.
The best two-dimensional fit produces a picture with no signs of general clustering on the basis of either geography or linguistics:

Genetic distances based on microsatellite haplotypes. From Zerjal et al, 2002.

The Kyrgyz (ethnically Turkic) do cluster next to the Tajiks (supposedly of Indo-Iranian origin), but both are well separated from the neighbouring Qazaqs. The Turkmen, Qazaqs, and Georgians tend to be isolated from the other groups, leaving the Uzbeks in a somewhat central position, clustered with the Uighurs and Dungans. The authors attempted to interpret the results of their study in terms of the known history of the region. The apparent underlying gradation in haplogroup frequencies from west to east was put down to the eastward agricultural expansion out of the Middle East during the Neolithic, some of the haplogroup markers involved being more recent than the Palaeolithic. Meanwhile Hg3 (equivalent to M17 and Eu19), which is widespread in Central Asia, was attributed to the migration of the pastoral Indo-Iranian "kurgan culture" eastwards from the Ukraine in the late 3rd/early 2nd millennium BC. The mountainous Caucasus region seems to have been bypassed by this migration, which appears to have extended across Central Asia as far as the borders of Siberia and China. Later events also appear to have left their mark. The presence of a high number of low-frequency haplotypes in Central Asian populations was associated with the spread of Middle Eastern genes, either through merchants associated with the early Silk Route or the later spread of Islam. Uighurs and Dungans show a relatively high Middle Eastern admixture, including higher frequencies of Hg9, which might indicate that their ancestors migrated from the Middle East to China before moving into Central Asia. High frequencies of Hg10 and its derivative Hg36 are found in the majority of Altaic-speaking populations, especially the Qazaqs, but also the Uzbeks and Kyrgyz. Yet their contribution west of Uzbekistan is low or undetectable. This feature is associated with the progressive migrations of nomadic groups from the east, from the Hsiung-Nu to the Huns, the Turks, and the Mongols. Of course Central Asians have not only absorbed immigrants from elsewhere but have undergone expansions, colonizations, and migrations of their own, contributing their DNA to surrounding populations. Hg1, the equivalent of M45 and its derivative markers, is believed to have originated in Central Asia and is found throughout the Caucasus and in Mongolia.
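The two-dimensional fits of genetic distance discussed in this section are typically produced by multidimensional scaling of the matrix of pairwise distances between populations. A minimal sketch, assuming an invented distance matrix for five unnamed populations rather than the values published by Zerjal et al (the function name and figures are illustrative only):

```python
import numpy as np

def classical_mds(d, k=2):
    """Classical (Torgerson) MDS: place points in k dimensions so that their
    Euclidean distances approximate the supplied distance matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n            # centring matrix
    b = -0.5 * j @ (d ** 2) @ j                    # double-centred squared distances
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:k]             # keep the largest eigenvalues
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0.0, None))

# Invented, symmetric matrix of pairwise genetic distances between five populations.
labels = ["Pop A", "Pop B", "Pop C", "Pop D", "Pop E"]
d = np.array([[0.00, 0.10, 0.40, 0.45, 0.50],
              [0.10, 0.00, 0.38, 0.42, 0.48],
              [0.40, 0.38, 0.00, 0.15, 0.20],
              [0.45, 0.42, 0.15, 0.00, 0.12],
              [0.50, 0.48, 0.20, 0.12, 0.00]])

for label, (x, y) in zip(labels, classical_mds(d)):
    print(f"{label}: ({x:+.2f}, {y:+.2f})")
```

Populations separated by small genetic distances end up plotted close together, which is all that charts of this kind are intended to convey.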
The Qazaqs and Kyrgyz had a significantly lower diversity. This diversity is obvious from the chart comparing haplotype frequencies across Eurasia:

Distribution of Y chromosome haplotype lineages across various Eurasian populations. From Spencer Wells et al, 2001.

Uzbeks have a fairly balanced haplotype profile, while populations in the extreme west and east are dominated by one specific haplotype lineage - the M173 lineage in the extreme west and the M9 lineage in the extreme east and Siberia. The Karakalpaks are remarkably similar to the Uzbeks:

Distribution of Y chromosome haplotype lineages in Uzbeks and Karakalpaks. From Spencer Wells et al, 2001.

The main differences are that Karakalpaks have a higher frequency of M9 and M130 and a lower frequency of M17 and M89 haplotype lineages. M9 is strongly linked to Chinese and other far-eastern peoples, while M130 is associated with Mongolians and Qazaqs. On the other hand, M17 is strong in Russia, the Ukraine, and the Czech and Slovak Republics as well as in Kyrgyz populations, while M89 has a higher frequency in the west. It seems that compared to Uzbeks, the Karakalpak gene pool has a somewhat higher frequency of haplotypes that are associated with eastern as opposed to western Eurasian populations. In fact the differences between Karakalpaks and Uzbeks are no more pronounced than those between the Uzbeks themselves. Haplotype frequencies for the Karakalpaks tend to be within the ranges measured across the different Uzbek populations:

Comparison of Karakalpak haplotype lineage frequencies to other ethnic groups in Central Asia
| || M130 || M89 || M9 || M45 || M173 || M17 || Total |

Statistically Karakalpaks are genetically closest to the Uzbeks from Ferghana, followed by those from Surkhandarya, Samarkand, and finally Khorezm. They are furthest from the Uzbeks of Bukhara, Tashkent, and Kashkadarya. These results also show the distance between the Karakalpaks and the other peoples of Central Asia and its neighbouring regions. Next to the Uzbeks, the Karakalpaks are genetically closest to the Tatars and Uighurs. However they are quite distant from the Turkmen, Qazaqs, Kyrgyz, Siberians, and Iranians. The researchers produced a "neighbour-joining" tree, which clustered the studied populations into eight categories according to the genetic distances between them. The Karakalpaks were classified into cluster VIII along with Uzbeks, Tatars, and Uighurs - the populations with the highest genetic diversity. They appear sandwiched between the peoples of Russia and the Ukraine and the Mongolians and Qazaqs.

Neighbour-joining tree of 61 Eurasian Populations. Karakalpaks are included in cluster VIII along with Uzbeks, Tatars, and Uighurs. From Spencer Wells et al, 2001.

Spencer Wells and his colleagues did not attempt to explain why the Karakalpak gene pool is similar to that of the Uzbeks but different from that of the Qazaqs, a surprising finding given that the Karakalpaks lived in the same region as the Qazaqs of the Lesser Horde before migrating into Khorezm. Instead they suggested that the high diversity in Central Asia might indicate that its population is among the oldest in Eurasia. M45 is the ancestor of haplotype M173, the predominant group found in Western Europe, and is thought to have arisen in Central Asia about 40,000 years ago. M173 arose about 30,000 years ago, just as modern humans began their migration from Central Asia into Europe during the Upper Palaeolithic.
M17 (also known as the Eu19 lineage) has its origins in eastern Europe and the Ukraine and may have been initially introduced into Central Asia following the last Ice Age and re-introduced later by the south-eastern migration of the Indo-Iranian "kurgan" culture.

Comas et al, 2004

At the beginning of 2004 a complementary study was published by David Comas, based on the analysis of mtDNA haplogroups from 12 Central Asian and neighbouring populations, including Karakalpaks, Uzbeks, and Qazaqs. Sample sizes were only 20 per population, dropping to 16 for the Dungans and Uighurs, so that errors in the results for individual populations could be high. The study reconfirmed the high genetic diversity within Central Asian populations. However a high proportion of sequences originated elsewhere, suggesting that the region had experienced "intense gene flow" in the past. The haplogroups were divided into three types according to their origins: West Eurasian, East Asian, and Indian. Populations showed a gradation from the west to the east, with the Karakalpaks occupying the middle ground, half of their haplogroups having a western origin and the other half having an eastern origin. Uzbek populations contained a small Indian component.

Mixture of western and eastern mtDNA haplogroups across Central Asia
| Population || West Eurasian || East Asian || Total |

The researchers found that two of the haplogroups of East Asian origin (D4c and G2a) not only occurred at higher frequencies in Central Asia than in neighbouring populations but appeared in many related but diverse forms. These may have originated as founder mutations some 25,000 to 30,000 years ago, expanded as a result of genetic drift, and subsequently become dispersed into the neighbouring populations. Their incidence was highest in the Qazaqs, and second highest in the Turkmen and Karakalpaks. The majority of the other lineages separate into two types with either a western or an eastern origin. They do not overlap, suggesting that they were already differentiated before they came together in Central Asia. Furthermore the eastern group contains both south-eastern and north-eastern components. One explanation for their admixture in Central Asia is that the region was originally inhabited by Western people, who were then partially replaced by the arrival of Eastern people. There is genetic evidence from archaeological sites in eastern China of a drastic shift, between 2,500 and 2,000 years ago, from a European-like population to the present-day East Asian population. The presence of ancient Central Asian sequences suggests it is more likely that the people of Central Asia are a mixture of two differentiated groups of peoples who originated in west and east Eurasia respectively.

Chaix and Heyer et al, 2004

The most interesting study of Karakalpak DNA so far was published by a team of French workers in the autumn of 2004. It was based on blood samples taken during two separate expeditions to Karakalpakstan in 2001 and 2002, organized with the assistance of IFEAC, the Institut Français d'Etudes sur l'Asie Centrale, based in Tashkent. The samples consisted of males belonging to five different ethnic groups: Qon'ırat Karakalpaks (sample size 53), On To'rt Urıw Karakalpaks (53), Qazaqs (50), Khorezmian Uzbeks (40), and Turkmen (51). The study was based on the analysis of Y chromosome haplotypes from DNA extracted from white blood cells.
In addition to providing samples for DNA analysis, participants were also interviewed to gather information on their paternal lineages and their tribal and clan affiliations. Unfortunately the published results only focused on the genetic relationships between the tribes, clans, and lineages of these five ethnic groups. However before reviewing these important findings it is worth looking at the more general aspects that emerged from the five samples. These were summarized by Professor Evelyne Heyer and Dr R. Chaix at a workshop on languages and genes held in France in 2005, where the results from Karakalpakstan were compared with the results from similar expeditions to Kyrgyzstan, the Bukhara, Samarkand, and Ferghana Valley regions of Uzbekistan, and Tajikistan, as well as with some results published by other research teams. In some cases comparisons were limited by the fact that the genetic analysis of samples from different regions was not always done according to the same protocols. The first outcome was the reconfirmation of the high genetic diversity among Karakalpaks and Uzbeks:

Y Chromosome Diversity across Central Asia
| Population || Region || Sample Size || Diversity |
| Karakalpak On To'rt Urıw || Karakalpakstan || 54 || 0.89 |
| Tajik Kamangaron || Ferghana Valley || 30 || 0.98 |
| Tajik Richtan || Ferghana Valley || 29 || 0.98 |
| Kyrgyz Andijan || Uzbek Ferghana Valley || 46 || 0.82 |
| Kyrgyz Jankatalab || Uzbek Ferghana Valley || 20 || 0.78 |
| Kyrgyz Doboloo || Uzbek Ferghana Valley || 22 || 0.70 |

The high diversities found in Uighur and Tajik communities also agreed with earlier findings. Qon'ırat Karakalpaks had somewhat greater genetic diversity than On To'rt Urıw Karakalpaks. Some of these figures are extremely high. A diversity of zero implies a population where every individual is identical. A diversity of one implies the opposite, the haplotypes of every individual being different. The second, more important, finding concerned the Y chromosome genetic distances among different Central Asian populations. As usual this was presented in two dimensions:

Genetic distances between ethnic populations in Karakalpakstan and the Ferghana Valley. From Chaix and Heyer et al, 2004.

The researchers concluded that Y chromosome genetic distances were strongly correlated with geographic distances. Not only are the Qon'ırat and On To'rt Urıw populations genetically close, both are also close to the neighbouring Khorezmian Uzbeks. Together they give the appearance of a single population that has only relatively recently fragmented into three separate groups. Clearly this situation is mirrored with the two Tajik populations living in the Ferghana Valley and also with two of the three Kyrgyz populations from the same region. Although close to the local Uzbeks, the two Karakalpak populations have a slight bias towards the local Qazaqs. The study of the Y chromosome was repeated for the mitochondrial DNA, to provide a similar picture for the female half of the same populations. The results were compared to other studies conducted on other groups of Central Asians. We have redrawn the chart showing genetic distances among populations, categorizing different ethnic groups by colour to facilitate comparisons:

Genetic distances among ethnic populations in Central Asia. Based on mitochondrial DNA polymorphisms. From Heyer, 2005.

The French team concluded that, in this case, genetic distances were not related to either geographical distances or to linguistics.
However this is not entirely true because there is some general clustering among populations of the same ethnic group, although it is by no means as strong as that observed from the Y chromosome data. The three Karakalpak populations highlighted in red consist of the On To'rt Urıw (far right), the Qon'ırat (centre), and the Karakalpak sample used in the Comas 2004 study (left). The Uzbeks are shown in green and those from Karakalpakstan are the second from the extreme left, the latter being the Uzbeks from Samarkand. A nearby group of Uzbeks from Urgench in Khorezm viloyati appear extreme left. There is therefore some relationship between the mtDNA of the Karakalpak and Uzbek populations of the Aral delta, but it is much weaker than the relationship between their Y chromosome DNA. On the other hand the Qazaqs of Karakalpakstan, the uppermost yellow square, are very closely related to the Karakalpak Qon'ırat according to their mtDNA. These results are similar to those that emerged from the earlier studies of Qazaq, Uighur, and Kyrgyz Y chromosome and mitochondrial DNA. Ethnic Turkic populations are generally exogamous. Consequently the male DNA is relatively isolated and immobile because men traditionally stay in the same village from birth until death. They had to select their wives from other geographic regions and sometimes married women from other ethnic groups. The female DNA within these groups is consequently more diversified. The results suggest that in the delta, some Qon'ırat men have married Qazaq women and/or some Qazaq men have married Qon'ırat women. Let us now turn to the primary focus of the Chaix and Heyer paper. Are the tribes and clans of the Karakalpaks and other ethnic groups living within the Aral delta linked by kinship? Y chromosome polymorphisms were analysed for each separate lineage, clan, tribe, and ethnic group using short tandem repeats. The resulting haplotypes were used to calculate a kinship coefficient at each respective level. Within the two Karakalpak samples the Qon'ırat were all Shu'llik and came from several clans, only three of which permitted the computation of kinship: the Qoldawlı, Qıyat, and Ashamaylı clans. However none of these clans had recognized lineages. The Khorezmian Uzbeks have also long ago abandoned their tradition of preserving genealogical lineages. The On To'rt Urıw were composed of four tribes, four clans, and four lineages:
- Qıtay tribe
- Qıpshaq tribe, Basar clan
- Keneges tribe, Omır and No'kis clans
- Man'g'ıt tribe, Qarasıraq clan
The Qazaq and the Turkmen groups were also structured along tribal, clan, and lineage lines. The results of the study showed that lineages, where they were still maintained, exhibited high levels of kinship, the On To'rt Urıw having by far the highest. People belonging to the same lineage were therefore significantly more related to each other than people selected at random from the overall global population. Put another way, they share a common ancestor who is far more recent than the common ancestor for the population as a whole:

Kinship coefficients for five different ethnic populations, including the Qon'ırat and the On To'rt Urıw. From Chaix and Heyer et al, 2004.

The kinship coefficients at the clan level were lower, but were still significant in three groups - the Karakalpak Qon'ırat, the Qazaqs, and the Turkmen. However for the Karakalpak On To'rt Urıw and the Uzbeks, men from the same clan were only fractionally more related to each other than were men selected randomly from the population at large.
When we reach the tribal level we find that the men in all five ethnic groups show no genetic kinship whatsoever. In these societies the male members of some but not all tribal clans are partially related to varying degrees, in the sense that they are the descendants of a common male ancestor. Depending on the clan concerned this kinship can be strong, weak, or non-existent. However the members of different clans within the same tribe show no such interrelationship at all. In other words, tribes are conglomerations of clans that have no genetic links with each other apart from those occurring between randomly chosen populations. It suggests that such tribes were formed politically, as confederations of unrelated clans, and not organically as a result of the expansion and sub-division of an initially genetically homogeneous extended family group. By assuming a constant rate of genetic mutation over time and a generation time of 30 years, the researchers were able to calculate the number of generations (and therefore years) that have elapsed since the existence of the single common ancestor. This was essentially the minimum age of the descent group and was computed for each lineage and clan. However the estimated ages computed were very high. For example, the age of the Qon'ırat clans was estimated at about 460 generations or 14,000 years (late Ice Age), while the age of the On To'rt Urıw lineages was estimated at around 200 generations or 6,000 years (early Neolithic). Clearly these results are ridiculous. The explanation is that each group included immigrants or outsiders who were clearly unrelated to the core population. The calculation was therefore modified, restricting the sample to those individuals who belonged to the modal haplogroup of the descent group. This excluded about 17% of the men in the initial sample. Results were excluded for those descent groups that contained fewer than three individuals:

| Descent Group || Population || Number of generations || Age in years || 95% Confidence interval |
| Clans || Qon'ırat || 35 || 1,058 || 454 - 3,704 |
| Clans || Qazaq || 20 || 595 || 255 - 2,083 |
| Clans || Turkmen || 102 || 3,051 || 1,307 - 10,677 |
| Lineages || On To'rt Urıw || 13 || 397 || 170 - 1,389 |
| Lineages || || || 415 || 178 - 1,451 |
| Lineages || || || 516 || 221 - 1,806 |

The age of the On To'rt Urıw and other lineages averaged about 15 generations, equivalent to about 400 to 500 years. The age of the clans varied more widely, from 20 generations for the Qazaqs, to 35 generations for the Qon'ırat, and to 102 generations for the Turkmen. This dates the oldest common ancestor of the Qazaq and Qon'ırat clans to a time some 600 to 1,200 years ago. However the common ancestor of the Turkmen clans is some 3,000 years old. The high age of the Turkmen clans was the result of the occurrence of a significantly mutated haplotype within the modal haplogroup. It was difficult to judge whether these individuals were genuinely related to the other clan members or were themselves recent immigrants. These figures must be interpreted with considerable caution. Clearly the age of a clan's common ancestor is not the same as the age of the clan itself, since that ancestor may have had ancestors of his own, whose lines of descent have become extinct over time. The calculated ages therefore give us a minimum limit for the age of the clan, not its actual age. In reality, however, the uncertainty in the assumed rate of genetic mutation gives rise to extremely wide 95% confidence intervals. The knowledge that certain Karakalpak Qon'ırat clans are most likely older than a time ranging from 450 to 3,700 years is of little practical use to us.
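The dating logic can be illustrated with a toy calculation. One common approach - used here only as a stand-in for the authors' actual method - is to measure the average squared difference in repeat counts between each man's Y-STR haplotype and the modal haplotype of his descent group, and to divide by an assumed mutation rate. The haplotypes and mutation rate below are illustrative assumptions; only the 30-year generation time comes from the study.

```python
import numpy as np

# Invented Y-STR repeat counts for men of one clan (rows = men, columns = loci).
haplotypes = np.array([[14, 12, 23, 10, 11, 13],
                       [14, 12, 23, 10, 11, 13],
                       [14, 13, 23, 10, 11, 13],
                       [15, 12, 23, 10, 12, 13],
                       [14, 12, 24, 10, 11, 13]])

MUTATION_RATE = 0.002      # assumed mutations per locus per generation
GENERATION_YEARS = 30      # generation time used in the study

# Average squared distance (ASD) from the modal haplotype; under a stepwise
# mutation model ASD grows roughly in proportion to mutation rate x generations.
modal = np.array([np.bincount(column).argmax() for column in haplotypes.T])
asd = ((haplotypes - modal) ** 2).mean()

generations = asd / MUTATION_RATE
print(f"roughly {generations:.0f} generations, "
      f"or about {generations * GENERATION_YEARS:.0f} years, to the common ancestor")
```

Because the assumed mutation rate enters the estimate directly, a modest error in that rate shifts the result by centuries, which is one reason why the published confidence intervals are so wide.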
Clearly more accurate models are required.

Chaix, R.; Quintana-Murci, L.; Hegay, T.; Hammer, M. F.; Mobasher, Z.; Austerlitz, F.; and Heyer, E., 2007

The latest analysis of Karakalpak DNA comes from a study examining the genetic differences between various pastoral and farming populations in Central Asia. In this region these two fundamentally different economies are organized according to quite separate social traditions:
- pastoral populations are classified into what their members claim to be descent groups (tribes, clans, and lineages), practise exogamous marriage (where men must marry women from clans that are different to their own), and are organized on a patrilineal basis (children being affiliated to the descent group of the father, not the mother).
- farmer populations are organized into nuclear and extended families rather than tribes and often practise endogamous marriage (where men marry women from within the same clan, often their cousins).
The study aims to identify differences in the genetic diversity of the two groups as a result of these two different lifestyles. It examines the genetic diversity of:
- maternally inherited mitochondrial DNA in 12 pastoral and 9 farmer populations, and
- paternally inherited Y chromosomes in 11 pastoral and 7 farmer populations.
The diversity of mtDNA was examined by investigating one of two short segments, known as hypervariable segment number 1 or HVS-1. This and HVS-2 have been found to contain the highest density of neutral polymorphic variations between individuals. The diversity of the Y chromosome was examined by investigating 6 short tandem repeats (STRs) in the non-recombining region of the chromosome. This particular study sampled mtDNA from 5 different populations from Karakalpakstan: On To'rt Urıw Karakalpaks, Qon'ırat Karakalpaks, Qazaqs, Turkmen, and Uzbeks. Samples collected as part of other earlier studies were used to provide mtDNA data on 16 further populations (one of which was a general group of Karakalpaks) and Y chromosome data on 20 populations (two of which were the On To'rt Urıw and Qon'ırat Karakalpaks sampled in 2001 and 2002). The sample size for each population ranged from 16 to 65 individuals. Both Karakalpak arıs were classified as pastoral, along with the Qazaqs, Kyrgyz, and Turkmen. Uzbeks were classified as farmers, along with the Tajiks, Uighurs, Kurds, and Dungans.

Results of the mtDNA Analysis

The results of the mtDNA analysis are given in Table 1, copied from the paper.

Table 1.
Sample Descriptions and Estimators of Genetic Diversity from the mtDNA Sequence |Population ||n ||Location ||Long ||Lat ||H ||π ||D ||pD ||Ps |Karakalpaks ||20 ||Uzbekistan ||58 ||43 ||0.99 ||5.29 ||-1.95 ||0.01 ||0.90 ||1.05 | |Karakalpaks (On To'rt Urıw) ||53 ||Uzbekistan/Turkmenistan border ||60 ||42 ||0.99 ||5.98 ||-1.92 ||0.01 ||0.70 ||1.20 | |Karakalpaks (Qon'ırat) ||55 ||Karakalpakstan ||59 ||43 ||0.99 ||5.37 ||-2.01 ||0.01 ||0.82 ||1.15 | |Qazaqs ||50 ||Karakalpakstan ||63 ||44 ||0.99 ||5.23 ||-1.97 ||0.01 ||0.88 ||1.11 | |Qazaqs ||55 ||Kazakhstan ||80 ||45 ||0.99 ||5.66 ||-1.87 ||0.01 ||0.69 ||1.25 | |Qazaqs ||20 || ||68 ||42 ||1.00 ||5.17 ||-1.52 ||0.05 ||1.00 ||1.00 | |Kyrgyz ||20 ||Kyrgyzstan ||74 ||41 ||0.97 ||5.29 ||-1.38 ||0.06 ||0.55 ||1.33 | |Kyrgyz (Sary-Tash) ||47 ||South Kyrgyzstan, Pamirs ||73 ||40 ||0.97 ||5.24 ||-1.95 ||0.01 ||0.49 ||1.52 | |Kyrgyz (Talas) ||48 ||North Kyrgyzstan ||72 ||42 ||0.99 ||5.77 ||-1.65 ||0.02 ||0.77 ||1.14 | |Turkmen ||51 ||Uzbekistan/Turkmenistan border ||59 ||42 ||0.98 ||5.48 ||-1.59 ||0.04 ||0.53 ||1.42 | |Turkmen ||41 ||Turkmenistan ||60 ||39 ||0.99 ||5.20 ||-2.07 ||0.00 ||0.73 ||1.21 | |Turkmen ||20 || ||59 ||40 ||0.98 ||5.28 ||-1.71 ||0.02 ||0.75 ||1.18 | |Dungans ||16 ||Kyrgyzstan ||78 ||41 ||0.94 ||5.27 ||-1.23 ||0.12 ||0.31 ||1.60 | |Kurds ||32 ||Turkmenistan ||59 ||39 ||0.97 ||5.61 ||-1.35 ||0.05 ||0.41 ||1.52 | |Uighurs ||55 ||Kazakhstan ||82 ||47 ||0.99 ||5.11 ||-1.91 ||0.01 ||0.62 ||1.28 | |Uighurs ||16 ||Kyrgyzstan ||79 ||42 ||0.98 ||4.67 ||-1.06 ||0.15 ||0.63 ||1.23 | |Uzbeks (North) ||40 ||Karakalpakstan ||60 ||43 ||0.99 ||5.49 ||-2.03 ||0.00 ||0.68 ||1.21 | |Uzbeks (South) ||42 ||Surkhandarya, Uzbekistan ||67 ||38 ||0.99 ||5.07 ||-1.96 ||0.01 ||0.81 ||1.14 | |Uzbeks (South) ||20 ||Uzbekistan ||66 ||40 ||0.99 ||5.33 ||-1.82 ||0.02 ||0.90 ||1.05 | |Uzbeks (Khorezm) ||20 ||Khorezm, Uzbekistan ||61 ||42 ||0.98 ||5.32 ||-1.62 ||0.04 ||0.70 ||1.18 | |Tajiks (Yagnobi) ||20 || ||71 ||39 ||0.99 ||5.98 ||-1.76 ||0.02 ||0.90 ||1.05 | Key: the pastoral populations are in the grey area; the farmer populations are in the white area. The table includes the following parameters: - sample size, n, the number of individuals sampled in each population. Individuals had to be unrelated to any other member of the same sample for at least two generations. - the geographical longitude and latitude of the population sampled. - heterozygosity, H, the proportion of different alleles occupying the same position in each mtDNA sequence. It measures the frequency of heterozygotes for a particular locus in the genetic sequence and is one of several statistics indicating the level of genetic variation or polymorphism within a population. When H=0, all alleles are the same and when H=1, all alleles are different. - the mean number of pairwise differences, π, measures the average number of nucleotide differences between all pairs of HVS-1 sequences. This is another statistic indicating the level of genetic variation within a population, in this case measuring the level of mismatch - Tajima’s D, D, measures the frequency distribution of alleles in a nucleotide sequence and is based on the difference between two estimations of the population mutation rate. It is often used to distinguish between a DNA sequence that has evolved randomly (D=0) and one that has experienced directional selection favouring a single allele. It is consequently used as a test for natural selection. 
However it is also influenced by population history, and negative values of D can indicate high rates of population growth.
- the probability that D is significantly different from zero, pD.
- the proportion of singletons, Ps, measures the relative number of unique polymorphisms in the sample. The higher the proportion of singletons, the more the population has been affected by inward migration.
- the mean number of individuals carrying the same mtDNA sequence, C, is an inverse measure of diversity. The more individuals with the same sequence, the less diversity within the population and the higher the proportion of individuals who are closely related.
The table shows surprisingly little differentiation between pastoral and farmer populations. Both show high levels of within-population genetic diversity (for both groups, median H=0.99 and π is around 5.3). Further calculations of the genetic distance between populations, Fst (not presented in the table but given graphically in the online reference below), showed a correspondingly low level of genetic differentiation among pastoral populations as well as among farmer populations. Both groups of populations also showed a significantly negative Tajima's D, which the authors attribute to a high rate of demographic growth in neutrally evolving populations. Supplementary data made available online showed a weak correlation between genetic distance, Fst, and geographic distance for both pastoral and farmer populations.

Results of the Y chromosome Analysis

The results of the Y chromosome analysis are given in Table 2, also copied from the paper:

Table 2. Sample Descriptions and Estimators of Genetic Diversity from the Y chromosome STRs
|Population ||n ||Location ||Long ||Lat ||H ||π ||r ||Ps ||C |
|Karakalpaks (On To'rt Urıw) ||54 ||Uzbekistan/Turkmenistan border ||60 ||42 ||0.86 ||3.40 ||1.002 ||0.24 ||2.84 |
|Karakalpaks (Qon'ırat) ||54 ||Karakalpakstan ||59 ||43 ||0.91 ||3.17 ||1.003 ||0.28 ||2.35 |
|Qazaqs ||50 ||Karakalpakstan ||63 ||44 ||0.85 ||2.36 ||1.004 ||0.16 ||2.78 |
|Qazaqs ||38 ||Almaty, KatonKaragay, Karatutuk, Rachmanovsky Kluchi, Kazakhstan ||68 ||42 ||0.78 ||2.86 ||1.004 ||0.26 ||2.71 |
|Qazaqs ||49 ||South-east Kazakhstan ||77 ||40 ||0.69 ||1.56 ||1.012 ||0.22 ||3.06 |
|Kyrgyz ||41 ||Central Kyrgyzstan (Mixed) ||74 ||41 ||0.88 ||2.47 ||1.004 ||0.41 ||1.86 |
|Kyrgyz (Sary-Tash) ||43 ||South Kyrgyzstan, Pamirs ||73 ||40 ||0.45 ||1.30 ||1.003 ||0.12 ||4.78 |
|Kyrgyz (Talas) ||41 ||North Kyrgyzstan ||72 ||42 ||0.94 ||3.21 ||1.002 ||0.39 ||1.78 |
|Mongolians ||65 ||Ulaanbaatar, Mongolia ||90 ||49 ||0.96 ||3.37 ||1.009 ||0.38 ||1.81 |
|Turkmen ||51 ||Uzbekistan/Turkmenistan border ||59 ||42 ||0.67 ||1.84 ||1.006 ||0.27 ||3.00 |
|Turkmen ||21 ||Ashgabat, Turkmenistan ||59 ||40 ||0.89 ||3.34 ||1.006 ||0.48 ||1.62 |
|Dungans ||22 ||Alexandrovka and Osh, Kyrgyzstan ||78 ||41 ||0.99 ||4.13 ||1.005 ||0.82 ||1.10 |
|Kurds ||20 ||Bagyr, Turkmenistan ||59 ||39 ||0.99 ||3.59 ||1.009 ||0.80 ||1.11 |
|Uighurs ||33 ||Almaty and Lavar, Kazakhstan ||79 ||42 ||0.99 ||3.72 ||1.007 ||0.67 ||1.22 |
|Uighurs ||39 ||South East Kazakhstan ||79 ||43 ||0.99 ||3.79 ||1.008 ||0.77 ||1.15 |
|Uzbeks (North) ||40 ||Karakalpakstan ||60 ||43 ||0.96 ||3.42 ||1.005 ||0.48 ||1.54 |
|Uzbeks (South) ||28 ||Kashkadarya, Uzbekistan ||66 ||40 ||1.00 ||3.53 ||1.008 ||0.93 ||1.04 |
|Tajiks (Yagnobi) ||22 ||Penjikent, Tajikistan ||71 ||39 ||0.87 ||2.69 ||1.012 ||0.45 ||1.69 |
Key: the pastoral populations are in the grey area; the farmer populations are in the white area.
This table also includes the sample size, n, and the longitude and latitude of the population sampled, as well as the heterozygosity, H, the mean number of pairwise differences, π, the proportion of singletons, Ps, and the mean number of individuals carrying the same Y STR haplotype, C. In addition it includes a statistical computation of the demographic growth rate, r. In contrast to the results obtained from the mtDNA analysis, both the heterozygosity and the mean pairwise differences computed from the Y chromosome STRs were significantly lower in the pastoral populations than in the farmer populations. Thus Y chromosome diversity has been lost in the pastoral populations. Conversely, calculations of the genetic distance, Rst, between each of the two groups of populations showed that pastoral populations were more highly differentiated than farmer populations. The supplemental data given online demonstrates that this is not a result of geographic distance, there being no perceived correlation between genetic and geographic distance in either population group. Finally the rate of demographic growth was found to be lower in pastoral than in farmer populations. At first sight the results are counter-intuitive. One would expect that the diversity of mtDNA in pastoral societies would be higher than in farming societies, because the men in those societies are marrying brides who contribute mtDNA from clans other than their own. Similarly one would expect no great difference in Y chromosome diversity between pastoralists and farmers because both societies are patrilineal. Leaving aside the matter of immigration, the males who contribute the Y chromosome are always selected from the local sampled population. To understand the results, Chaix et al investigated the distribution of genetic diversity within individual populations using a statistical technique called multi-dimensional scaling analysis, or MDS. This attempts to sort or resolve a sample into its different component parts, illustrating the results in two dimensions. The example chosen in the paper focuses on the Karakalpak On To'rt Urıw arıs. The MDS analysis of the Y chromosome data resolves the sample of 54 individuals into clusters of men who carry exactly the same STR haplotype:

Multidimensional Scaling Analysis based on the Matrix of Distance between Y STR Haplotypes in a Specific Pastoral Population: the Karakalpak On To'rt Urıw

Thus the sample contains 13 individuals from the O'mir clan of the Keneges tribe with the same haplotype (shown by the large cross), 10 individuals of the Qarasıyraq clan of the Man'g'ıt tribe with the same haplotype (large diamond), and 10 individuals from the No'kis clan of the Keneges tribe with the same haplotype (large triangle). Other members of the same clans have different haplotypes, as shown on the chart. Those close to the so-called "identity core" group may have arisen by mutation. Those further afield might represent immigrants or adoptions. No such clustering is observed following the MDS analysis of the mtDNA data for the same On To'rt Urıw arıs:

Multidimensional Scaling Analysis based on the Number of Differences between the Mitochondrial Sequences in the Same Pastoral Population: the Karakalpak On To'rt Urıw

Every individual in the sample, including those from the same clan, has a different HVS-1 sequence.
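The same clustering can be read directly off the haplotype list without any scaling analysis, simply by counting how many men share each haplotype. The sketch below computes the estimators used in the tables above - heterozygosity H, the proportion of singletons Ps, and the mean number of carriers per haplotype C - for an invented clan sample containing one "identity core"; none of the haplotypes or figures are taken from the study.

```python
from collections import Counter

# Invented Y-STR haplotypes for one clan: an "identity core" of ten identical
# chromosomes plus a scatter of mutated or immigrant types.
haplotypes = ["14-12-23-10-11-13"] * 10 + ["14-13-23-10-11-13",
                                           "15-12-23-10-11-13",
                                           "14-12-24-10-11-13",
                                           "13-12-22-11-10-12"]

n = len(haplotypes)
counts = Counter(haplotypes)

h = n / (n - 1) * (1 - sum((c / n) ** 2 for c in counts.values()))   # heterozygosity H
ps = sum(1 for c in counts.values() if c == 1) / n                    # proportion of singletons Ps
c_mean = n / len(counts)                                              # mean carriers per haplotype C

cores = [hap for hap, count in counts.items() if count > 1]
print("identity cores:", cores)
print(f"H = {h:.2f}, Ps = {ps:.2f}, C = {c_mean:.2f}")
```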
Similar MDS analyses of the different farmer populations apparently showed very few "identity cores" in the Y chromosome data and a total absence of clustering in the mtDNA data, just as in the case of the On To'rt Urıw. The overall conclusion was that the existence of "identity cores" was specific to the Y chromosome data and was mainly restricted to the pastoral populations. This is reflected in the tables above, where we can see that the mean number of individuals carrying the same mtDNA sequence ranges from about 1 to 1½ and shows no difference between pastoral and farming populations. On the other hand the mean number of individuals carrying the same STR haplotype is low for farming populations but ranges from 1½ up to almost 5 for the pastoralists. Pastoral populations also have a lower number of Y chromosome singletons. Chaix et al point to three reinforcing factors to explain the existence of "identity cores" in pastoral as opposed to farming populations: Together these factors reduce overall Y chromosome diversity. - pastoral lineages frequently split and divide with closely related men remaining in the same sub-group, thereby reducing Y chromosome diversity, - small populations segmented into lineages can experience strong genetic drift, creating high frequencies of specific haplotypes, and - random demographic uncertainty in small lineage groups can lead to the extinction of some haplotypes, also reducing diversity. To explain the similar levels of mtDNA diversity in pastoral and farmer populations, Chaix et al point to the complex rules connected with exogamy. Qazaq men for example must marry a bride who has not had an ancestor belonging to the husband's own lineage for at least 7 generations, while Karakalpak men must marry a bride from another clan, although she can belong to the same tribe. Each pastoral clan, therefore, is gaining brides (and mtDNA) from external clans but is losing daughters (and mtDNA) to external clans. Such continuous and intense migration reduces mtDNA genetic drift within the clan. This in turn lowers diversity to a level similar to that observed in farmer populations, which is in any event already high. The process of two-way female migration effectively isolates the mtDNA structure of pastoral societies from their social structure. One aspect overlooked by the study is that, until recent times, Karakalpak clans were geographically isolated in villages located in specific parts of the Aral delta and therefore tended to always intermarry with one of their adjacent neighbouring clans. In effect, the two neighbouring clans behaved like a single population, with females moving between clans in every generation. How such social behaviour affected genetic structure was not investigated. The Uzbeks were traditionally nomadic pastoralists and progressively became settled agricultural communities from the 16th century onwards. The survey provided an opportunity to investigate the effect of this transition in lifestyle on the genetic structure of the Uzbek Y chromosome. Table 2 above shows that the genetic diversity found among Uzbeks, as measured by heterozygosity and the mean number of pairwise differences, was similar to that of the other farmer populations, as was the proportion of singleton haplotypes. Equally the mean number of individuals carrying the same Y STR haplotype was low (1 to 1½), indicating an absence of the haplotype clustering (or "identity cores") observed in pastoral populations. 
The pastoral "genetic signature" must have been rapidly eroded, especially in the case of the northern Uzbeks from Karakalpakstan, who only settled from the 17th century onwards. Two reasons are proposed for this rapid transformation. Firstly the early collapse and integration of the Uzbek descent groups following their initial settlement and secondly their mixing with traditional Khorezmian farming populations, which led to the creation of genetic admixtures of the two groups. Of course the Karakalpak On To'rt Urıw have been settled farmers for just as long as many Khorezmian Uzbeks and cannot in any way be strictly described as pastoralists. Indeed the majority of Karakalpak Qon'ırats have also been settled for much of the 20th century. However both have strictly maintained their traditional pastoralist clan structure and associated system of exogamous marriage. So although their lifestyles have changed radically , their social behaviour to date has not. Discussion and Conclusions The Karakalpaks and their Uzbek and Qazaq neighbours have no comprehensive recorded history, just occasional historical reports coupled with oral legends which may or may not relate to certain historical events in their past. We therefore have no record of where or when the Karakalpak confederation emerged and for what political or other reasons. In the absence of solid archaeological or historical evidence, many theories have been advanced to explain the origin of the Karakalpaks. Their official history, as taught in Karakalpak colleges and schools today, claims that the Karakalpaks are the descendants of the original endemic nomadic population of the Khorezm oasis, most of whom were forced to leave as a result of the Mongol invasion in 1221 and the subsequent dessication of the Aral delta following the devastation of Khorezm by Timur in the late 14th century, only returning in significant numbers during the 18th century. We fundamentally disagree with this simplistic picture, which uncritically endures with high- ranking support because it purports to establish an ancient Karakalpak origin and justifies tenure of the current homeland. While population genetics cannot unravel the full tribal history of the Karakalpaks per se, it can give us important clues to their formation and can eliminate some of the less likely theories that have been proposed. The two arıs of the Karakalpaks, the Qon'ırat and the On To'rt Urıw, are very similar to each other genetically, especially in the male line. Both are equally close to the Khorezmian Uzbeks, their southern neighbours. Indeed the genetic distances between the different populations of Uzbeks scattered across Uzbekistan is no greater than the distance between many of them and the Karakalpaks. This suggests that Karakalpaks and Uzbeks have very similar origins. If we want to find out about the formation of the Karakalpaks we should look towards the emergence of the Uzbek (Shaybani) Horde and its eastwards migration under the leadership of Abu'l Khayr, who united much of the Uzbek confederation between 1428 and 1468. Like the Uzbeks, the Karakalpaks are extremely diverse genetically. One only has to spend time with them to realize that some look European, some look Caucasian, and some look typically Mongolian. Their DNA turns out to be an admixture, roughly balanced between eastern and western populations. 
Two of their main genetic markers have far-eastern origins, M9 being strongly linked to Chinese and other Far Eastern peoples and M130 being linked to the Mongolians and Qazaqs. On the other hand, M17 is strong in Russia, the Ukraine, and Eastern Europe, while M89 is strong in the Middle East, the Caucasus, and Russia. M173 is strong in Western Europe and M45 is believed to have originated in Central Asia, showing that some of their ancestry goes back to the earliest inhabitants of that region. In fact the main difference between the Karakalpaks and the Uzbeks is a slight difference in the mix of the same markers. Karakalpaks have a somewhat greater bias towards the eastern markers. One possible cause could be the inter-marriage between Karakalpaks and Qazaqs over the past 400 years, a theory that gains some support from the close similarities in the mitochondrial DNA of the neighbouring female Karakalpak Qon'ırat and Qazaqs of the Aral delta. After the Uzbeks, Karakalpaks are next closest to the Uighurs, the Crimean Tatars, and the Kazan Tatars, at least in the male line. However in the female line the Karakalpaks are quite different from the Uighurs and Crimean Tatars (and possibly from the Kazan Tatars as well). There is clearly a genetic link with the Tatars of the lower Volga through the male line. Of course the Volga region has been closely linked through communications and trade with Khorezm from the earliest days. The Karakalpaks are genetically distant from the Qazaqs and the Turkmen, and even more so from the Kyrgyz and the Tajiks. We know that the Karakalpaks were geographically, politically, and culturally very close to the Qazaqs of the Lesser Horde prior to their migration into the Aral delta and were even once ruled by Qazaq tribal leaders. From their history, therefore, one might have speculated that the Karakalpaks may have been no more than another tribal group within the overall Qazaq confederation. This is clearly not so. The Qazaqs have a quite different genetic history, being far more homogenous and genetically closer to the Mongolians of East Asia. However as we have seen, the proximity of the Qazaqs and Karakalpaks undoubtedly led to intermarriage and therefore some level of genetic exchange. Karakalpak Y chromosome polymorphisms show different patterns from mtDNA polymorphisms in a similar manner to that identified in certain other Central Asian populations. This seems to be associated with the Turkic traditions of exogamy and so-called patrilocal marriage. Marriage is generally not permissible between couples belonging to the same clan, so men must marry women from other clans, or tribes, or in a few cases even different ethnic groups. After the marriage the groom stays in his home village and his bride moves from her village to his. The result is that the male non-recombining part of the Y chromosome becomes localized as a result of its geographical isolation, whereas the female mtDNA benefits from genetic mixing as a result of the albeit short range migration of young brides from different clans One of the most important conclusions is the finding that clans within the same tribe show no sign of genetic kinship, whether the tribe concerned is Karakalpak, Uzbek, Qazaq, or Turkmen. Indeed among the most settled ethnic groups, the Uzbeks and Karakalpak On To'rt Urıw, there is very little kinship even at clan level. 
It seems that settled agricultural communities soon lose their strong tribal identity and become more open-minded about intermarriage with different neighbouring ethnic groups. Indeed the same populations place less importance on their genealogy and no longer maintain any identity according to lineage. It has generally been assumed that most Turkic tribal groups like the Uzbeks were formed as confederations of separate tribes, and this is confirmed by the recent genetic study of ethnic groups from Karakalpakstan. We now see that this extends to the tribes themselves, with an absence of any genetic link between clans belonging to the same tribe. Clearly they too are merely associations of disparate groups, formed for some historical reason other than descent. Possible causes for such an association of clans could be geographic or economic, such as common land use or shared water rights; military, such as a common defence pact or the construction of a shared qala; or perhaps political, such as common allegiance to a strong tribal leader. The history of Central Asia revolves around migrations and conflicts and the formation, dissolution, and reformation of tribal confederations, from the Saka Massagetae and the Sarmatians, to the Oghuz and Pechenegs, the Qimek, Qipchaq, and Karluk, the Mongols and Tatars, the White and Golden Hordes, the Shaybanid and Noghay Hordes, and finally the Uzbek, Qazaq, and Karakalpak confederations. Like making cocktails from cocktails, the gene pool of Central Asia was constantly being scrambled, more so on the female line as a result of exogamy and patrilocal marriage. The same tribal and clan names occur over and over again throughout the different ethnic Qipchaq-speaking populations of Central Asia, but in different combinations and associations. Many of the names predate the formation of the confederations to which they now belong, relating to earlier Turkic and Mongol tribal factions. Clearly tribal structures are fluid over time, with some groups withering or being absorbed by others, while new groups emerge or are added. When Abu'l Khayr Sultan became khan of the Uzbeks in 1428-29, their confederation consisted of at least 24 tribes, many with smaller subdivisions. The names of 6 of those tribes occur among the modern Karakalpaks. A 16th century list, based on an earlier document, gives the names of 92 nomadic Uzbek tribes, at least 20 of which were shared by the later breakaway Qazaqs. 13 of the 92 names also occur among the modern Karakalpaks. Shortly after his enthronement as the Khan of Khorezm in 1644-45, Abu'l Ghazi Khan reorganized the tribal structure of the local Uzbeks into four tüpe:

| Tüpe || Main Tribes || Secondary Tribes |
| On Tort Urugh || On To'rt Urıw || Qan'glı |
| || Durman, Yüz, Ming || Shaykhs, Burlaqs, Arabs |
| || || Uyg'ır |

8 out of the 11 tribal names associated with the first three tüpe are also found within the Karakalpak tribal structure. Clearly there is greater overlap between the Karakalpak tribes and the local Khorezmian Uzbek tribes than between the Karakalpak tribes and the Uzbek tribes in general. The question is whether these similarities pre-dated the Karakalpak migration into the Aral delta or whether they are the result of later Uzbek influences. We know that the Qon'ırat were a powerful tribe in Khorezm for Uzbeks and Karakalpaks alike. They were mentioned as one of the Karakalpak "clans" on the Kuvan Darya [Quwan Darya] by Gladyshev in 1741 along with the Kitay, Qipchaq, Kiyat, Kinyagaz-Mangot (Keneges-Man'g'ıt), Djabin, Miton, and Usyun.
Munis recorded that Karakalpak Qon'ırat, Keneges, and Qıtay troops supported Muhammad Amin Inaq against the Turkmen in 1769. Thanks to Sha'rigu'l Payzullaeva we have a comparison of the Qon'ırat tribal structure among the Aral Karakalpaks, the Surkhandarya Karakalpaks, and the Khorezmian Uzbeks, derived from genealogical records:

The different status of the same Qon'ırat tribal groups among the Aral and Surkhandarya Karakalpaks and the Khorezmian Uzbeks
| Tribal group || Aral Karakalpaks || Surkhandarya Karakalpaks || Khorezmian Uzbeks |
| Qostamg'alı || clan || branch of tribe || |
| Qanjıg'alı || tiıre || branch of tribe || tube |
| Shu'llik || division of arıs || clan || |
| Tartıwlı || tiıre || branch of tribe || clan |
| Sıyraq || clan || branch of clan || |
| Qaramoyın || tribe || branch of clan || |

A tube is a branch of a tribe among the Khorezmian Uzbeks and a tiıre is a branch of a clan among the Aral Karakalpaks. The Karakalpak enclave in Surkhandarya was already established in the first half of the 18th century, some Karakalpaks fleeing to Samarkand and beyond following the devastating Jungar attack of 1723. Indeed it may even be older - the Qon'ırat have a legend that they came to Khorezm from the country of Zhideli Baysun in Surkhandarya. This suggests that some Karakalpaks had originally travelled south with factions from the Shaybani Horde in the early 16th century. The fact that the Karakalpak Qon'ırats remaining in that region have a similar tribal structure to the Khorezmian Uzbeks is powerful evidence that the tribal structure of the Aral Karakalpaks had broadly crystallized prior to their migration into the Aral delta. The Russian ethnographer Tatyana Zhdanko was the first academic to make an in-depth study of Karakalpak tribal structure. She not only uncovered the similarities between the tribal structures of the Uzbek and Karakalpak Qon'ırats in Khorezm but also the closeness of their respective customs and material and spiritual cultures. She concluded that one should not only view the similarity between the Uzbek and Karakalpak Qon'ırats in a historical sense, but should also see the commonality of their present-day ethnic relationships. B. F. Choriyev added that "this kind of similarity should not only be sought amongst the Karakalpak and the Khorezmian Qon'ırats but also amongst the Surkhandarya Qon'ırats. They all have the same ethnic history." Such ethnographic studies provide support for the findings that have emerged from the recent studies of Central Asian genetics. Together they point towards a common origin of the Karakalpak and Uzbek confederations. They suggest that each was formed out of the same melange of tribes and clans inhabiting the Dasht-i Qipchaq following the collapse of the Golden Horde, a vast expanse ranging northwards from the Black Sea coast to western Siberia and then eastwards to the steppes surrounding the lower and middle Syr Darya, encompassing the whole of the Aral region along the way. Of course the study of the genetics of present-day populations gives us the cumulative outcome of hundreds of thousands of years of complex human history and interaction. We now need to establish a timeline, tracking genetic changes in past populations using the human skeletal remains retrieved from Saka, Sarmatian, Turkic, Tatar, and early Uzbek and Karakalpak archaeological burial sites. Such studies might pinpoint the approximate dates when important stages of genetic intermixing occurred. Sha'rigu'l Payzullaeva recalls an interesting encounter at the Regional Studies Museum in No'kis during the month of August 1988.
Thirty-eight elderly men turned up together to visit the Museum. Each wore a different kind of headdress, some with different sorts of taqıya, others with their heads wrapped in a double kerchief. They introduced themselves as Karakalpaks from Jarqorghan rayon in Surkhandarya viloyati, just north of the Afghan border. One of them said "Oh daughter, we are getting old now. We decided to come here to see our homeland before we die." During their visit to the Museum they said that they would travel to Qon'ırat rayon the following day. Sha'rigu'l was curious to know why they specifically wanted to visit Qon'ırat. They explained that it was because most of the men were from the Qon'ırat clan. One of the men introduced himself to Sha'rigu'l: "My name is Mirzayusup Khaliyarov, the name of my clan is Qoldawlı." When he discovered that Sha'rigu'l was also Qoldawlı, his eyes filled with tears and he kissed her on the forehead.

Bowles, G. T., The People of Asia, Weidenfeld and Nicolson, London, 1977.

Comas, D., Calafell, F., Mateu, E., Pérez-Lezaun, A., Bosch, E., Martínez-Arias, R., Clarimon, J., Facchini, F., Fiori, G., Luiselli, D., Pettener, D., and Bertranpetit, J., Trading Genes along the Silk Road: mtDNA Sequences and the Origin of Central Asian Populations, American Journal of Human Genetics, Volume 63, pages 1824 to 1838, 1998.

Cavalli-Sforza, L. L., Menozzi, P., and Piazza, A., The History and Geography of Human Genes, Princeton University Press, 1994.

Chaix, R., Austerlitz, F., Khegay, T., Jacquesson, S., Hammer, M. F., Heyer, E., and Quintana-Murci, L., The Genetic or Mythical Ancestry of Descent Groups: Lessons from the Y Chromosome, American Journal of Human Genetics, Volume 75, pages 1113 to 1116, 2004.

Chaix, R., Quintana-Murci, L., Hegay, T., Hammer, M. F., Mobasher, Z., Austerlitz, F., and Heyer, E., From Social to Genetic Structures in Central Asia, Current Biology, Volume 17, Issue 1, pages 43 to 48, 9 January 2007.

Comas, D., Plaza, S., Spencer Wells, R., Yuldaseva, N., Lao, O., Calafell, F., and Bertranpetit, J., Admixture, migrations, and dispersals in Central Asia: evidence from maternal DNA lineages, European Journal of Human Genetics, pages 1 to 10, 2004.

Heyer, E., Central Asia: A common inquiry in genetics, linguistics and anthropology, Presentation given at the conference entitled "Origin of Man, Language and Languages", Aussois, France, 22-25 September, 2005.

Heyer, E., Private communications to the authors, 14 February and 17 April, 2006.

Krader, L., Peoples of Central Asia, The Uralic and Altaic Series, Volume 26, Indiana University, Bloomington, 1971.

Passarino, G., Semino, O., Magri, C., Al-Zahery, N., Benuzzi, G., Quintana-Murci, L., Andellnovic, S., Bullc-Jakus, F., Liu, A., Arslan, A., and Santachiara-Benerecetti, A., The 49a,f Haplotype 11 is a New Marker of the EU19 Lineage that Traces Migrations from Northern Regions of the Black Sea, Human Immunology, Volume 62, pages 922 to 932, 2001.

Payzullaeva, Sh., Numerous Karakalpaks, many of them! [in Karakalpak], Karakalpakstan Publishing, No'kis, 1995.

Pérez-Lezaun, A., Calafell, F., Comas, D., Mateu, E., Bosch, E., Martínez-Arias, R., Clarimón, J., Fiori, G., Luiselli, D., Facchini, F., Pettener, D., and Bertranpetit, J., Sex-Specific Migration Patterns in Central Asian Populations, Revealed by Analysis of Y-Chromosome Short Tandem Repeats and mtDNA, American Journal of Human Genetics, Volume 65, pages 208 to 219, 1999.

Spencer Wells, R., The Journey of Man, A Genetic Odyssey, Allen Lane, London, 2002.
Spencer Wells, R., et al., The Eurasian Heartland: A continental perspective on Y-chromosome diversity, Proceedings of the National Academy of Sciences, Volume 98, pages 10244 to 10249, USA, 28 August 2001.

Underwood, J. H., Human Variation and Human Micro-Evolution, Prentice-Hall Inc., New Jersey, 1979.

Underhill, P. A., et al., Detection of Numerous Y Chromosome Biallelic Polymorphisms by Denaturing High-Performance Liquid Chromatography, Genome Research, Volume 7, pages 996 to 1005, 1997.

Zerjal, T., Spencer Wells, R., Yuldasheva, N., Ruzibakiev, R., and Tyler-Smith, C., A Genetic Landscape Reshaped by Recent Events: Y Chromosome Insights into Central Asia, American Journal of Human Genetics, Volume 71, pages 466 to 482, 2002.

Visit our sister site www.qaraqalpaq.com, which uses the correct transliteration, Qaraqalpaq, rather than the Russian transliteration, Karakalpak.
http://www.karakalpak.com/genetics.html
13
15
Issue Date: September 28, 2009

Moon's Surface Holds Water

Less than two weeks before a spacecraft is set to slam into the moon’s surface in search of water, a flurry of new reports from other spacecraft offer convincing evidence that the moon’s surface is lightly permeated with either water or its precursor, hydroxyl radicals. The possibility that water exists on the moon improves prospects that living things—including humans arriving on future space flights—might be able to survive there more easily than if the moon were dry.

Although scientists found no evidence of water in lunar rocks brought back to Earth by Apollo astronauts, in the past few decades, the idea that stores of water ice might be cached in permanently shadowed craters at the moon’s poles has gained popularity. Now, using a variety of instruments, international teams have found key spectral evidence that H2O or HO• covers the moon’s surface (Science, DOI: 10.1126/science.1178658, 10.1126/science.1179788, and 10.1126/science.1178105).

“These instruments make it possible to map the lunar hydrogen content on the surface as never before,” said James Green, director of the Planetary Science Division at NASA headquarters, in Washington, D.C., at a press conference announcing the discovery. Team scientists estimate the abundance of water at about 1,000 ppm, which is about a quart of water per ton of soil.

“Perhaps the most valuable result of these new observations is that they prompt a critical reexamination of the notion that the moon is dry,” writes astronomy professor Paul G. Lucey of the University of Hawaii in a perspective accompanying the papers. “It is not.”

The teams include a group led by Brown University planetary science professor Carle M. Pieters. She monitored visible and near-infrared wavelengths through NASA’s Moon Mineralogy Mapper on Chandrayaan-1, India’s first mission to the moon. Another group, led by astronomer Jessica M. Sunshine of the University of Maryland, College Park, confirmed the results from Chandrayaan-1 using spectrometers on board NASA’s Deep Impact spacecraft during that craft’s recent flybys of the moon. And astronomer Roger N. Clark of the U.S. Geological Survey in Denver examined visible and IR data captured by the Saturn-exploring Cassini spacecraft during its lunar flyby in 1999 and also found spectral evidence of adsorbed H2O and HO•.

Coincidentally, on Oct. 9, NASA’s LCROSS spacecraft is slated to twice bombard the moon in search of water. The search focuses on the moon’s permanently dark crater, Cabeus A, located near the south pole, because scientists believe that dark craters may contain relic frozen water from bombarding comets. First, the spacecraft will eject the spent second stage of its launch rocket, which will crash onto the surface, throwing up a large amount of debris. LCROSS and its sister spacecraft, NASA’s Lunar Reconnaissance Orbiter, will look for water in the ejected material. Then LCROSS itself will plunge to the surface, tossing up yet another plume.

The new reports, however, suggest that dark craters are not the only source of lunar water. Sunshine’s team proposes that the solar wind may provide an essential ingredient for surface water: energetic H+. In the team’s scenario, the H+ flux strikes the moon’s surface, releasing oxygen atoms bound to minerals in the soil, forming HO•, which can then easily form H2O. The group posits that as temperatures climb, more water molecules are released.
Similarly, when temperatures decrease, water collects, creating a steady state. Pieters cautioned in a statement that “when we say ‘water on the moon,’ we are not talking about lakes, oceans, or even puddles. Water on the moon means molecules of water and hydroxyl that interact with molecules of rock and dust in the top millimeters of the moon’s surface.”
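As a rough plausibility check, the reported abundance of about 1,000 ppm is said to correspond to roughly a quart of water per ton of soil. The short sketch below redoes that unit conversion; the use of a US short ton and a water density of 1 kg/L are assumptions made here for illustration, not details given in the article.

```python
# Rough sanity check: does ~1,000 ppm water by mass really come to
# about a quart of water per ton of soil?
PPM = 1_000                    # parts per million by mass
TON_KG = 907.2                 # US short ton in kilograms (assumed)
WATER_DENSITY_KG_PER_L = 1.0   # assumed density of liquid water
LITERS_PER_US_QUART = 0.9464

water_kg = TON_KG * PPM / 1e6                 # mass of water in one ton of soil
water_liters = water_kg / WATER_DENSITY_KG_PER_L
water_quarts = water_liters / LITERS_PER_US_QUART

print(f"{water_kg:.2f} kg of water per ton of soil")     # ~0.91 kg
print(f"{water_quarts:.2f} US quarts per ton of soil")   # ~0.96 quart
```

The conversion lands at just under one quart per ton, consistent with the figure quoted at the press conference.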
http://cen.acs.org/articles/87/i39/Moons-Surface-Holds-Water.html
13
17
Why not build a small resonance-based linear particle accelerator?

The study of particles, the building blocks of matter, revolves around the ability to study their composition, mainly by accelerating them to high velocities and then colliding them with something. While physicists have had this ability for the better part of a century, constructing this sort of device is normally only in the realm of large institutions with equally large research budgets. However, the concept is simple in principle, and an accelerator can thus be designed that can be built without extensive resources.

A linear particle accelerator can be divided into two major subsystems, mechanical and electrical. Mechanically, an accelerator consists of an ion source, a beam line, a target, and a pump system. Two of these components, the beam line and the pump system, require special attention. For an accelerator to function properly, a high vacuum must be maintained. The simplest method to achieve such a vacuum is a mechanical pump combined with a cold trap to condense any pump oils or water vapor. At the lower frequencies of a small accelerator, the beam line must be made of a nonconductive material. In order to maintain vacuum, materials that have low outgassing must be used, leaving only glass. The electrical system consists of a high voltage, high frequency supply and drift tubes. A microcontroller can be used to generate an adjustable waveform, controlling particle acceleration. With proper planning, an accelerator can thus be constructed.

Materials and Schematics:

The earliest particle accelerators were one-stage linear accelerators, driven by a static high voltage source; such an accelerator has its limits, however, as the voltage source will eventually arc over, setting an upper limit on its acceleration potential. This obstacle was overcome by Rolf Widerøe with the invention of the resonance accelerator. In such an accelerator, drift tubes are used, alternately connected to ground and a high frequency, high voltage AC power source. With this concept, a particle can be accelerated multiple times, reaching far higher energies than with an electrostatic accelerator. While a particle is within a drift tube, it is electrically shielded and is accelerated in the gaps between tubes. As a particle approaches a drift tube connected to the AC power source, it is accelerated toward the tube; the field then changes polarity while the particle is contained within the tube, and the particle is accelerated away from the tube once it exits. Thus, the particle is accelerated as if the acceleration potential were twice what it actually is.

In order for particles to continue moving once they are accelerated, a vacuum must be maintained within the beam line. In addition, the electric fields of the drift tubes must reach the particles being accelerated; this can be achieved either by using a non-conductive material for the beam line or by placing the drift tubes within the beam line itself. The simpler method of using a non-conductive material was used, avoiding issues of electrical insulation and vacuum leakage. Plastic, however, cannot be used as it outgasses in a vacuum, eliminating the possibility of using cheap, readily available PVC pipe. Thus, a borosilicate glass pipe, designed for use with steam boilers, was used. This was then connected to a mechanical vacuum pump using copper pipe.
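The resonance principle described above also fixes the geometry: each drift tube must shield the ion for half an RF period, which gives the classic Widerøe length rule L_n = v_n / (2f). The sketch below applies that rule under simplifying assumptions; the ion species, gap voltage, and drive frequency are illustrative values rather than specifications of this particular build, and the energy gain per gap is treated as simply q·V.

```python
import math

# Widerøe drift-tube lengths: tube n must hide the ion for half an RF period,
# so its length is L_n = v_n / (2 f). Non-relativistic, single-charge ions assumed.
ELEMENTARY_CHARGE = 1.602e-19      # C
ION_MASS = 28 * 1.6605e-27         # kg, a singly charged N2+ ion (assumed)
GAP_VOLTAGE = 30e3                 # V gained per accelerating gap (assumed)
FREQUENCY = 125e3                  # Hz drive frequency (assumed)

def drift_tube_lengths(n_tubes):
    """Return the lengths of successive drift tubes in metres."""
    lengths = []
    for n in range(1, n_tubes + 1):
        energy = n * ELEMENTARY_CHARGE * GAP_VOLTAGE   # kinetic energy after n gaps, J
        velocity = math.sqrt(2 * energy / ION_MASS)    # m/s
        lengths.append(velocity / (2 * FREQUENCY))     # m
    return lengths

for i, length in enumerate(drift_tube_lengths(4), start=1):
    print(f"tube {i}: {length:.2f} m")
```

Even this rough estimate makes the design trade-off clear: at low drive frequencies the tubes come out long, so the drive frequency, gap voltage, and tube spacing have to be chosen together when laying out a small machine.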
A cold trap consisting of a U-shaped pipe and an isopropanol/dry ice solution was placed between the pump and the accelerator to prevent back streaming of pump oil into the beam line.

To power and control the accelerator, electronics were designed and assembled to produce a waveform with a controllable frequency. In order to create both positive and negative voltages, a microcontroller was connected to an RS232 level converter to change TTL voltages to +12V and -12V. These signals were connected to transistors to switch the larger current required by the accelerator. An ignition transformer was then used to convert the low voltage waveform into a 30kV waveform for powering the accelerator. This control board is powered by a standard ATX computer power supply, as it provides clean, regulated power at both the 5V required for the majority of the electronic components and the +12V and -12V required by the transformer, all in a cheap, compact package. Ions for the accelerator were created from the atmosphere, mostly nitrogen, through the use of an off-the-shelf 7.5kV DC power source connected to points inserted into the ionization chamber.

As all the particles accelerated are ions, the effectiveness of the accelerator can easily be determined by counting the number of said ions that reach the end of the beam line. In its simplest form, this can be determined with a Faraday cup. A copper target was placed at the end of the beam line and was connected to ground across a 1MΩ resistor. An analog to digital converter was then used to find the voltage drop across this resistor and thus the current via Ohm's Law. Measurements were recorded via a microSD card.

Data collection during the experiment consisted of analog to digital converter measurements with respect to a 1.1V reference potential (x is the analog to digital converter measurement). These measurements were taken across a 1MΩ resistor, allowing one to calculate current by way of Ohm's Law. By dividing by the charge of an ion, the number of ions accelerated can be calculated. Combining these points, a formula can be created to convert the sensor readings into the number of ions accelerated.

A review of the data shows that the most effective frequency was 125kHz. In addition, this was the frequency at which the data was most consistent, with the fewest outliers. As frequency decreased, the number of ion hits recorded also decreased. Based on the data collected, 125kHz is the optimal frequency for operating the accelerator with ions generated from the air, mostly nitrogen. The data collected is incomplete, however, as the optimal frequency was found to be at the edge of the data set. A frequency generator capable of creating higher frequency waveforms could be used to verify the data collected by expanding the upper limit of the data set, allowing a peak to be determined.
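The write-up does not reproduce the conversion formula itself, so the following sketch reconstructs the calculation it describes: read the ADC, convert to a voltage against the 1.1V reference, apply Ohm's Law across the 1MΩ resistor, and divide by the ion charge. The 10-bit converter resolution and the assumption of singly charged ions are guesses made here for illustration.

```python
ADC_MAX = 1023            # assumed 10-bit analog to digital converter
V_REF = 1.1               # volts, reference potential quoted in the write-up
R_LOAD = 1e6              # ohms, resistor between the copper target and ground
ION_CHARGE = 1.602e-19    # coulombs, singly charged ion assumed

def ions_per_second(adc_reading):
    """Convert a raw ADC reading into an approximate ion arrival rate."""
    voltage = adc_reading * V_REF / ADC_MAX   # voltage drop across the resistor
    current = voltage / R_LOAD                # Ohm's law: I = V / R
    return current / ION_CHARGE               # ions striking the target per second

print(f"{ions_per_second(512):.3e} ions/s")   # mid-scale reading, roughly 3.4e12 ions/s
```

Summing such rates over the logging interval recorded on the microSD card gives the total ion count used to compare drive frequencies.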
http://www.mpetroff.net/projects/linear-particle-accelerator/
13
23
On February 17, 2006, the village of Guinsaugon on Leyte Island in the Philippines disappeared. After several days of unusually heavy rain, a massive landslide swallowed more than 350 houses and an elementary school, burying more than 1,100 people. Residents of the village, situated at the foot of a mountain, had no warning.

Landslides occur everywhere in the world, but the danger of rainfall-induced slides tends to be much greater in tropical mountainous regions like those in the Philippines, Central and South America, and southeastern Asia. Steep terrain and heavy tropical rains put dense populations at risk. Monitoring landslide-producing conditions typically requires extensive networks of ground-based rain gauges and weather instruments. But many of the developing countries in high-risk areas lack the resources to maintain such systems; heavy rains and flooding often wash away ground-based instruments.

Robert Adler, a senior scientist in the Laboratory for Atmospheres at Goddard Space Flight Center, and Yang Hong, a research scientist at Goddard Earth Sciences Technology Center, are confronting the problem by developing a satellite-based system for predicting landslides. The system makes data available on the Internet just a few hours after the satellite makes its observations. Adler said, “If we can complete this ‘real-time’ product and make it available on the Web, then almost any government or organization in the world can access this information.”

Mapping landslide susceptibility

Rainfall is the key factor in Adler and Hong’s product, but first, they needed to piece together a global landslide susceptibility map, which would help reveal terrain and ground properties. Hong said, “Rainfall can be a trigger for landslides, but ground conditions are also very important.” Adler and Hong mapped topography, as well as the direction that rivers and runoff would flow across the terrain. Satellite data helped the researchers determine land cover types, including forests, grasslands, wetlands, deserts, and urban areas. They also included information on soil composition and depth.

The map revealed no surprises—the researchers already had a general idea which regions of the world were susceptible to landslides. “The most important factors are the slope and soil type. Steep slopes and coarse soil types are more susceptible to landslides,” Hong said. “And, in terms of land cover, bare soil contributes more to landslides.” The landslide susceptibility map provides a background against which the scientists could predict the effect of rainfall.

Remotely sensing rainfall

Adler and Hong’s primary source of rainfall data is the Tropical Rainfall Measuring Mission (TRMM), a joint NASA-Japanese Space Agency mission that launched in 1997. Adler said, “There are two main things that TRMM provides for this multi-satellite analysis. One, it’s the calibrator for the information from the other satellites. Two, it’s always in the tropics, and gives us very good coverage in a critical area.” TRMM orbits the Earth from west to east along the equator, weaving between 35 degrees north and 35 degrees south. Adler and Hong collect data from other satellites that are in polar orbits, traveling north to south around the Earth. “Because the TRMM orbit crosses over the paths of each polar-orbiting satellite, we’re able to collect subsets of data from both satellites at the same time,” Adler said.
“We use TRMM data, which we think is making the best estimate, to calibrate, or adjust the rain estimates from the other satellites.”

To test whether their rainfall product accurately detected landslide-triggering rain events, Adler and Hong identified 74 rainfall-induced landslides that occurred between the TRMM launch and 2006, including the Guinsaugon slide. Over the years, scientists have analyzed case studies of landslides to determine the intensity and duration of rainfall—usually measured at ground-based rain gauges—beyond which landslides become likely. Adler and Hong plugged their satellite-based rainfall data into equations that predicted when the rainfall at each landslide location would have reached the threshold. Their results closely matched previous threshold estimates, confirming that satellite observations could detect the extremely intense rainfall needed to trigger the slides.

Adler and Hong’s satellite-based landslide-prediction products are available online and contain data from 2002 through the present. They are updated in “real time,” allowing anyone on the Web to determine if an area is receiving particularly intense rainfall or if it has reached a critical level of accumulation. People can download data or zoom in on geographic maps that display three-hour rainfall rates or seven-day accumulations. In addition, Hong is making hourly rainfall data available through Google Earth, a popular Web-based browser for viewing a collection of satellite and aerial views of the Earth overlaid with geographical and scientific information.

For now, the researchers consider the product to be in an “experimental” phase. They are still evaluating its potential and its limitations. Based on feedback from the system’s first users, they plan to refine the system to make it even more practical to local governments and disaster-response organizations on the ground. In remote, landslide-prone areas like Leyte Island, it can be difficult for emergency planners to assess landslide hazards in time to prevent disasters. In these areas, a real-time, satellite-based monitoring system may ultimately save lives. “When national and international organizations have to plan disaster mitigation or relief work,” Adler said, “this system can give them quantitative information about where exactly the hazard is and which areas are affected. And that’s why I think that a lot of people are looking at this information. You don’t get it anywhere else.”

About the scientists

Robert Adler is a senior scientist in the Laboratory for Atmospheres at Goddard Space Flight Center and a project scientist for the Tropical Rainfall Measuring Mission (TRMM). Adler’s research focuses on analyzing precipitation observations from space on global and regional scales using TRMM and other satellite data. Adler holds a PhD in meteorology from Colorado State University.

Yang Hong is a research scientist at the NASA Goddard Earth Science and Technology (GEST) Center. His research interests include surface hydrology, remote sensing of precipitation, flood forecasting and landslide analysis, and sustainable development. Hong received his PhD in hydrology and water resources from the University of Arizona, Tucson.

This research was funded by NASA.
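The article does not give the specific threshold equations the team used, but the general form of such an intensity-duration check is simple to sketch. The example below uses the widely cited global threshold of Caine (1980), I = 14.82 · D^-0.39, purely as a stand-in; the actual equations in Adler and Hong's system may differ.

```python
def exceeds_landslide_threshold(intensity_mm_per_hr, duration_hr):
    """Check a rainfall event against an intensity-duration threshold.

    Uses Caine's (1980) often-cited global threshold, I = 14.82 * D**-0.39
    (intensity in mm/h, duration in hours), as an illustrative stand-in for
    the equations actually used in the satellite-based system.
    """
    threshold = 14.82 * duration_hr ** -0.39
    return intensity_mm_per_hr >= threshold

# Example: 20 mm/h sustained for 12 hours comfortably exceeds the threshold (~5.6 mm/h).
print(exceeds_landslide_threshold(20, 12))   # True
```

In the real product this kind of check is combined with the susceptibility map, so that only rainfall falling on steep, landslide-prone ground raises an alert.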
http://earthobservatory.nasa.gov/Features/LandslideWarning/
13
39
How the Shape of a Histogram Reflects the Statistical Mean and Median

You can connect the shape of a histogram with the mean and median of the statistical data that you use to create it. Conversely, the relationship between the mean and median can help you predict the shape of the histogram.

The preceding graph is a histogram showing the ages of winners of the Best Actress Academy Award; you can see it is skewed right. The following table includes calculations of some basic (that is, descriptive) statistics from the data set. Examining these numbers, you find the median age is 33.00 years and the mean age is 35.69 years.

The mean age is higher than the median age because of a few actresses who were quite a bit older than the rest when they won their awards. For example, Jessica Tandy won for her role in Driving Miss Daisy when she was 81, and Katharine Hepburn won the Oscar for On Golden Pond when she was 74. The relationship between the median and mean confirms the skewness (to the right) found in the first graph.

Here are some tips for connecting the shape of a histogram with the mean and median:

- If the histogram is skewed right, the mean is greater than the median. This is the case because skewed-right data have a few large values that drive the mean upward but do not affect where the exact middle of the data is (that is, the median).

- If the histogram is close to symmetric, then the mean and median are close to each other. Close to symmetric means the data are roughly the same in height and location on either side of the center of the histogram; it doesn't need to be exact. Close is defined in the context of the data; for example, the numbers 50 and 55 are said to be close if all the values lie between 0 and 1,000, but they are considered to be farther apart if all the values lie between 49 and 56. The histogram shown in this graph is close to symmetric. Its mean and median are both equal to 3.5.

- If the histogram is skewed left, the mean is less than the median. This is the case because skewed-left data have a few small values that drive the mean downward but do not affect where the exact middle of the data is (that is, the median). The following graph represents the exam scores of 17 students, and the data are skewed left. The mean and median of the original data set are calculated to be 70.41 and 74.00, respectively. The mean is lower than the median due to a few students who scored quite a bit lower than the others. These findings match the general shape of the histogram shown in the graph.

If for some reason you don't have a histogram of the data, and you only have the mean and median to go by, you can compare them to each other to get a rough idea as to the shape of the data set. If the mean is much larger than the median, the data are generally skewed right; a few values are larger than the rest. If the mean is much smaller than the median, the data are generally skewed left; a few smaller values bring the mean down. If the mean and median are close, you know the data are fairly balanced, or symmetric, on each side (but not necessarily bell-shaped).
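That rule of thumb is easy to turn into a quick check. The sketch below compares the mean and median of a data set and reports the likely skew direction; the exam scores used here are made-up numbers for illustration, not the data behind the graphs discussed above.

```python
import statistics

def describe_skew(data):
    """Compare mean and median to get a rough sense of a histogram's skew."""
    mean = statistics.mean(data)
    median = statistics.median(data)
    if mean > median:
        shape = "likely skewed right (a few large values pull the mean up)"
    elif mean < median:
        shape = "likely skewed left (a few small values pull the mean down)"
    else:
        shape = "roughly symmetric"
    return mean, median, shape

# Hypothetical exam scores with a couple of low outliers, skewing the data left.
scores = [45, 52, 88, 90, 91, 92, 93, 94, 95, 96]
mean, median, shape = describe_skew(scores)
print(f"mean={mean:.2f}, median={median:.2f}: {shape}")   # mean=83.60, median=91.50
```

The two low scores drag the mean well below the median, which is exactly the signature of a left-skewed histogram.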
http://www.dummies.com/how-to/content/how-the-shape-of-a-histogram-reflects-the-statisti.html
13
41
A map projection is a systematic transformation of the latitudes and longitudes of locations on the surface of a sphere or an ellipsoid into locations on a plane. Map projections are necessary for creating maps. All map projections distort the surface in some fashion. Depending on the purpose of the map, some distortions are acceptable and others are not; therefore different map projections exist in order to preserve some properties of the sphere-like body at the expense of other properties. There is no limit to the number of possible map projections.

More generally, the surfaces of planetary bodies can be mapped even if they are too irregular to be modeled well with a sphere or ellipsoid. Even more generally, projections are the subject of several pure mathematical fields, including differential geometry and projective geometry. However, "map projection" refers specifically to a cartographic projection.

Maps can be more useful than globes in many situations: they are more compact and easier to store; they readily accommodate an enormous range of scales; they are viewed easily on computer displays; they can facilitate measuring properties of the terrain being mapped; they can show larger portions of the Earth's surface at once; and they are cheaper to produce and transport. These useful traits of maps motivate the development of map projections.

However, Carl Friedrich Gauss's Theorema Egregium proved that a sphere's surface cannot be represented on a plane without distortion. The same applies to other reference surfaces used as models for the Earth. Since any map projection is a representation of one of those surfaces on a plane, all map projections distort. Every distinct map projection distorts in a distinct way. The study of map projections is the characterization of these distortions.

Projection is not limited to perspective projections, such as those resulting from casting a shadow on a screen, or the rectilinear image produced by a pinhole camera on a flat film plate. Rather, any mathematical function transforming coordinates from the curved surface to the plane is a projection. Few projections in actual use are perspective.

For simplicity most of this article assumes that the surface to be mapped is that of a sphere. In reality, the Earth and other large celestial bodies are generally better modeled as oblate spheroids, whereas small objects such as asteroids often have irregular shapes. These other surfaces can be mapped as well. Therefore, more generally, a map projection is any method of "flattening" into a plane a continuous curved surface.

Metric properties of maps

Many properties can be measured on the Earth's surface independently of its geography, among them area, shape, direction, distance, and scale. Map projections can be constructed to preserve one or more of these properties, though not all of them simultaneously. Each projection preserves or compromises or approximates basic metric properties in different ways. The purpose of the map determines which projection should form the base for the map. Because many purposes exist for maps, many projections have been created to suit those purposes.

Another consideration in the configuration of a projection is its compatibility with data sets to be used on the map. Data sets are geographic information; their collection depends on the chosen datum (model) of the Earth.
Different datums assign slightly different coordinates to the same location, so in large-scale maps, such as those from national mapping systems, it is important to match the datum to the projection. The slight differences in coordinate assignation between different datums are not a concern for world maps or other vast territories, where such differences get shrunk to imperceptibility.

Which projection is best?

The mathematics of projection do not permit any particular map projection to be "best" for everything. Something will always get distorted. Therefore a diversity of projections exists to serve the many uses of maps and their vast range of scales.

Modern national mapping systems typically employ a transverse Mercator or close variant for large-scale maps in order to preserve conformality and low variation in scale over small areas. For smaller-scale maps, such as those spanning continents or the entire world, many projections are in common use according to their fitness for the purpose.

Thematic maps normally require an equal area projection so that phenomena per unit area are shown in correct proportion. However, representing area ratios correctly necessarily distorts shapes more than many maps that are not equal-area. Hence reference maps of the world often appear on compromise projections instead. Due to the severe distortions inherent in any map of the world, within reason the choice of projection becomes largely one of æsthetics.

The Mercator projection, developed for navigational purposes, has often been used in world maps where other projections would have been more appropriate. This problem has long been recognized even outside professional circles. For example, a 1943 New York Times editorial states:

The time has come to discard [the Mercator] for something that represents the continents and directions less deceptively... Although its usage... has diminished... it is still highly popular as a wall map apparently in part because, as a rectangular map, it fills a rectangular wall space with more map, and clearly because its familiarity breeds more popularity.

A controversy in the 1980s over the Peters map motivated the American Cartographic Association (now Cartography and Geographic Information Society) to produce a series of booklets (including Which Map is Best) designed to educate the public about map projections and distortion in maps. In 1989 and 1990, after some internal debate, seven North American geographic organizations adopted a resolution recommending against using any rectangular projection (including Mercator and Gall–Peters) for reference maps of the world.

Construction of a map projection

The creation of a map projection involves two steps:

- Selection of a model for the shape of the Earth or planetary body (usually choosing between a sphere or ellipsoid). Because the Earth's actual shape is irregular, information is lost in this step.
- Transformation of geographic coordinates (longitude and latitude) to Cartesian (x,y) or polar plane coordinates. Cartesian coordinates normally have a simple relation to eastings and northings defined on a grid superimposed on the projection.

Some of the simplest map projections are literally projections, as obtained by placing a light source at some definite point relative to the globe and projecting its features onto a specified surface. This is not the case for most projections, which are defined only in terms of mathematical formulae that have no direct geometric interpretation.
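The second step above, turning longitude and latitude into plane coordinates, is just a pair of formulas once the model and projection are chosen. As a minimal sketch, assuming a spherical model of the Earth, the following implements two of the simplest normal-aspect cylindrical projections discussed later in this article: the equirectangular (plate carrée) and the Mercator.

```python
import math

EARTH_RADIUS_M = 6_371_000  # step 1: a spherical model of the Earth (assumed radius)

def plate_carree(lat_deg, lon_deg, radius=EARTH_RADIUS_M):
    """Equirectangular projection: x = R * lon, y = R * lat (angles in radians)."""
    return radius * math.radians(lon_deg), radius * math.radians(lat_deg)

def mercator(lat_deg, lon_deg, radius=EARTH_RADIUS_M):
    """Normal-aspect spherical Mercator: y = R * ln(tan(pi/4 + lat/2))."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    return radius * lon, radius * math.log(math.tan(math.pi / 4 + lat / 2))

# Step 2: the same point lands in different places under different projections.
print(plate_carree(60.0, 30.0))   # (~3.34e6 m, ~6.67e6 m)
print(mercator(60.0, 30.0))       # (~3.34e6 m, ~8.39e6 m), stretched north-south
```

Running it on a single point shows how only the second step changes between projections: the easting is the same in both cases, but the Mercator pushes the northing much further out because it stretches the map north-south by the secant of the latitude.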
Choosing a projection surface

A surface that can be unfolded or unrolled into a plane or sheet without stretching, tearing or shrinking is called a developable surface. The cylinder, cone and of course the plane are all developable surfaces. The sphere and ellipsoid do not have developable surfaces, so any projection of them onto a plane will have to distort the image. (To compare, one cannot flatten an orange peel without tearing and warping it.) One way of describing a projection is first to project from the Earth's surface to a developable surface such as a cylinder or cone, and then to unroll the surface into a plane. While the first step inevitably distorts some properties of the globe, the developable surface can then be unfolded without further distortion.

Aspects of the projection

Once a choice is made between projecting onto a cylinder, cone, or plane, the aspect of the shape must be specified. The aspect describes how the developable surface is placed relative to the globe: it may be normal (such that the surface's axis of symmetry coincides with the Earth's axis), transverse (at right angles to the Earth's axis) or oblique (any angle in between). The developable surface may also be either tangent or secant to the sphere or ellipsoid. Tangent means the surface touches but does not slice through the globe; secant means the surface does slice through the globe. Moving the developable surface away from contact with the globe never preserves or optimizes metric properties, so that possibility is not discussed further here.

A globe is the only way to represent the earth with constant scale throughout the entire map in all directions. A map cannot achieve that property for any area, no matter how small. It can, however, achieve constant scale along specific lines. Some possible properties are:

- The scale depends on location, but not on direction. This is equivalent to preservation of angles, the defining characteristic of a conformal map.
- Scale is constant along any parallel in the direction of the parallel. This applies for any cylindrical or pseudocylindrical projection in normal aspect.
- Combination of the above: the scale depends on latitude only, not on longitude or direction. This applies for the Mercator projection in normal aspect.
- Scale is constant along all straight lines radiating from a particular geographic location. This is the defining characteristic of an equidistant projection such as the Azimuthal equidistant projection. There are also projections (Maurer, Close) where true distances from two points are preserved.

Choosing a model for the shape of the Earth

Projection construction is also affected by how the shape of the Earth is approximated. In the following section on projection categories, the earth is taken as a sphere in order to simplify the discussion. However, the Earth's actual shape is closer to an oblate ellipsoid. Whether spherical or ellipsoidal, the principles discussed hold without loss of generality.

Selecting a model for a shape of the Earth involves choosing between the advantages and disadvantages of a sphere versus an ellipsoid. Spherical models are useful for small-scale maps such as world atlases and globes, since the error at that scale is not usually noticeable or important enough to justify using the more complicated ellipsoid. The ellipsoidal model is commonly used to construct topographic maps and for other large- and medium-scale maps that need to accurately depict the land surface.
A third model of the shape of the Earth is the geoid, a complex and more accurate representation of the global mean sea level surface that is obtained through a combination of terrestrial and satellite gravity measurements. This model is not used for mapping because of its complexity, but rather is used for control purposes in the construction of geographic datums. (In geodesy, the plural of "datum" is "datums" rather than "data".) A geoid is used to construct a datum by adding irregularities to the ellipsoid in order to better match the Earth's actual shape. It takes into account the large-scale features in the Earth's gravity field associated with mantle convection patterns, and the gravity signatures of very large geomorphic features such as mountain ranges, plateaus and plains.

Historically, datums have been based on ellipsoids that best represent the geoid within the region that the datum is intended to map. Controls (modifications) are added to the ellipsoid in order to construct the datum, which is specialized for a specific geographic region (such as the North American Datum). A few modern datums, such as WGS84 which is used in the Global Positioning System, are optimized to represent the entire earth as well as possible with a single ellipsoid, at the expense of accuracy in smaller regions.

A fundamental projection classification is based on the type of projection surface onto which the globe is conceptually projected. The projections are described in terms of placing a gigantic surface in contact with the earth, followed by an implied scaling operation. These surfaces are cylindrical (e.g. Mercator), conic (e.g. Albers), or azimuthal or plane (e.g. stereographic). Many mathematical projections, however, do not neatly fit into any of these three conceptual projection methods. Hence other peer categories have been described in the literature, such as pseudoconic, pseudocylindrical, pseudoazimuthal, retroazimuthal, and polyconic.

Another way to classify projections is according to properties of the model they preserve. Some of the more common categories are:

- Preserving direction (azimuthal), a trait possible only from one or two points to every other point
- Preserving shape locally (conformal or orthomorphic)
- Preserving area (equal-area or equiareal or equivalent or authalic)
- Preserving distance (equidistant), a trait possible only between one or two points and every other point
- Preserving shortest route, a trait preserved only by the gnomonic projection

Because the sphere is not a developable surface, it is impossible to construct a map projection that is both equal-area and conformal.

Projections by surface

The three developable surfaces (plane, cylinder, cone) provide useful models for understanding, describing, and developing map projections. However, these models are limited in two fundamental ways. For one thing, most world projections in actual use do not fall into any of those categories. For another thing, even most projections that do fall into those categories are not naturally attainable through physical projection. As L.P. Lee notes,

No reference has been made in the above definitions to cylinders, cones or planes. The projections are termed cylindric or conic because they can be regarded as developed on a cylinder or a cone, as the case may be, but it is as well to dispense with picturing cylinders and cones, since they have given rise to much misunderstanding.
Particularly is this so with regard to the conic projections with two standard parallels: they may be regarded as developed on cones, but they are cones which bear no simple relationship to the sphere. In reality, cylinders and cones provide us with convenient descriptive terms, but little else. Lee's objection refers to the way the terms cylindrical, conic, and planar (azimuthal) have been abstracted in the field of map projections. If maps were projected as in light shining through a globe onto a developable surface, then the spacing of parallels would follow a very limited set of possibilities. Such a cylindrical projection (for example) is one which: - Is rectangular; - Has straight vertical meridians, spaced evenly; - Has straight parallels symmetrically placed about the equator; - Has parallels constrained to where they fall when light shines through the globe onto the cylinder, with the light source someplace along the line formed by the intersection of the prime meridian with the equator, and the center of the sphere. (If you rotate the globe before projecting then the parallels and meridians will not necessarily still be straight lines. Rotations are normally ignored for the purpose of classification.) Where the light source emanates along the line described in this last constraint is what yields the differences between the various "natural" cylindrical projections. But the term cylindrical as used in the field of map projections relaxes the last constraint entirely. Instead the parallels can be placed according to any algorithm the designer has decided suits the needs of the map. The famous Mercator projection is one in which the placement of parallels does not arise by "projection"; instead parallels are placed how they need to be in order to satisfy the property that a course of constant bearing is always plotted as a straight line. The term "normal cylindrical projection" is used to refer to any projection in which meridians are mapped to equally spaced vertical lines and circles of latitude (parallels) are mapped to horizontal lines. The mapping of meridians to vertical lines can be visualized by imagining a cylinder whose axis coincides with the Earth's axis of rotation. This cylinder is wrapped around the Earth, projected onto, and then unrolled. By the geometry of their construction, cylindrical projections stretch distances east-west. The amount of stretch is the same at any chosen latitude on all cylindrical projections, and is given by the secant of the latitude as a multiple of the equator's scale. The various cylindrical projections are distinguished from each other solely by their north-south stretching (where latitude is given by φ): - North-south stretching equals east-west stretching (secant φ): The east-west scale matches the north-south scale: conformal cylindrical or Mercator; this distorts areas excessively in high latitudes (see also transverse Mercator). - North-south stretching grows with latitude faster than east-west stretching (secant² φ): The cylindric perspective (= central cylindrical) projection; unsuitable because distortion is even worse than in the Mercator projection. - North-south stretching grows with latitude, but less quickly than the east-west stretching: such as the Miller cylindrical projection (secant[4φ/5]). - North-south distances neither stretched nor compressed (1): equirectangular projection or "plate carrée". - North-south compression precisely the reciprocal of east-west stretching (cosine φ): equal-area cylindrical. 
This projection has many named specializations differing only in the scaling constant. Some of those specializations are the Gall–Peters or Gall orthographic, Behrmann, and Lambert cylindrical equal-area. This kind of projection divides north-south distances by a factor equal to the secant of the latitude, preserving area at the expense of shapes.

In the first case (Mercator), the east-west scale always equals the north-south scale. In the second case (central cylindrical), the north-south scale exceeds the east-west scale everywhere away from the equator. Each remaining case has a pair of secant lines—a pair of identical latitudes of opposite sign (or else the equator) at which the east-west scale matches the north-south scale. Normal cylindrical projections map the whole Earth as a finite rectangle, except in the first two cases, where the rectangle stretches infinitely tall while retaining constant width.

Pseudocylindrical projections represent the central meridian as a straight line segment. Other meridians are longer than the central meridian and bow outward away from the central meridian. Pseudocylindrical projections map parallels as straight lines. Along parallels, each point from the surface is mapped at a distance from the central meridian that is proportional to its difference in longitude from the central meridian. On a pseudocylindrical map, any point further from the equator than some other point has a higher latitude than the other point, preserving north-south relationships. This trait is useful when illustrating phenomena that depend on latitude, such as climate. Examples of pseudocylindrical projections include:

- Sinusoidal, which was the first pseudocylindrical projection developed. Vertical scale and horizontal scale are the same throughout, resulting in an equal-area map. On the map, as in reality, the length of each parallel is proportional to the cosine of the latitude. Thus the shape of the map for the whole earth is the region between two symmetric rotated cosine curves. The true distance between two points on the same meridian corresponds to the distance on the map between the two parallels, which is smaller than the distance between the two points on the map. The distance between two points on the same parallel is true. The area of any region is true.
- Collignon projection, which in its most common forms represents each meridian as two straight line segments, one from each pole to the equator.

The term "conic projection" is used to refer to any projection in which meridians are mapped to equally spaced lines radiating out from the apex and circles of latitude (parallels) are mapped to circular arcs centered on the apex. When making a conic map, the map maker arbitrarily picks two standard parallels. Those standard parallels may be visualized as secant lines where the cone intersects the globe—or, if the map maker chooses the same parallel twice, as the tangent line where the cone is tangent to the globe. The resulting conic map has low distortion in scale, shape, and area near those standard parallels. Distances along the parallels to the north of both standard parallels or to the south of both standard parallels are necessarily stretched.
The most popular conic maps either

- Albers conic - compress north-south distance between each parallel to compensate for the east-west stretching, giving an equal-area map, or
- Equidistant conic - keep constant distance scale along the entire meridian, typically the same or near the scale along the standard parallels, or
- Lambert conformal conic - stretch the north-south distance between each parallel to equal the east-west stretching, giving a conformal map.
- Werner cordiform, upon which distances are correct from one pole, as well as along all parallels.
- Continuous American polyconic

Azimuthal (projections onto a plane)

Azimuthal projections have the property that directions from a central point are preserved and therefore great circles through the central point are represented by straight lines on the map. Usually these projections also have radial symmetry in the scales and hence in the distortions: map distances from the central point are computed by a function r(d) of the true distance d, independent of the angle; correspondingly, circles with the central point as center are mapped into circles which have as center the central point on the map. The radial scale is r'(d) and the transverse scale r(d)/(R sin(d/R)) where R is the radius of the Earth.

Some azimuthal projections are true perspective projections; that is, they can be constructed mechanically, projecting the surface of the Earth by extending lines from a point of perspective (along an infinite line through the tangent point and the tangent point's antipode) onto the plane:

- The gnomonic projection displays great circles as straight lines. Can be constructed by using a point of perspective at the center of the Earth. r(d) = c tan(d/R); a hemisphere already requires an infinite map.
- The General Perspective projection can be constructed by using a point of perspective outside the earth. Photographs of Earth (such as those from the International Space Station) give this perspective.
- The orthographic projection maps each point on the earth to the closest point on the plane. Can be constructed from a point of perspective an infinite distance from the tangent point; r(d) = c sin(d/R). Can display up to a hemisphere on a finite circle. Photographs of Earth from far enough away, such as the Moon, give this perspective.
- The azimuthal conformal projection, also known as the stereographic projection, can be constructed by using the tangent point's antipode as the point of perspective. r(d) = c tan(d/2R); the scale is c/(2R cos²(d/2R)). Can display nearly the entire sphere's surface on a finite circle. The sphere's full surface requires an infinite map.

Other azimuthal projections are not true perspective projections:

- Azimuthal equidistant: r(d) = cd; it is used by amateur radio operators to know the direction to point their antennas toward a point and see the distance to it. Distance from the tangent point on the map is proportional to surface distance on the earth (for the case where the tangent point is the North Pole, see the flag of the United Nations).
- Lambert azimuthal equal-area. Distance from the tangent point on the map is proportional to straight-line distance through the earth: r(d) = c sin(d/2R)
- Logarithmic azimuthal is constructed so that each point's distance from the center of the map is the logarithm of its distance from the tangent point on the Earth.
r(d) = c ln(d/d0); locations closer to the tangent point than the constant d0 are not shown (figure 6-5).

Projections by preservation of a metric property

Conformal, or orthomorphic, map projections preserve angles locally, implying that they map infinitesimal circles of constant size anywhere on the Earth to infinitesimal circles of varying sizes on the map. In contrast, mappings that are not conformal distort most such small circles into ellipses of distortion. An important consequence of conformality is that relative angles at each point of the map are correct, and the local scale (although varying throughout the map) in every direction around any one point is constant. These are some conformal projections:

- Mercator: Rhumb lines are represented by straight segments
- Transverse Mercator
- Stereographic: Any circle of a sphere, great and small, maps to a circle or straight line.
- Lambert conformal conic
- Peirce quincuncial projection
- Adams hemisphere-in-a-square projection
- Guyou hemisphere-in-a-square projection

These are some projections that preserve area:

- Gall orthographic (also known as Gall–Peters, or Peters, projection)
- Albers conic
- Lambert azimuthal equal-area
- Lambert cylindrical equal-area
- Goode's homolosine
- Tobler hyperelliptical
- Snyder's equal-area polyhedral projection, used for geodesic grids.

These are some projections that preserve distance from some standard point or line:

- Equirectangular—distances along meridians are conserved
- Plate carrée—an Equirectangular projection centered at the equator
- Azimuthal equidistant—distances along great circles radiating from centre are conserved
- Equidistant conic
- Sinusoidal—distances along parallels are conserved
- Werner cordiform: distances from the North Pole are correct, as are curved distances along parallels
- Two-point equidistant: two "control points" are arbitrarily chosen by the map maker. Distance from any point on the map to each control point is proportional to surface distance on the earth.

Great circles are displayed as straight lines:

- Gnomonic projection

Direction to a fixed location B (the bearing at the starting location A of the shortest route) corresponds to the direction on the map from A to B:

- Littrow—the only conformal retroazimuthal projection
- Hammer retroazimuthal—also preserves distance from the central point
- Craig retroazimuthal aka Mecca or Qibla—also has vertical meridians

Compromise projections

Compromise projections give up the idea of perfectly preserving metric properties, seeking instead to strike a balance between distortions, or to simply make things "look right". Most of these types of projections distort shape in the polar regions more than at the equator. These are some compromise projections:

- van der Grinten
- Miller cylindrical
- Winkel Tripel
- Buckminster Fuller's Dymaxion
- B.J.S. Cahill's Butterfly Map
- Kavrayskiy VII
- Wagner VI projection
- Chamberlin trimetric
- Oronce Finé's cordiform

See also

- Snyder, J.P. (1989). Album of Map Projections, United States Geological Survey Professional Paper. United States Government Printing Office. 1453.
- Nirtsov, Maxim V. (2007). "The problems of mapping irregularly-shaped celestial bodies". International Cartographic Association.
- Choosing a World Map. Falls Church, Virginia: American Congress on Surveying and Mapping. 1988. p. 1. ISBN 0-9613459-2-6.
- Slocum, Terry A.; Robert B. McMaster, Fritz C. Kessler, Hugh H. Howard (2005). Thematic Cartography and Geographic Visualization (2nd ed.).
Upper Saddle River, NJ: Pearson Prentice Hall. p. 166. ISBN 0-13-035123-7.
- Bauer, H.A. (1942). "Globes, Maps, and Skyways (Air Education Series)". New York. p. 28
- Miller, Osborn Maitland (1942). "Notes on Cylindrical World Map Projections". Geographical Review 43 (3): 405–409.
- Raisz, Erwin Josephus. (1938). General Cartography. New York: McGraw–Hill. 2d ed., 1948. p. 87.
- Robinson, Arthur Howard. (1960). Elements of Cartography, second edition. New York: John Wiley and Sons. p. 82.
- Snyder, John P. (1993). Flattening the Earth: Two Thousand Years of Map Projections. p. 157. Chicago and London: The University of Chicago Press. ISBN 0-226-76746-9. (Summary of the Peters controversy.)
- American Cartographic Association's Committee on Map Projections, 1986. Which Map is Best. p. 12. Falls Church: American Congress on Surveying and Mapping.
- American Cartographer. 1989. 16(3): 222–223.
- Snyder, John P. (1993). Flattening the earth: two thousand years of map projections. University of Chicago Press. ISBN 0-226-76746-9.
- Snyder, John P. (1997). Flattening the earth: two thousand years of map projections. University of Chicago Press. ISBN 978-0-226-76747-5.
- Lee, L.P. (1944). "The nomenclature and classification of map projections". Empire Survey Review VII (51): 190–200. p. 193
- Weisstein, Eric W., "Sinusoidal Projection", MathWorld.
- Carlos A. Furuti. "Conic Projections"
- Weisstein, Eric W., "Gnomonic Projection", MathWorld.
- "The Gnomonic Projection". Retrieved November 18, 2005.
- Weisstein, Eric W., "Orthographic Projection", MathWorld.
- Weisstein, Eric W., "Stereographic Projection", MathWorld.
- Weisstein, Eric W., "Azimuthal Equidistant Projection", MathWorld.
- Weisstein, Eric W., "Lambert Azimuthal Equal-Area Projection", MathWorld.
- "http://www.gis.psu.edu/projection/chap6figs.html". Retrieved November 18, 2005.
- Fran Evanisko, American River College, lectures for Geography 20: "Cartographic Design for GIS", Fall 2002
- Map Projections—PDF versions of numerous projections, created and released into the Public Domain by Paul B. Anderson ... member of the International Cartographic Association's Commission on Map Projections
- A Cornucopia of Map Projections, a visualization of distortion on a vast array of map projections in a single image.
- G.Projector, free software that can render many projections (NASA GISS).
- Color images of map projections and distortion (Mapthematics.com).
- Geometric aspects of mapping: map projection (KartoWeb.itc.nl).
- Java world map projections, Henry Bottomley (SE16.info).
- Map projections http://www.3dsoftware.com/Cartography/USGS/MapProjections/, archived by the Wayback Machine (3DSoftware).
- Map projections, John Savard.
- Map Projections (MathWorld).
- Map Projections, an interactive JAVA applet to study deformations (area, distance and angle) of map projections (UFF.br).
- Map Projections: How Projections Work (Progonos.com).
- Map Projections Poster (U.S. Geographical Survey).
- MapRef: The Internet Collection of MapProjections and Reference Systems in Europe
- PROJ.4 - Cartographic Projections Library.
- Projection Reference Table of examples and properties of all common projections (RadicalCartography.net).
- PDF (1.70 MB), Melita Kennedy (ESRI).
- World Map Projections, Stephen Wolfram based on work by Yu-Sung Chang (Wolfram Demonstrations Project).
http://en.wikipedia.org/wiki/Map_projection
13
72
Some Common Alternative Conceptions (Misconceptions)

Earth Systems, Cosmology and Astronomy

The correct conception of seasonal change is that it is caused by the tilting of the earth relative to the sun’s rays. As the Earth goes around its orbit, the Northern hemisphere is at various times oriented more toward or more away from the Sun, and likewise for the Southern hemisphere. Seasonal change is explained by the changing angle of the Earth’s rotation axis toward the Earth’s orbit, which causes the alteration in light angle toward a concrete place on the Earth.

A major misconception about seasonal change, held by school students and adults (university students — and teacher trainees and primary teachers — Atwood & Atwood, 1996; Kikas, 2004; Ojala, 1997), is known as the “distance theory.” In this theory, seasons on the Earth are caused by varying distances of the Earth from the Sun on its elliptical orbit. Temperature varies in winter and summer because the distance between the Sun and the Earth is different during these two seasons. One way to see that this reasoning is erroneous is to note that the seasons are out of phase in the Northern and Southern hemispheres: when it is Summer in the North it is Winter in the South. (See Atwood & Atwood, 1996; Baxter, 1995; Kikas, 1998, 2003, 2004; Ojala, 1997.)

Knowledge about the Earth

Correct scientific theory on the earth’s shape posits a spherical shape of the earth.

Misconceptions: Elementary school children (1st through 5th grades) commonly hold misconceptions about the earth’s shape. Some children believe that the earth is shaped like a flat rectangle or a disc that is supported by the ground and covered by the sky and solar objects above its “top.” Other children think of the earth as a hollow sphere, with people living on flat ground deep inside it, or as a flattened sphere with people living on its flat “top” and “bottom.” Finally, some children form a belief in a dual earth, according to which there are two earths: a flat one on which people live, and a spherical one that is a planet up in the sky. Due to these misconceptions, elementary school children experience difficulty learning the correct scientific understanding of the spherical earth taught in school. It appears that children start with an initial concept of the earth as a physical object that has all the characteristics of physical objects in general (i.e., it is solid, stable, stationary and needing support), in which space is organized in terms of the direction of up and down and in which unsupported objects fall “down.” When students are exposed to the information that the earth is a sphere, they find it difficult to understand because it violates certain of the above-mentioned beliefs about physical objects. (See Vosniadou, 1994; Vosniadou & Brewer, 1992; Vosniadou et al. 2001.)

The correct explanation for the day/night cycle is the fact that the earth spins.

Misconceptions: Elementary school children (1st through 5th grades) show some common misconceptions about the day/night cycle.

Misconception #1: The earliest kind of misunderstanding (initial model) is consistent with observations of everyday experience. Clouds cover the Sun; day is replaced by night; the Sun sets behind the hills.
Misconception #2: Somewhat older children have “synthetic” models that represent an integration between initial (everyday) models and culturally accepted views (e.g., the sun and moon revolve around the stationary earth every 24 hours; the earth rotates in an up/down direction and the sun and moon are fixed on opposite sides; the Earth goes around the sun; the Moon blocks the sun; the Sun moves in space; the Earth rotates and revolves). (See Kikas, 1998; Vosniadou & Brewer, 1994) The correct understanding of plants is that plants are living things. Misconception: Elementary school children think of plants as nonliving things (Hatano et al., 1997). Path of blood flow in circulation The correct conception is that lungs are involved and are the site of oxygen-carbon-dioxide exchange. Also, there is a double pattern of blood flow dubbed the “double loop” or “double path” model. This model includes four separate chambers in the heart as well as a separate loop to and from the lungs. Blood from the right ventricle is pumped into the lungs to be oxygenated, whereas blood from the left ventricle is pumped to the rest of the body to deliver oxygen. Hence, one path transports de-oxygenated blood to receive oxygen, while the other path transports oxygenated blood to deliver oxygen. Misconceptions: Yip (1998) evaluated science teacher knowledge of the circulatory system. Teachers were asked to underline incorrect statements about blood circulation and provide justification for their choices. Most teachers were unable to relate blood flow, blood pressure, and blood vessel diameter. More experienced teachers often had the same misconceptions as less experienced teachers. Misconception #1: The most common misconception is the “single loop” model, wherein the arteries carry blood from the heart to the body (where oxygen is deposited and waste collected) and the veins carry blood from the body to the heart (where it is cleaned and re-oxygenated) (Chi, 2005). This conception differs from the correct conception in three ways: It does not assume that lungs are involved, but assumes that lungs are another part of the body to which blood has to travel. It does not assume that the site of oxygen-carbon-dioxide exchange is in the lungs; instead, it assumes such exchange happens in the heart It does not assume there is a double loop (double paths), pulmonary and systemic, but instead assumes that there is a single path of blood flow and the role of the circulatory system is a systemic one only. “Single loop” misconceptions contain five constituent propositions: Blood flows from the heart to the body in arteries. Blood flows from the body to the heart in veins. The body uses the “clean” blood in some way, rendering it unclean. Blood is “cleaned” or “replenished with oxygen” in the heart. Circulation is a cycle. Misconception #2: There is a “heart-to-toe” path in answer to the question of “What path does blood take when it leaves the heart?” (8th and 10th graders) (Arnaudin & Mintzes, 1985; Chi 2005) Categories of Misconceptions (Erroneous Ideas) (See Pelaez, Boyd, Rojas, & Hoover, 2005) The groups of blood circulation errors detected among prospective elementary teachers fell into five categories: Blood pathway. These are common conceptual errors about the pathway a drop of blood takes as it leaves the heart and travels through the body and lungs. 
A typical correct answer explains dual circulation with blood from the left side of the heart going to a point in the body and returning to the right side of the heart, where it is pumped to the lungs and back to the left side of the heart. Blood vessels. A correct response has blood traveling in veins to the heart and arteries carrying blood away from the heart, and the response recognizes that arteries feed and veins drain each capillary bed in an organ. Gas exchange. A correct response indicates that a concentration gradient between two compartments drives the net transport of gases across cell membranes. Gas molecule transport and utilization. A correct response explains that oxygen is transported by blood to the cells of the body and carbon dioxide is transported from the cells where it is produced and eventually back to the lungs. Lung function. A correct response explains that lungs get oxygen from the air and eliminate carbon dioxide from the body. Force and Motion of Objects The correct conception of force, which is based on Newtonian physics (Newtonian theory of mechanics), describes force as a process used to explain changes in the kinetic (caused by motion) state of physical objects. Motion is the natural state that does not need to be explained. What needs to be explained are changes in the kinetic state. Force is a feature of the interaction between two objects. It comes in interactive action-reaction pairs (e.g., the force exerted by a table on a book when the book is resting on the table) that are needed to explain, not an object’s motion, but its change in motion (acceleration). Force is an influence that may cause a body to accelerate. It may be experienced as a lift, push or pull upon an object resulting from the object's interaction with another object. Hence, static objects, such as the book on the table, can exert force. Whenever there is an interaction between two objects, there is a force upon each of the objects. When the interaction ceases, the two objects no longer experience the force. Forces only exist as a result of an interaction. Two interacting bodies exert equal and opposite forces on each other. Force has a magnitude and a direction. (See Committee on Science Learning, Kindergarten through Eighth Grade, 2007) Misconception #1: Motion/velocity implies force. One of the most deeply held misconceptions (or naive theories) about force is known as the pre-Newtonian “impetus theory” or the “acquired force” theory and it is typical among elementary, middle and high school students (see Mayer, 2003; McCloskey, 1983; Vosniadou et al., 2001) and among adults (university students — Kikas, 2003; and teacher trainee and primary teachers — Kikas, 2004). It is erroneously believed that objects are kept moving by internal forces (as opposed to external forces). Based on this reasoning, force is an acquired property of objects that move. This reasoning is central to explaining the motion of inanimate objects. They think that force is an acquired property of inanimate objects that move, since rest is considered to be the natural state of objects. Hence, the motion of objects requires explanation, usually in terms of a causal agent, which is the force of another object. Hence force is the agent that causes an inanimate object to move. The object stops when this acquired force dissipates in the environment. Hence force can be possessed, transformed or dissipated. 
This "impetus theory" misconception is evident in the following problems taken from Mayer (2007) and McCloskey, Caramazza and Green (1980). The drawing on the left (with the curved line) is the misconception response and reflects the impetus theory: the idea that when an object is set in motion it acquires a force or impetus (e.g., acquired when it went around through the tube and gained angular momentum) that keeps it moving when it gets out of the tube; however, the object will lose momentum as the force disappears. The correct drawing on the right (with the straight path) reflects the Newtonian concept that an object in motion will continue in a straight line until some external force acts upon it.

Misconception #2: Static objects cannot exert forces (no motion implies no force). Many high school students hold a classic misconception in the area of physics, in particular, mechanics. They erroneously believe that "static objects are rigid barriers that cannot exert force." The classic target problem concerns the "at rest" condition of an object. Students are asked whether a table exerts an upward force on a book that is placed on the table. Students with this misconception will claim that the table does not push up on a book lying at rest on it. However, gravity and the table exert equal but oppositely directed forces on the book, keeping the book in equilibrium and "at rest." The table's force comes from the microscopic compression or bending of the table.

Misconception #3: Only active agents exert force. Students are less likely to recognize passive forces. They may think that forces are needed more to start a motion than to stop one. Hence, they may have difficulty recognizing friction as a force.

On the correct understanding of gravity, falling objects, regardless of weight, fall at the same rate. Misconception: Heavier objects fall faster than lighter objects. Many students learning about Newtonian motion persist in their belief that heavier objects fall faster than light objects (Champagne et al., 1985).

There is one class of alternative theories (or misconceptions) that is very deeply entrenched. These relate to ontological beliefs (i.e., beliefs about the fundamental categories and properties of the world). (See Chi, 2005; Chinn & Brewer, 1998; Keil, 1979.) Some common mistaken ontological beliefs that have been found to resist change include:
the belief that objects like electrons and photons move along a single discrete path (Brewer & Chinn, 1991)
the belief that time flows at a constant rate regardless of relative motion (Brewer & Chinn, 1991)
the belief that concepts like heat, light, force, and current are a material substance (Chi, 1992)
the belief that force is something internal to a moving object (McCloskey, 1983; see the section on physics misconceptions).

Other Misconceptions in Science: the belief that rivers only flow from north to south.

Epistemological Misconceptions about the Domain of Science Itself (its objectives, methods, and purposes). Many middle school and high school students tend to see the purpose of science as manufacturing artifacts that are useful for humankind. Moreover, scientific explanations are viewed as being inductively derived from data and facts, since the hypothetical or conjectural nature of scientific theories is not well understood. Also, such students tend not to differentiate between theories and evidence, and have trouble evaluating theories in light of evidence (see Mason, 2002, for a review).
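To make the falling-objects point concrete, the Newtonian prediction that objects of different mass fall at the same rate (when air resistance is ignored) can be checked with a few lines of code. The sketch below is an added illustration, not part of the source material; the drop height, time step and masses are arbitrary choices.

```python
# Minimal sketch: two objects of very different mass dropped from the same height.
# Under gravity alone, the acceleration is a = F/m = (m*g)/m = g for both,
# so the fall time does not depend on mass.

g = 9.81        # gravitational acceleration, m/s^2
height = 20.0   # drop height in metres (arbitrary choice)
dt = 0.001      # time step for the simple simulation, seconds

def fall_time(mass_kg):
    """Simulate free fall with small Euler steps.
    mass_kg is deliberately unused: the acceleration of free fall does not depend on it."""
    position = height
    velocity = 0.0
    t = 0.0
    while position > 0.0:
        acceleration = g            # independent of mass_kg
        velocity += acceleration * dt
        position -= velocity * dt
        t += dt
    return t

print(fall_time(0.1))    # a 100 g object
print(fall_time(10.0))   # a 10 kg object: essentially the same time, about 2.0 s
```

Both calls print essentially the same fall time (about 2.0 seconds for a 20 m drop), because the mass cancels out of a = F/m. That is exactly the point the "heavier falls faster" intuition misses.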
A correct understanding of money embodies the value of coin currency as noncorrelated with its size. Misconception: At the PreK level, children hold a core misconception about money and the value of coins. Students think nickels are more valuable than dimes because nickels are bigger.

Correct understanding of subtraction includes the notion that the columnar order (top to bottom) of the problem cannot be reversed or flipped (Brown & Burton, 1978; Siegler, 2003; Williams & Ryan, 2000).

Misconception #1: Students (age 7) show a "smaller-from-larger" error: they assume that subtraction entails subtracting the smaller digit in each column from the larger digit, regardless of which is on top. (A short simulation of this error appears at the end of this section.)

Misconception #2: When subtracting from 0 (when the minuend includes a zero), there are two subtypes of misconceptions:
Misconception a: Flipping the two numbers in the column with the 0. In the problem 307 - 182, 0 - 8 is treated as 8 - 0, exemplified by a student who wrote "8" in that column of the answer.
Misconception b: Lack of decrementing, i.e., not decrementing the number to the left of the 0 (because of the first bug above, nothing was borrowed from this column). In the problem 307 - 182, this means not reducing the 3 to 2.

Correct understanding of multiplication includes the knowledge that multiplication does not always increase a number. Misconception: Students believe that multiplication always increases a number. For example, take the number 8: 3 x 8 = 24; 5 x 8 = 40. This impedes students' learning of the multiplication of a (positive) number by a fraction less than one, such as 1/2 x 8 = 4.

A common misconception about division comes in the form of "division as sharing" (Nunes & Bryant, 1996), or the "primitive, partitive model of division" (Tirosh, 2000). In this model, an object or collection of objects is divided into a number of equal parts or sub-collections (e.g., Five friends bought 15 lbs. of cookies and shared them equally. How many pounds of cookies did each person get?). The primitive partitive model places three constraints on the operation of division: the divisor (the number by which a dividend is divided) must be a whole number; the divisor must be less than the dividend; and the quotient (the result of the division problem) must be less than the dividend.

Hence, children have difficulty with the following two problems because they violate the "dividend is always greater than the divisor" constraint (Tirosh, 2002):
"A five-meter-long stick was divided into 15 equal sticks. What is the length of each stick?" A common incorrect response to this problem is 15 divided by 5 (instead of the correct 5 divided by 15).
"Four friends bought 1/4 kilogram of chocolate and shared it equally. How much chocolate did each person get?" A common incorrect response to this problem is 4 x 1/4, or 4 divided by 4 (instead of the correct 1/4 divided by 4).

Similarly, children have difficulty with the following problem because the primitive, partitive model implies that "division always makes things smaller" (Tirosh, 2002): "Four kilograms of cheese were packed in packages of 1/4 kilogram each. How many packages contained this amount of cheese?" Because of this belief they do not view division as a possible operation for solving this word problem; they incorrectly choose the expression "1/4 x 4" as the answer (see Fischbein, Deri, Nello & Marino, 1985).

This "primitive, partitive" model interferes with children's ability to divide fractions, because students believe you cannot divide a small number by a larger number: it would be impossible to share less among more. Indeed, even teacher trainees can have this preconception of division "as sharing." Teachers were unable to provide contexts for the following problem (Goulding, Rowland, & Barber, 2002): 2 divided by 1/4.

The correct conception of negative numbers is that these are numbers less than zero. They are usually written by indicating their opposite, which is a positive number, with a preceding minus sign (see Williams & Ryan, 2000). A separation misconception means treating the two parts of the number, the minus sign and the number, separately. In number lines, the scale may be marked: -20, -30, 0, 10, 20... (because the ordering is 20 then 30, and the minus sign is attached afterwards), and later the sequence gets -4 inserted thus: -7, -4, 1,... (because the sequence is read 1, 4, 7 and the minus sign is attached afterwards). Similarly, we can explain -4 + 7 = -11: the student computes 4 + 7 = 11 and attaches the minus sign afterwards.

The correct conception of a fraction is of the division of one cardinal number by another. Children start school with an understanding of counting: numbers are what one gets when one counts collections of things (the counting principles). Students have moved towards using counting words and other symbols that are numerically meaningful. The numbering of fractions is not consistent with the counting principles, including the idea that numbers result when sets of things are counted and that addition involves putting two sets together. One cannot count things to generate a fraction. A fraction, as noted, is defined as the division of one cardinal number by another. Moreover, some counting principles do not apply to fractions. For example, one cannot use counting-based algorithms for ordering fractions: 1/4 is not more than 1/2. In addition, the nonverbal and verbal counting principles do not map onto the tripartite symbolic representation of fractions (two cardinal numbers separated by a line). (See the misconception examples that follow, and Hartnett & Gelman, 1998.) Misconceptions reflect children's tendency to distort fractions in order to fit their counting-based number theory, instead of viewing a fraction as a new kind of number.

Misconception #1: Students treat an increase in the denominator as an increase in the fraction's value. This includes a natural-number ordering rule for fractions based on the cardinal value of the denominator (see Hartnett & Gelman, 1998). Example: Elementary and high school students think 1/4 is larger than 1/2 because 4 is more than 2, and they seldom read 1/2 correctly as "one half." Rather, they use a variety of alternatives, including "one and two," "one and a half," "one plus two," "twelve," and "three." (See Gelman, Cohen, & Hartnett, 1989, cited in Hartnett & Gelman, 1998.)

Misconception #2: When adding fractions, the process is taken to be adding the two numerators to form the sum's numerator and then adding the two denominators to form its denominator. Example: 1/2 + 1/3 = 2/5 (see Siegler, 2003).

The correct understanding of the decimal system is of a numeration system based on powers of 10. A number is written as a row of digits, with each position in the row corresponding to a certain power of 10. A decimal point in the row divides it into those powers of 10 equal to or greater than 0 and those less than 0, i.e., negative powers of 10. Positions farther to the left of the decimal point correspond to increasing positive powers of 10 and those farther to the right to increasing negative powers, i.e., to division by higher positive powers of 10. A number written in the decimal system is called a decimal, although sometimes this term is used to refer only to a proper fraction written in this system and not to a mixed number. Decimals are added and subtracted in the same way as integers (whole numbers), except that when these operations are written in columnar form, the decimal points in the column entries and in the answer must all be placed one under another. In multiplying two decimals, the operation is the same as for integers except that the number of decimal places in the product (i.e., digits to the right of the decimal point) is equal to the sum of the decimal places in the factors (e.g., the factor 7.24 with two decimal places and the factor 6.3 with one decimal place have the product 45.612 with three decimal places). In division (e.g., 12.8 divided by 4.32), the decimal point in the divisor (4.32) is shifted to the extreme right (i.e., to 432.) and the decimal point in the dividend (12.8) is shifted the same number of places to the right (to 1280), with one or more zeros added before the decimal point to make this possible. The decimal point in the quotient is then placed above that in the dividend, zeros are added to the right of the decimal point in the dividend as needed, and the division proceeds the same as for integers.

Misconception #1: Students often use a "separation strategy," whereby they separate the whole (integer) part and the decimal part and treat them as different entities: the two parts before and after the decimal point are handled separately. This has been seen in pupils (Williams & Ryan, 2000) as well as in beginning preservice teachers (Ryan & McCrae, 2005). Example (division by 100): 300.62 divided by 100. Correct answer: 3.0062. Misconception answer: 3.62. Example: When given 7.7, 7.8, 7.9, students continue the scale with 7.10, 7.11.

Misconception #2: This relates to the ordering of decimal fractions from largest to smallest (Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985). This misconception is also seen in primary teacher trainees (Goulding et al., 2002). Here is an example of a mistaken ordering: 0.203; 2.35 x 10^-2 (two hundredths); 2.19 x 10^-1 (one fifth). A lack of connection exists in the knowledge base between different forms of numerical expressions, along with difficulties with more than two decimal places.

Misconception a: The larger/longer number is taken to be the one with more digits to the right of the decimal point, e.g., 3.214 is judged greater than 3.8 (Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985; Siegler, 2003). This is known as the "whole number rule" because children are using their knowledge of whole number values in comparing decimal fractions (Resnick et al., 1989). Whole number errors derive from students' applying rules for interpreting multidigit integers. Children using this rule appear to have little knowledge of decimal numbers: their representation of the place value system does not contain the critical information of column values, column names and the role of zero as a placeholder (see Resnick et al., 1989).

Misconception b: The largest/longest decimal is taken to be the smallest; equivalently, the one with the fewest digits to the right of the decimal point is judged largest. Given the pair 1.35 and 1.2, 1.2 is viewed as greater. Similarly, 2.43 is judged larger than 2.897 (Mason & Ruddock, 1986, cited in Goulding et al., 2002; Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985; Siegler, 2003; Ryan & McCrae, 2005). This is known as the "fraction" rule because children appear to be relying on ordinary fraction notation and their knowledge of the relation between size of parts and number of parts (Resnick et al., 1989). Fraction errors derive from children's attempts to interpret decimals as fractions. For instance, if they know that thousandths are smaller parts than hundredths, and that three-digit decimals are read as thousandths whereas two-digit decimals are read as hundredths, they may infer that longer decimals, because they refer to smaller parts, must have lower values (Resnick et al., 1989). These children are not able to coordinate information about the size of parts with information about the number of parts; when attending to size of parts (specified by the number of columns) they ignore the number of parts (specified by the digits).

Misconception c: Students make incorrect judgments about ordering numbers that include decimal points when one number has one or more zeros immediately to the right of the decimal point or has other digits to the right of the decimal point. Hence, in ordering the three numbers 3.214, 3.09 and 3.8, a student correctly chooses the number with the zero as the smallest, but then resorts to the "larger number is the one with more digits to the right" rule (i.e., 3.09, 3.8, 3.214) (Resnick et al., 1989; Sackur-Grisvard & Leonard, 1985). This is known as the "zero rule" because it appears to be generated by children who are aware of the place-holder function of zero but do not have a fully developed place value structure. As a result, they apply their knowledge that zero is very small to conclude that the entire decimal must be small (see Resnick et al., 1989).

Misconception #3: Multiplication of decimals. Example: 0.3 x 0.24. Correct answer: 0.072. Misconception answer: multiply 3 x 24 and adjust two decimal places, giving 0.72. (This is seen in beginning pre-service teachers as well.)

Misconception #4: Units, tenths and hundredths. Example: Write 912 + 4/100 in decimal form. Correct answer: 912.04. Misconception answers include 912.004, and 912.25 (treating 4/100 as 1/4, i.e., 0.25).

Overgeneralization of conceptions developed for whole numbers (cited in Williams & Ryan, 2000):
Misconception #1: Ignoring the minus or % sign. Errors such as: 4 + -7 = -11; -10 + 15 = 25.
Misconception #2: Thinking that zero is the lowest number.

Misconception #1: Incorrect generalization or extension of correct rules. Siegler (2003) provides the following example: the distributive principle indicates that a x (b + c) = (a x b) + (a x c). Some students erroneously extend this principle on the basis of superficial similarities and produce: a + (b x c) = (a + b) x (a + c).

Misconception #2: Variable misconception. Correct understanding of variables means that a student knows that letters in equations represent, at once, a range of unspecified numbers/values. It is very common for middle school students to have misconceptions about core concepts in algebra, including the concept of a variable (Küchemann, 1978; Knuth, Alibali, McNeil, Weinberg, & Stephens, 2005; MacGregor & Stacey, 1997; Rosnick, 1981). This misconception can begin in the early elementary school years and then persist through the high school years.
There are several levels or kinds of variable misconceptions.

Variable misconception, Level 1: A letter is assigned one numerical value from the outset.

Variable misconception, Level 3: A letter is interpreted as a label for an object or as an object itself. Example: At a university, there are six times as many students as professors. This fact is represented by the equation S = 6P. In this equation, what does the letter S stand for? a. number of students (Correct); c. students (Misconception); d. none of the above.

Misconception #3: Equality misconception. Correct understanding of equivalence (the equal sign) is the "relational" view of the equal sign: understanding that the equal sign is a symbol of equivalence (i.e., a symbol that denotes a relationship between two quantities). Students exhibit a variety of misconceptions about equality (Falkner, Levi, & Carpenter, 1999; Kieran, 1981, 1992; Knuth et al., 2005; McNeil & Alibali, 2005; Steinberg, Sleeman, & Ktorza, 1990; Williams & Ryan, 2000). The equality misconception is also evident in adults, such as college students (McNeil & Alibali, 2005). Students do not understand the concept of "equivalent equations" and basic principles of transforming equations. Often, they do not know how to keep both sides of an equation equal, so they do not add or subtract equally from both sides of the equal sign. Example: In solving x + 3 = 7, a next step could be: A. x + 3 - 3 = 7 - 3 (Correct); B. x + 3 + 7 = 0; C. = 7 - 3 (Misconception); D. 3x = 7. It is also commonly assumed that the answer (solution) is the number that comes after the equal sign (i.e., the answer is on the right).

The correct understanding of poems includes the notion that a poem need not rhyme. The misconception is that poems must rhyme.

A correct understanding of language includes the knowledge that language can be used both literally and nonliterally. The misconception is that language is always used literally. Many elementary school children have difficulty understanding nonliteral or figurative uses of language, such as metaphor and verbal irony. In these nonliteral uses of language, the speaker's intention is to use an utterance to express a meaning that is not the literal meaning of the utterance. In irony, speakers express a meaning that is opposite to the literal meaning (e.g., while standing in the pouring rain, one says "What a lovely day."). Metaphor is a figure of speech in which a term or phrase is applied to something to which it is not literally applicable in order to suggest a resemblance, as in "All the world's a stage" (Shakespeare). Students have difficulty understanding nonliteral (figurative) uses of language because they have a misconception that language is used only literally. (See Winner, 1997.)
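The "smaller-from-larger" subtraction error described earlier in this section is systematic enough to simulate. The following sketch is an added illustration, not part of the source article; it reproduces the kind of answer a student with this bug gives for the 307 - 182 example discussed above.

```python
def buggy_subtract(minuend, subtrahend):
    """Simulate the 'smaller-from-larger' bug: in every column the smaller
    digit is subtracted from the larger one, and nothing is ever borrowed."""
    top = str(minuend)
    bottom = str(subtrahend).rjust(len(top), "0")
    columns = []
    for t, b in zip(top, bottom):
        larger, smaller = max(int(t), int(b)), min(int(t), int(b))
        columns.append(str(larger - smaller))
    return int("".join(columns))

print(buggy_subtract(307, 182))  # 285 (note the 8 from treating 0 - 8 as 8 - 0)
print(307 - 182)                 # 125, the correct answer, for comparison
```

Because the bug is applied consistently, it produces predictable wrong answers, which is why error patterns like this can be diagnosed from student work (Brown & Burton, 1978).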
http://www.apa.org/education/k12/alternative-conceptions.aspx
While the United States was fighting to get a man on the Moon by the end of the 1960s, the Soviet Union was working hard to return a sample of lunar soil as part of the robotic Luna program. Some missions were successful and others weren't, but for decades no one was really sure why. That's changed: last week, NASA's Lunar Reconnaissance Orbiter photographed the remnants of two Luna missions, Luna 23 and 24, and, nearly four decades later, is helping solve the mysteries these missions opened.

The Luna program was conceived in 1955 by Sergei Korolev, the elusive Soviet Chief Designer responsible for the USSR's early successes in space. He proposed building a multi-stage version of the R-7 rocket (the one that would launch Sputnik into orbit two years later) that would be powerful enough to deliver a payload to the Moon. He envisioned Soviet probes orbiting, landing on, and photographing the Moon before the Americans. The eventual goal would be for a Luna spacecraft to return a soil sample.

The sample return spacecraft consisted of a descent stage, an ascent stage, and an Earth-return capsule. The entire suite was designed to land on the surface, where an instrument would gather the lunar sample and place it in the Earth-return capsule. The ascent stage would then fire its main engine and send the mission's payload back to Earth, leaving the descent stage on the surface.

Success came early to the Luna program. In 1959, Luna 2 became the first spacecraft to reach the lunar surface when it crashed at a point in the north near Mare Imbrium (the Sea of Rains). Luna 3 looped around the Moon and sent back the first pictures of its far side the same year.

Luna 15 marks the Soviet Union's intersection with Apollo. The third spacecraft designed for sample collection and return, it was launched three days before Apollo 11. On July 20, 1969, Neil Armstrong and Buzz Aldrin made history's first manned lunar landing; the next day, the orbiting Luna 15 fired its retrorockets to descend towards the surface. Unfortunately, it crashed while the Apollo 11 crew was still on the lunar surface.

Luna 23 met a similar fate. Launched on October 28, 1974, it malfunctioned halfway through its mission and ended up landing hard in Mare Crisium (the Sea of Crises, in the northeast of the Earth-facing side). The spacecraft stayed in contact with Earth after its hard landing, but it couldn't get a sample. Mission scientists suspected the spacecraft had tipped over as a result of its landing, but without a way to image the Moon at high resolution they weren't able to confirm it, and the mystery endured. It turns out they were indeed right: the whole spacecraft is still on the surface, its ascent engine never fired, and high-resolution images from LRO's cameras show it lying on its side.

LRO also captured images of Luna 24, the mission that picked up where Luna 23 left off by landing, collecting, and returning samples from a point less than 2.5 km away on August 18, 1976. After less than 24 hours, Luna 24 fired its ascent stage and sent a 0.375-pound sample of lunar regolith to scientists on Earth. The sample puzzled scientists: it had unexpected characteristics given the understanding of Mare Crisium geology at the time. The new picture of the spacecraft's landing point has shed light on why the sample differed from the observed lunar environment around it. Images from LRO's camera have solved the mystery by putting the lander in geographic and geological context.

Luna 24 landed near a small crater that had excavated material from deeper layers of the ancient lava flows. The spacecraft therefore returned a sample not of its immediate surroundings, but of material from beneath the surface that hadn't been exposed to space nearly as long. This accounts for the nearly 40-year-old mystery.

Luna 9 and 13 have yet to be imaged by NASA's LRO. It's yet to be seen if it was actually pesky Moon aliens that wrecked their missions, but with the last pair of Luna spacecraft set to be imaged by NASA sometime in the near future, we'll find out soon enough.
http://motherboard.vice.com/blog/soviet-moon-mystery-solved-by-nasa-50-years-later
We are all familiar with the climate on Earth: the seasons, the range of surface temperatures that are just right for being a water world, the oxygen we breathe, the ozone layer that protects us from UV radiation. In short: habitable. So what other bodies in the Solar System might be (or might have been) habitable, and why aren't they today?

Mars probably comes to mind, and for good reason. Mars has the most similar climate to our own, with water ice caps at the poles, seasonal snow, and dust storms. This is because Mars has an axial tilt similar to Earth's, which creates similar seasonal temperature variations. However, the colder average temperatures and the thin atmosphere mean liquid water can only exist on the surface around midsummer and at the lowest elevations (where the atmospheric pressure is greatest). The thin atmosphere also means the surface is exposed to intense UV radiation. Mars may not be habitable today (for life on the surface), but climates change.

Hubble image of Mars engulfed in a global dust storm, with its polar caps peeking through. Image courtesy of NASA.

Several lines of evidence point to Mars being wet and warm early in its history. Water-carved channels, minerals formed by interaction with groundwater (like gypsum), river delta deposits, and what may be a shoreline all the way around the northern lowlands (which would have been a giant ocean) all point to lots of liquid water on the surface sometime in the distant past. So why was Mars so much warmer and wetter than it is today, and why did it change? These are fundamental questions about climate change that have yet to be fully answered. Early Mars likely had a thicker atmosphere, made mostly of CO2 as it is today, which would have warmed the surface through the greenhouse effect.

One way to understand the climate early in Mars' history is to study the oldest rocks and landforms. Another is to look at more recent climate changes, which are likely preserved in the polar ice caps. Just as ice cores on Earth provide a record of annual changes in climate, the thick stacks of polar ice on Mars have internal layering that suggests they were built up one layer at a time, for millions if not billions of years. (Some of the research I do here at the Museum is directly related to the internal structure of these ice caps, which I mapped out using orbital radar data. I am currently working to understand smaller-scale features buried in the ice.)

So if one of our neighbors may have been habitable in the past, what about our nearest neighbor, Venus? Venus is almost the same size as Earth, and only slightly closer to the Sun. However, its axis barely tilts relative to its orbit, so it has no seasons like Earth and Mars do. We know less about ancient Venus than we do about Mars, because the surface of Venus is relatively young (~1 billion years old). However, we think the atmosphere is much older than the surface, made up of mostly CO2 (like Mars, and like early Earth). With an atmosphere roughly 100 times as massive as Earth's, its runaway greenhouse effect long ago boiled all the water off the surface. Some of that water is bound to sulfur and makes up the sulfuric acid clouds that circle the planet, but much of it was broken down in the atmosphere and removed by the solar wind. Venus is dry and hot, despite its clouds reflecting 80% of the sunlight that arrives, since it very effectively traps the remaining 20%.

Clouds swirl around the south pole of Venus, imaged in UV by Venus Express. Image courtesy of the European Space Agency.

So was Venus ever more like Earth? Being so similar to Earth, Venus likely formed from the same material. The key to their different climates today may lie partly in Earth's plate tectonics, which bury carbon-rich sedimentary rocks (taking CO2 out of the atmosphere); Venus instead keeps all of its CO2 in the atmosphere. The clues to climate change on Venus will probably be found in the composition of its atmosphere, with isotopic ratios of elements like carbon and hydrogen pointing the way to understanding when and why it became so hot and dry.

Only those three inner planets in our Solar System have atmospheres thick enough and persistent enough to have climates that change over time. However, one moon in our Solar System, larger in diameter than the planet Mercury, has an atmosphere. In fact, Titan, a moon of Saturn, was once thought to be the largest moon in the Solar System precisely because its atmosphere is so thick (1.5 times the surface pressure of Earth's atmosphere).

Titan is the only moon in the Solar System with a thick atmosphere, imaged by Cassini. Image courtesy of NASA.

Titan is particularly interesting because its atmosphere is made up mostly of nitrogen, just like the Earth's. The remainder is mostly methane, which breaks down easily in the atmosphere and has to be replenished every ~50 million years; this implies some unknown but ongoing process. Titan gets about 100 times less sunlight than the Earth, so its surface is frigid, cold enough that water ice is as hard as rock. So while Titan is not currently habitable for life as we know it on Earth, it is the only other place in the Solar System with rain (made of methane and ethane). However, in another 5 billion years the Sun will become a red giant star, and Titan probably will be warm enough to have liquid water on its surface, making it habitable at last. For the time being, understanding the methane cycle on Titan (perhaps analogous to the water cycle on Earth) will help us understand climate change on Titan, and may give us insight into the behavior of climate on early Earth.

Titan, Venus, and Mars all have something to teach us about the possibilities for climate change and habitability on Earth. While nothing as dramatic as the changes experienced by Mars or Venus is likely to happen anytime soon on Earth, we do know that smaller changes in climate have had big effects on life, and vice versa. When photosynthesis appeared on Earth ~2.5 billion years ago, it put oxygen into the atmosphere for the first time. When the "snowball Earth" episode ended ~500 million years ago, the warmer and friendlier climate produced macroscopic life for the first time. When extensive volcanism occurred ~250 million years ago, ~95% of life on Earth was wiped out. When the aftermath of a large impact cooled the climate ~65 million years ago, the dinosaurs died off. In the last million years, according to ice core records from Greenland and Antarctica, recurring periods of warming and cooling (correlated with increasing and decreasing amounts of CO2 in the atmosphere) have caused repeated ice ages and interglacial periods; during the most recent interglacial period (from ~10,000 years ago to today), humanity has thrived.

The one climate in our Solar System that is "just right" for life, imaged by Apollo 17. Image courtesy of NASA.

Currently we are blessed with a friendly climate. What will help us best understand it? What more might we want to know about changes in other climates? What is the role of humanity in the future climate of Earth?
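A quick way to see where figures like "100 times less sunlight" come from is the inverse-square law: the sunlight a body receives falls off with the square of its distance from the Sun. The short sketch below is an illustration added to this post, not the author's; the distances are approximate mean orbital distances in astronomical units.

```python
# Relative sunlight (insolation) compared with Earth, from the inverse-square law.
mean_distance_au = {
    "Venus": 0.72,
    "Earth": 1.00,
    "Mars": 1.52,
    "Saturn/Titan": 9.58,
}

for body, d in mean_distance_au.items():
    relative_flux = 1.0 / d ** 2   # flux scales as 1 / distance^2
    print(f"{body:13s} receives about {relative_flux:.3f} times Earth's sunlight")

# Titan comes out near 0.011, i.e. roughly 100 times less sunlight than Earth,
# and Mars near 0.43, consistent with its colder average temperatures.
```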
Michelle Selvans is a planetary geophysicist in the Center for Earth and Planetary Studies at the National Air and Space Museum.
http://blog.nasm.si.edu/category/planetary-science/
Symbols are the basic building blocks of mathematics. After you have studied mathematics at advanced level for a while you will come to appreciate that certain symbols tend to mean certain things. For example, x and y are used to represent variables, whereas a and b are used to stand for constants. Greek symbols are commonly used too.

A proof is a convincing demonstration that some mathematical statement is necessarily true, within the accepted standards of the field. A proof is a logical argument, not an empirical one. That is, the proof must demonstrate that a proposition is true in all cases to which it applies, without a single exception. An unproven proposition believed or strongly suspected to be true is known as a conjecture. The concept of proof is central to mathematics at an advanced level.

Laws of indices for all rational exponents. Use and manipulation of surds. Quadratic functions, equations and graphs. Completing the square. Simultaneous equations. Solution of linear and quadratic inequalities. Algebraic manipulation of polynomials, including expanding brackets and collecting like terms, and factorisation. Graphs of functions; sketching curves defined by simple equations. Geometrical interpretation of the algebraic solution of equations. Use of intersection points of graphs of functions to solve equations. Knowledge of the effect of simple transformations on the graph of y = f(x) as represented by y = af(x), y = f(x) + a, y = f(x + a), y = f(ax).

Equation of a straight line, including the three common forms y = mx + c, y - y1 = m(x - x1) and ax + by + c = 0. The equation of a line through two given points and the equation of a line parallel (or perpendicular) to a given line through a given point. Conditions for two straight lines to be parallel or perpendicular to each other. Try the Geogebra page for an on-line coordinate geometry program where you can try out some ideas about linear equations.

How to generate sequences from the formula for the nth term; how to find the nth term and sum of the first n terms of an arithmetic sequence; how to use summation notation. Convergence & divergence.

Differentiation is used to find the gradient function (derivative) for a curve, the gradient at any point on a curve, and also to find the equation of the tangent or normal to a curve at a point on the curve. Differentiation, as part of calculus, is used in science and engineering, and was developed originally in the 17th century by Newton and Leibniz.

Integration may be seen as the reverse of differentiation. The principles of integration were formulated by Isaac Newton and Gottfried Leibniz in the late seventeenth century. Integration can be used to find areas and volumes of mathematically defined shapes and is used extensively in science.
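As a worked illustration of the arithmetic-sequence material above (the nth term and the sum of the first n terms), here is a minimal sketch. It is an added example rather than part of the syllabus page, and the particular first term and common difference are arbitrary.

```python
# Arithmetic sequence: a, a + d, a + 2d, ...
# nth term:           u_n = a + (n - 1) * d
# sum of first n:     S_n = (n / 2) * (2a + (n - 1) * d)

def nth_term(a, d, n):
    return a + (n - 1) * d

def sum_first_n(a, d, n):
    return n * (2 * a + (n - 1) * d) / 2

a, d = 3, 4                        # the sequence 3, 7, 11, 15, ...
print(nth_term(a, d, 10))          # 39
print(sum_first_n(a, d, 10))       # 210.0

# Brute-force check of the closed-form sum against direct addition:
print(sum(nth_term(a, d, k) for k in range(1, 11)))   # 210
```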
http://www.mathsnetalevel.com/module.php?ref=E1
The Möbius strip or Möbius band is a topological object with only one side (a one-sided surface) and only one boundary component. It was discovered independently by the German mathematicians August Ferdinand Möbius and Johann Benedict Listing in 1858. A model can easily be created by taking a paper strip, giving it a half-twist, and then joining the ends of the strip together to form a single loop. In Euclidean space there are in fact two types of Möbius strip depending on the direction of the half-twist: if the right hand twists the right end of the strip in a clockwise manner, the result is a right-handed Möbius strip. The Möbius strip therefore exhibits chirality.

The Möbius strip has several curious properties. If you cut down the middle of the strip, instead of getting two separate strips, it becomes one long strip with two half-twists in it (not a Möbius strip). If you cut this one down the middle, you get two strips wound around each other. Alternatively, if you cut along a Möbius strip about a third of the way in from the edge, you will get two strips: one is a thinner Möbius strip, the other is a long strip with two half-twists in it (not a Möbius strip). Other interesting combinations of strips can be obtained by making Möbius strips with two or more flips in them instead of one. For example, a strip with three half-twists, when divided lengthwise, becomes a strip tied in a trefoil knot. Cutting a Möbius strip, giving it extra twists, and reconnecting the ends produces unexpected figures called paradromic rings.

The Möbius strip is often cited as the inspiration for the infinity symbol, since if one were to stand on the surface of a Möbius strip, one could walk along it forever. However, this may be apocryphal, since the symbol had been in use to represent infinity even before the Möbius strip was discovered.

Geometry and topology

One way to represent the Möbius strip as a subset of R3 is to use the parametrization
x(u, v) = (1 + (v/2) cos(u/2)) cos(u)
y(u, v) = (1 + (v/2) cos(u/2)) sin(u)
z(u, v) = (v/2) sin(u/2)
where 0 ≤ u < 2π and -1 ≤ v ≤ 1. This creates a Möbius strip of width 1 whose center circle has radius 1, lies in the x-y plane and is centered at (0, 0, 0). The parameter u runs around the strip while v moves from one edge to the other.

In cylindrical polar coordinates (r, θ, z), an unbounded version of the Möbius strip can be represented by the equation log(r) sin(θ/2) = z cos(θ/2).

Topologically, the Möbius strip can be defined as the square [0,1] × [0,1] with its top and bottom sides identified by the relation (x, 0) ~ (1 - x, 1) for 0 ≤ x ≤ 1.

The Möbius strip is a two-dimensional compact manifold (i.e. a surface) with boundary. It is a standard example of a surface which is not orientable. The Möbius strip is also a standard example used to illustrate the mathematical concept of a fiber bundle. Specifically, it is a nontrivial bundle over the circle S1 with fiber the unit interval, I = [0,1]. Looking only at the edge of the Möbius strip gives a nontrivial two-point (or Z2) bundle over S1.

A closely related "strange" geometrical object is the Klein bottle. A Klein bottle can be produced by gluing two Möbius strips together along their edges; this cannot be done in ordinary three-dimensional Euclidean space without creating self-intersections. Another closely related manifold is the real projective plane. If a single hole is punctured in the real projective plane, what is left is a Möbius strip. Going in the other direction, if one glues a disk to a Möbius strip by identifying their boundaries, the result is the projective plane.
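The parametrization given above is easy to explore numerically. The following sketch is an added illustration (not part of the original article); it simply evaluates the three coordinate formulas at a few sample parameter values.

```python
import math

def mobius_point(u, v):
    """Point on the Möbius strip for 0 <= u < 2*pi and -1 <= v <= 1,
    using the width-1, radius-1 parametrization quoted in the text."""
    x = (1 + (v / 2) * math.cos(u / 2)) * math.cos(u)
    y = (1 + (v / 2) * math.cos(u / 2)) * math.sin(u)
    z = (v / 2) * math.sin(u / 2)
    return x, y, z

# Sample a coarse grid of points along the strip.
for i in range(8):
    u = 2 * math.pi * i / 8
    for v in (-1.0, 0.0, 1.0):
        x, y, z = mobius_point(u, v)
        print(f"u={u:5.2f}  v={v:+.1f}  ->  ({x:+.3f}, {y:+.3f}, {z:+.3f})")
```

Note that mobius_point(2 * math.pi, v) coincides with mobius_point(0, -v): after one trip around the strip the two edges have swapped, which is exactly the half-twist identification (x, 0) ~ (1 - x, 1) described above.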
In order to visualize how the Möbius strip is related to the real projective plane, it is helpful to deform the Möbius strip so that its boundary is an ordinary circle. Such a figure is called a cross-cap (a cross-cap can also mean this figure with the disk glued in, i.e. an immersion of the projective plane in R3). It is a common misconception that a cross-cap cannot be formed in three dimensions without the surface intersecting itself. In fact it is possible to embed a Möbius strip in R3 with boundary a perfect circle. Here is the idea: let C be the unit circle in the xy plane in R3. Now connect antipodal points on C, i.e., points at angles θ and θ + π, by an arc of a circle. For θ between 0 and π/2 make the arc lie above the xy plane, and for other θ make the arc lie below (with two places where the arc lies in the xy plane). However, if a disk is then glued in to the boundary circle, the resulting projective plane necessarily intersects itself, since the projective plane cannot be embedded in R3.

In terms of identifications of the sides of a square, as given above: the real projective plane is made by gluing the remaining two sides with "consistent" orientation (arrows making an anti-clockwise loop), and the Klein bottle is made the other way.

Art and technology

The Möbius strip has provided inspiration both for sculptures and for graphical art. M. C. Escher is one of the artists who was especially fond of it and based several of his lithographs on this mathematical object. One famous one, Möbius Strip II, features ants crawling around the surface of a Möbius strip. It is also a recurrent feature in science fiction stories, such as Arthur C. Clarke's The Wall of Darkness. Science fiction stories sometimes suggest that our universe might be some kind of generalised Möbius strip. In the short story "A Subway Named Möbius" by A. J. Deutsch, the Boston subway authority builds a new line; the system becomes so tangled that it turns into a Möbius strip, and trains start to disappear.

There have been technical applications: giant Möbius strips have been used as conveyor belts that last longer because the entire surface area of the belt gets the same amount of wear, and as continuous-loop recording tapes (to double the playing time). A device called a Möbius resistor is a recently discovered electronic circuit element which has the property of cancelling its own inductive reactance. Nikola Tesla patented similar technology in 1894: US patent 512,340, "Coil for Electro Magnets," was intended for use with his system of global transmission of electricity without wires.
http://www.fact-archive.com/encyclopedia/M%F6bius_strip
This material may be copied only for noncommercial classroom teaching purposes, and only if this source is clearly cited.

Human Evolution Patterns

Students describe, measure and compare cranial casts from contemporary apes (chimpanzees and gorillas, typically), modern humans and fossil "hominins" (erect and bipedal forms evolutionarily separated from apes). ("Hominid" is the newer collective term for African apes and humans.) The purpose of the activity is for students to discover for themselves what some of the similarities and differences are that exist between these forms, and to see the pattern of the gradual accumulation of traits over time, leading to modern humans.

Documenting similarities and differences between species is fundamental to understanding their biological and evolutionary relationships.

Associated Concepts:
1. When used in conjunction with certain other lessons (see extensions below), illustrates the compelling power of multiple independent lines of evidence as a tool for selecting the "best explanation" in the process of science.
2. Transitional forms in an evolutionary sequence are generally mosaic; some traits evolve more rapidly than others.
3. Modern humans have not evolved from modern apes: both have evolved from a common ancestor.

Students will be able to:
1. handle and read the measuring instruments.
2. identify the appropriate skeletal and dental features, and
3. describe features of a given specimen as either similar to, different from or the same as those present in another specimen.
4. recognize the sequence pattern in which several human skull features appeared over time.
5. (Optional) summarize and graph measurement data of the cranial specimens.
6. (Optional) construct and justify a taxonomic classification of the specimens.

Materials:
1. Plastic casts of modern apes, humans and fossil hominins. Preferably two chimpanzee or gorilla specimens; male and female are ideal. A modern human skull may be available from the skeleton standing in the corner of your lab, but, if not, the 25,000-year-old Predmost fossil cast will serve as well.
2. Plastic casts of a Neandertal (the 50-60,000-year-old La Chapelle is commonly available), a 450,000-year-old Homo erectus ("Peking" is the most widely available and least expensive) and an australopithecine (preferably the "robust" 1.8-million-year-old Olduvai number 5, or "Zinjanthropus") as an example of an early hominin form that shows a "mosaic" mixture of ape-like and human-like features. These casts are commercially available, primarily from Carolina Biological Supply and Ward's, although you may find other sources as well. Check our SKULLS: PRICE COMPARISON chart for online addresses and prices for recommended skulls. There is also a recommended "PREFERENCE" column on the table, indicating 5 skulls for a basic set, then a prioritizing of additional skulls for future enhancement if/when funds are available. Drawings of specimens may also be used but are not nearly as good as the actual skull replicas. A set of 7 drawings of hominid profiles can be downloaded from this site. Just click here for hominid drawings. In order to accommodate a greater number of measurements, we have added a collection of 28 HOMINID PHOTOS: 4 views each of 7 skulls (front, top, right side, and an under-view of each skull). In fact, an excellent craniometry-focused version of this exercise can be found in an online article in the NABT journal The American Biology Teacher for March 2007: Investigating Human Evolution Using Digital Imaging & Craniometry by John C. Robertson. This is a 5-page pdf article, providing the index (ratio) formulas to use for various dimensions, thereby eliminating the need for the skulls to all be at the same scale. The digital photos provided on the ENSI site (HOMINID PHOTOS) would be excellent ready-to-go material to use for this work.
3. Sliding calipers or hinge calipers and rulers with metric scales. Extremely inexpensive plastic sliding calipers may be purchased at hardware or arts & crafts stores. Hinge calipers can be made out of cut-out cardboard (hinged with snap clips; worn tips can be strengthened with white glue), or plastic or masonite (hinged with a small bolt & nut). Click here for a template for cardboard or plastic calipers.
4. Carpet squares, foam pads, or similar table padding on which to set the casts for each student group or "Skull Station".

Time: One to two 45-55 minute periods, depending on the amount of analysis you want to have the students engage in. (Some or all of the student analysis can be done outside of the classroom as homework.)

See the attached Hominid Cranium Comparison Checklist. Worksheets are ruled notebook paper with hand-drawn columns corresponding to each specimen. As an alternative, consider using formatted handouts created by other teachers; see item #7 under the Extensions & Variations section of this lesson.

This lab activity may be done in conjunction with units on either taxonomic classification, interpreting the fossil record, comparative anatomy of skeletal features, or human biological attributes. It may be desirable, but it is certainly not necessary, to have dealt with basic ape and human biological and behavioral attributes. The teacher's emphasis should be on how well humans can be used as evidence to support the idea that modern species are evolutionarily related to one another and descended from now-extinct non-modern forms. (See the sample Human Evolution Unit Outline, offering one workable sequence of topics which includes a lesson like this one, and has worked well as an early introduction to a unit on evolution.)

Procedure:
1. Have students work in groups of 3-5 since there are typically fewer cranial casts than students available.
2. Students may either work in stationary groups (in which case the specimens are passed from one group to another) or in groups that move from one "Skull Station" to another.
3. Each student should have a copy of the Hominid Cranium Comparison Checklist because the details of each measurement and observation are spelled out on it. Each student should also have her/his own data worksheet for recording descriptions and measurements.
4. Have all students label the columns on their data worksheets with the names of each specimen. Have them simply number the left hand edge of the worksheet 1 through 18 to correspond to the 18 items on the checklist. (This will put all entries for a single checklist item on the same line across the page to facilitate comparisons.)
5. Have students take turns being responsible for the 18 items on the Checklist in order to keep everyone involved as much as possible.
6. Remind students to record all measurements in millimeters (not inches).
7. Ask students to support each specimen in the palms of their hands and not like bowling balls with their fingers stuck into the eye orbits and nasal cavity!
8. After the students have measured and described the specimens, have them determine and describe the patterns represented by their findings. This can be done in a variety of ways:
- a) simply list those features that all of the specimens have in common;
- b) identify those features which are most useful for distinguishing between the specimens;
- c) describe the changes that occur in only the hominin crania over time;
- d) plot their data on graphs using the geological dates listed above (in Materials). Consider doing the Chronology Lab, where a more complete plotting of hominin ages can be done (see link under Extensions and Variations below).
9. Following their efforts to summarily describe the patterns they perceive in the specimens, engage the students in a discussion and/or consideration of the evolutionary significance and adaptive benefits of the changes they have just described in the hominin crania:
- a. Why do you think the canine tooth reduced in size so much from earlier to later hominins? (Ans.: the grasping function of long canines was replaced by easy use of the hands, associated with bipedalism.)
- b. Why do you think the face flattens over time in hominins? (Ans.: similar reason as for item a.)
- c. How does the position of the foramen magnum relate to the body posture and locomotor pattern of the animal? (Ans.: more forward and under the skull, associated with the erect posture of bipedalism; the skull balances on top of the spinal column. With the semi-erect posture of apes, the foramen magnum is located more to the rear of the skull.)
- d. What areas or portions of the braincase enlarge first and which ones enlarge later in the hominins? (Ans.: the rear portion enlarges first; the top and forward portions enlarge later.)
- e. What behavioral and cognitive functions are associated with these cerebral areas? (Ans.: associated with "higher" rational behavior.)
- f. Have we really lost the browridge? (Ans.: not really; the forehead rises directly above the browridge, enclosing the much enlarged frontal lobes.)

Assessment:
1. Teacher observation: are the assessable objectives being met?
2. Teacher-constructed test, based on the observations made and pointed out, and on the discussion which follows.
3. Do students recognize some of the patterns revealed? Do they see how hominins have changed over time?

Extensions & Variations:
1. Have students plot a chronology of hominin existence, based on the age-ranges of the different hominin species (see the mini-lesson on this site).
2. Arrange the skull casts in a row, oldest on the left (as viewed by students). Be sure to put the modern ape skulls on the class' far right, at the same end as modern humans, since they are both modern. This is an excellent time to get students to see that humans did NOT evolve from apes, but rather apes and humans evolved from some common ancestor which was neither ape nor human, but probably more apelike, due to its more likely primitive semi-erect posture. Ask students to point out any general changes or trends they see, from left to right. Ask which skulls look most "primitive", and which most modern. Point out (or get students to express) how the sequence of skulls relates to the chronology which they built (or which you can reveal to them).
3. Hopefully, students will see a mix of "ape-like" and modern human features in the skulls. If so, do they see the trend from fewer modern human features in the earlier specimens to more modern human features in the later specimens? This is a good opportunity to point out that such a changing mosaic of traits is typical (and expected) as we trace a group through time, and thereby reflects the gradual accumulation of traits as expected in evolution by natural selection.
4.
Can students see that the appearance of human traits was apparently a gradual process, not sudden? 5. This lesson provides a basic experience revealing anatomical indications that we have evolved. It would be highly beneficial for students to also do the lessons which compare CHROMOSOMES, PROTEINS, and DNA, all indicating a similar trend, and collectively showing an excellent example of the power of MULTIPLE INDEPENDENT LINES OF EVIDENCE (or MILEs) all pointing in the same direction: that humans have evolved. 6. An interesting extension of this lab is to explore what we can tell from the bones, such things as likely age, sex, size, race, appearance, health, etc. If possible, have a forensic pathologist from a local crime lab or a physical anthropologist from a local university talk to your students, and point out the clues they look for to answer those questions, and the degrees of confidence they have in those clues. An alternative is to have an enterprising student search the internet for that information, and report to the class. 7. Consider using one of the sets of handout materials developed and used successfully by other ENSI teachers. These may give you a little more "structure", which is often helpful when embarking upon territory which is somewhat new to you. To see and download these materials (in PDF format), just click on one of the following sets: The first set was developed by Jo Ann Lane, in Cleveland, Ohio. This is the material which she presented at the 1998 NABT convention in Reno. The second set of materials was developed and used successfully for several years by webmaster Larry Flammer, in San Jose, California. Two other variations can be found, as presented by Mari Knutson (of Lynden, WA), and Dorothy Reardon (of Carmichael, CA) at the 1999 NABT convention in Ft. Worth, Texas. 8. A very useful extension and/or alternative to the Skulls Lab is the approach developed by Jeremy DeSilva at the Boston Museum of Natural History: HUMAN EVOLUTION: INTERPRETING THE EVIDENCE. This was featured in the American Biology Teacher journal, April 2004. It is structured around the comparison of three different interpretations by 3 different anthropologists in how known hominin fossils are related to each other. Students become involved in reviewing their criteria and assumptions, and defending their own interpretations. An excellent experience in the process of science, including uncertainty, bias, assumptions, and controversy amongst scientists. Website includes full text of article and diagrams (3 full page provisional phylogenies, easily compared as transparencies, or handouts for students.) 9. Be sure to bring in the excellent materials about Ardipithecus ramidus that were published in the AAAS journal Science (2 October 2009). These extensive fossils take human evolution back to 4.4 mya, providing insight into its anatomy, environment and behavior. They show that Ardi was bipedal, yet had a grasping big toe. Raises possibility that common ancestor to apes and humans was perhaps bipedal, and that apes secondarily evolved knuckle-walking while humans became more efficient at bipedalism! 10. For another possible alternative see the article: "Were Australopithecines Ape-Human Intermediates, or Just Apes?" by Phil Senter (The American Biology Teacher, vol.72, no.2, Feb, 2010, pp. 70-76). Students compare skeletal features of Lucy, chimp, and human (diagrams provided). 
Results show that a representative of the Australopithecine grade (Lucy) has too many anatomical traits in common with humans - and in contrast with apes - to support a conclusion that the creature is a mere ape. Instead, Lucy appears to be a clear intermediate. One caution here is the need to make clear: the evidence shows that Lucy did not evolve from a modern chimpanzee, and did not (likely) give rise to modern humans. There is also the questionable assumption that the common ancestor of chimps and humans was chimp-like rather than human-like. The analysis ramidus tends to raise some doubts here. Access to this article online may be blocked to non-NABT members during 2010. If you can't get the article, contact the webmaster. Nickels, Martin. 1987. "Human evolution: a challenge for biology teachers". The American Biology Teacher 49 (3): 143-148. (March, 1987). (This article, by the creator of this activity, provides more detailed information regarding the skeletal features, measurements and observations on the Checklist. Martin Nickels is the anthropologist member of the ENSI directorship). The Human Evolution Coloring Book by Adrienne Zihlman (1981) is a good source for illustrations to help students learn anatomical terms and see comparisons. It's available currently for a nominal cost from Amazon.com, Barnes & Noble or Borders Books. A sample of two pages (99 and 100), modified to clarify certain dimensions, can be downloaded (in PDF format) from this ENSIweb site. They can be accessed from the bottom of the sample Human Evolution Unit offered by Larry Flammer. Just click on this to get them. Go to the RESOURCES section on this site, click on HUMAN EVOLUTION for additional links to excellent photos, descriptions and analyses for students to use. Some of the ideas in this lesson may have been adapted from earlier, unacknowledged sources without our knowledge. If the reader believes this to be the case, please let us know, and appropriate corrections will be made. Thanks. Original source: "This cranium comparison lab activity is the most recent version of one I began using at least as early as 1985. I have presented various versions and modifications of it at several workshops for high school biology teachers at regional and district meetings, national meetings of the National Association of Biology Teachers, every Evolution and Nature of Science (ENSI) summer institute (1989-1994) and some Satellite Evolution and Nature of Science (SENSI) sessions." M. Nickels. Modifications: "I know many teachers I have worked with over the years have adopted and modified different earlier versions of this activity for their own purposes, but I think this version is more applicable by more teachers than any of my earlier ones. It is simpler because it does not include the mandible (lower jaw) as part of the activity and restricts the use of hinge calipers (which are less readily available than sliding calipers and rulers with metric scales)." M. Nickels. Content of "Associated Concepts", "Assessment", and "Extensions & Variations" was added by Larry Flammer, 7/98. Updated 11/99; again 2/03. The following is a useful worksheet for students to complete while doing the lab, to help focus and direct their study. A pdf version follows, for easier printout (requires Adobe Reader...free download). 1. Work in groups of 3-4 students so that everyone can be involved in the activity. 2. BE SURE (!) TO TAKE TURNS doing different measurements and observations. 3. 
When taking a measurement, use the SLIDING CALIPERS (except for #11 & #12 which may require the HINGE calipers) and remember to... 4. ALWAYS MEASURE IN MILLIMETERS [mm] and round off to whole numbers. 5. PLEASE DO NOT ADD ANY PENCIL OR PEN MARK "TATTOOS" TO THESE CRANIA, OR STICK YOUR FINGERS IN THEIR EYE ORBITS OR NOSES! 1. Does the FOREHEAD (frontal bone) look more vertical OR flatter when the skull is held in normal anatomical position [NAP] (i.e., with the eyes oriented forward)? 2. Is a SUPRAORBITAL BROWRIDGE present? 3. If present, is the BROWRIDGE DIVIDED in the middle, or CONTINUOUS? 4. What is the SHAPE OF THE BRAINCASE (front to back) when viewed from above? 5. Is a SAGITTAL CREST present? 6. In NAP, is the FORAMEN MAGNUM oriented more downward OR more to the rear? 7. Is the MASTOID process relatively flat OR does it noticeably protrude (project)? 8. Are the NASAL BONES raised (arched) OR flat? 9. Measure the MAXIMUM BREADTH (width) of the NASAL OPENING [mm]. 10. Measure the MAXIMUM HEIGHT of the NASAL OPENING [mm]. 11. Measure the LENGTH of the MAXILLA (the upper jaw) [mm]. (Measure down the middle of the palate from the front edge of the foramen magnum to either between or just in front of the two central incisors to determine how much the face projects forward.) 12. Measure the BIZYGOMATIC BREADTH using the hinge caliper if necessary [mm]. (This is the width or breadth of the face from the widest part of one zygomatic arch to the widest part of the other zygomatic arch.) 13. SHAPE OF THE DENTAL ARCADE: Do the tooth rows diverge towards the back OR are they more straight-sided and parallel to one another? 14. When viewed from the side, are the INCISORS angled out OR are they vertical? 15. Measure the COMBINED WIDTH or BREADTH of the 4 INCISORS together. 16. Does the CANINE tooth project above the chewing surfaces of the other teeth? 17. Is a CANINE DIASTEMA present? 18. Measure the COMBINED LENGTH of the LEFT 2 PREMOLARS and 3 MOLARS together by measuring from the back of the last molar to the front of the first premolar to determine the length of the chewing surface of the "cheek teeth". [mm]. (NOTE: Measure the right side if the left side is missing any of these 5 teeth.)
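If student groups type their measurements into a computer, the "index" (ratio) approach mentioned under Materials can be scripted in a few lines. The sketch below is illustrative only: the measurement values are invented placeholders, and the two indices shown are generic examples of scale-free comparisons, not the specific formulas from the 5-page article cited above.

```python
# Illustrative only: the values below are made-up placeholders, and the two
# ratios are generic examples of "index" style comparisons; they are not the
# specific formulas from the 5-page PDF cited in Materials.

# Checklist items referenced: #9 nasal breadth, #10 nasal height,
# #11 maxilla length, #12 bizygomatic breadth (all in millimeters).
measurements = {
    "Pan troglodytes (chimp)": {"nasal_breadth": 27, "nasal_height": 33, "maxilla_length": 88, "bizygomatic_breadth": 125},
    "Australopithecus":        {"nasal_breadth": 26, "nasal_height": 32, "maxilla_length": 80, "bizygomatic_breadth": 130},
    "Homo erectus":            {"nasal_breadth": 28, "nasal_height": 36, "maxilla_length": 70, "bizygomatic_breadth": 140},
    "Homo sapiens (modern)":   {"nasal_breadth": 24, "nasal_height": 34, "maxilla_length": 55, "bizygomatic_breadth": 130},
}

def nasal_index(m):
    """Nasal opening breadth as a percentage of its height (item 9 / item 10)."""
    return 100.0 * m["nasal_breadth"] / m["nasal_height"]

def facial_projection_index(m):
    """Maxilla length relative to facial breadth (item 11 / item 12); a rough,
    scale-free way to compare how far the face projects forward."""
    return 100.0 * m["maxilla_length"] / m["bizygomatic_breadth"]

print(f"{'Specimen':28s} {'Nasal index':>12s} {'Projection index':>17s}")
for name, m in measurements.items():
    print(f"{name:28s} {nasal_index(m):12.1f} {facial_projection_index(m):17.1f}")
```

Because each index divides one measurement by another taken on the same specimen, the comparison works even if the casts or photos are not all at the same scale, which is the point of the ratio approach described in Materials.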
http://www.indiana.edu/~ensiweb/lessons/hom.cran.html
Note: Teacher's notes are in red. This activity focuses on some of the implications of the dramatic increase in world population and on the unequal distribution of resources among six regions of the world. Students will consider their perceptions of world regions and then use data to examine the quality of life in those regions.

Find the key to a balanced population. In the past 40 years, world population has nearly doubled. By 1999, it had reached six billion people. All those people need resources, such as drinkable water, food, and places to live. How can people learn to practice sustainable use of resources so that future generations will have enough to eat, clean water to drink, and comfortable homes? One key is to realize the imbalance in the distribution of resources and the increase in population in different regions of the world.

Time: Two class periods
Subjects: Geography, science, math, art
Relevant U.S. National Geography Standards: 1, 16, 18
Materials: Color markers and pencils; magazines with art or photographs of people, fresh water, and cropland; (optional) Millennium in Maps: Population supplement map from the October 1998 NATIONAL GEOGRAPHIC magazine; Population & Resources cards (you'll need Adobe Acrobat Reader to download the cards)

Regions and Resources
On the blackboard write region, population, population growth, and resources. Ask students to define the words. Write their responses on the board. (You can find a population glossary at the Population Reference Bureau Web site: http://www.prb.org/news/glossary.htm.) Ask students such questions as: In what region do you live? What regions are nearby? What are some unifying characteristics of these regions? How would you define a region? Does it matter what a region's population is? Why or why not? In what ways does population growth impact a country? In what ways might population growth impact you in the future? What are resources? Why does population matter when countries are exporting and importing resources? Should we be concerned about the lack of resources in our region of the world? Why or why not? Resources mean different things to different people. What do students think that statement means?

When you think of Europe, one of the world's regions, what comes to mind? A Paris café, Swiss ski slopes, a gondola in Venice? What does Africa, another world region, call to mind? Perhaps the snowy cap on Mount Kilimanjaro or a pride of lions lying on the savanna. The quality of life in these regions and throughout the world depends in part on the balance between population and the availability of resources. To give students food for thought, direct them to nationalgeographic.com's Feeding the Planet: http://magma.nationalgeographic.com/2000/population/

Students will research four indicators of the quality of life in six regions of the world. Their results will show tangible evidence of population increases and the unequal distribution of resources. Use a geographic focus to look at six regions of the world identified by the United Nations: the United States and Canada, Europe, Africa, Latin America and the Caribbean, Asia, and Oceania. Divide the class into groups of at least six students. Distribute the Population and Resources cards so that each student in a group has a card from a different region. Students should gather data about these six regions in one of two ways: From the Millennium in Maps: Population map. Have students interpret data from side 2 of the world map.
Each student should count four Regional Indicators (population increase, income per capita, fresh water availability, and cropland) for his or her region to determine correct quantities (rounding up or down to whole numbers). Or, from population data from 1998: have students enter their data on their cards.

Divide up into teams of at least six students, one for each of these six regions. Each of you will then compare four aspects of life (population increase; income per capita; availability of fresh water; and the amount of cropland) in your chosen (or assigned) region. You and your teammates will gather data either from a map, or from other materials your teacher will give you. Gather your data. Then enter the statistics for your region on the Population & Resources card your teacher has given you.

Have each student make a bar graph of his or her region based on the data from one of the sources listed above. Each student should make the same type of graph. Collaborate with a math teacher in helping students graph the data. Students can create graphs by hand with graph paper or they can use a graphing program such as Microsoft Excel or Appleworks (or a short script; see the sketch following this section). Students can work individually or in their groups. Make a bar chart of the data you gathered for your region.

The Best Place, Bar None
In their groups, students should refer to the completed bar charts to make inferences on what it might be like to live in another region of the world in terms of population and resources. Within your group, compare the six bar charts. What is the annual rate of population increase in each region? What is the per capita annual income in each region? How much fresh water does one person use in a year in each region? How much cropland per person exists in each region? As a group, in which region would you most like to live? Ask one spokesperson from each group to tell the class how the group determined which region they would like to live in and why. Students should use geographic terms. Pick one person from your group to tell the class where your group would like to live, and why you chose that region. Find out how other groups voted. Pose the same questions to the class that you asked at the beginning of the activity (see Regions and Resources).

Bar Charts and Beyond: Information Is Key
Ask: Did creating the bar charts help you better understand the unequal distribution of resources around the world? Have each group create an eye-catching display of the data in a way that's easily understood. Geographers constantly struggle to find the best ways to display data. Charts, graphs, maps, and other visual displays help geographers report information from a spatial perspective. Each group should draw a large world map, and, using that as their starting point, display the data for each of the six regions. Encourage students to be inventive. You may want to ask an art teacher for ideas. Your group should draw a large world map. Using this map and the data from your bar charts, your group will create a display to help others understand the worldwide balancing act between people and resources. You can go to Web sites under Other Related Web Sites to find additional information to add to your display. Be creative: people are more likely to read your chart if it catches their eye.
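For classes that build the graphs on a computer rather than on graph paper, the chart can also be produced with a short script (this is the sketch referred to above). It is a hedged example only: it assumes Python with the matplotlib package is available, and the numbers are placeholders rather than the actual 1998 data.

```python
# A minimal, hypothetical example: the numbers below are placeholders, not the
# figures from the Millennium in Maps supplement or the 1998 data sheets.
import matplotlib.pyplot as plt

region = "Latin America and the Caribbean"
indicators = ["Population increase (%/yr)", "Income per capita ($1000s)",
              "Fresh water use (1000 m3/person/yr)", "Cropland (ha/person)"]
values = [1.6, 6.5, 0.5, 0.3]   # placeholder values for illustration only

plt.bar(indicators, values)
plt.title(f"Quality-of-life indicators: {region}")
plt.ylabel("Value (mixed units; see bar labels)")
plt.xticks(rotation=20, ha="right")
plt.tight_layout()
plt.savefig("region_indicators.png")   # or plt.show() in an interactive session
```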
The display should include: TODALSS map elements (Title, Orientation, Data, Author, Legend, Scale, Source); bar charts of data for all six regions (use the bar charts you've made or make new charts); and photographs, drawings, or symbols representing the four indicators: population increase, income per capita, fresh water availability, and cropland.

Start at home . . . Donate your time or goods to the Red Cross, which supplies communities with necessary supplies during national disasters. Conduct a food or clothing drive. Learn about your watershed: keeping the water clean and using it wisely. Look at the Fresh Water 9-12 activity for a list of Web sites focused on watersheds and keeping yours clean.

. . . and go beyond. To learn about environmental emergencies worldwide and to take action, check out the Eco-Club Action Web site. Learn about groups that provide needed assistance around the world, such as the International Committee of the Red Cross.

Adapted from Millennium in Maps: Population lesson plan. Copyright © 1998 National Geographic Society. © 2000 National Geographic Society. All rights reserved.
http://www.nationalgeographic.com/gaw/pop/pop_912_teacher.html
A wind profiler is a type of weather observing equipment that uses radar or sound waves (SODAR) to detect the wind speed and direction at various elevations above the ground. Readings are made at each kilometer above sea level, up to the extent of the troposphere (i.e., between 8 and 17 km above mean sea level). Above this level there is inadequate water vapor present to produce a radar "bounce." The data synthesized from wind direction and speed are very useful for meteorological forecasting and timely reporting for flight planning. A twelve-hour history of data is available through NOAA websites.

In a typical implementation, the radar or sodar can sample along each of five beams: one is aimed vertically to measure vertical velocity, and four are tilted off vertical and oriented orthogonal to one another to measure the horizontal components of the air's motion. A profiler's ability to measure winds is based on the assumption that the turbulent eddies that induce scattering are carried along by the mean wind. The energy scattered by these eddies and received by the profiler is orders of magnitude smaller than the energy transmitted. However, if sufficient samples can be obtained, the amplitude of the energy scattered by these eddies can be clearly identified above the background noise level, and the mean wind speed and direction within the volume being sampled can be determined. The radial components measured by the tilted beams are the vector sum of the horizontal motion of the air toward or away from the radar and any vertical motion present in the beam. Using appropriate trigonometry, the three-dimensional meteorological velocity components (u, v, w) and the wind speed and wind direction are calculated from the radial velocities, with corrections for vertical motions.

Radar wind profiler

Pulse-Doppler radar wind profilers operate using electromagnetic (EM) signals to remotely sense winds aloft. The radar transmits an electromagnetic pulse along each of the antenna's pointing directions. A UHF profiler includes subsystems to control the radar's transmitter, receiver, signal processing, and Radio Acoustic Sounding System (RASS), if provided, as well as data telemetry and remote control. The duration of the transmission determines the length of the pulse emitted by the antenna, which in turn corresponds to the volume of air illuminated (in electrical terms) by the radar beam. Small amounts of the transmitted energy are scattered back (referred to as backscattering) toward and received by the radar. Delays of fixed intervals are built into the data processing system so that the radar receives scattered energy from discrete altitudes, referred to as range gates. The Doppler frequency shift of the backscattered energy is determined, and then used to calculate the velocity of the air toward or away from the radar along each beam as a function of altitude. The source of the backscattered energy (radar "targets") is small-scale turbulent fluctuations that induce irregularities in the radio refractive index of the atmosphere. The radar is most sensitive to scattering by turbulent eddies whose spatial scale is ½ the wavelength of the radar, or approximately 16 centimeters (cm) for a UHF profiler. A boundary-layer radar wind profiler can be configured to compute averaged wind profiles for periods ranging from a few minutes to an hour. Boundary-layer radar wind profilers are often configured to sample in more than one mode.
For example, in a "low mode," the pulse of energy transmitted by the profiler may be 60 m in length. The pulse length determines the depth of the column of air being sampled and thus the vertical resolution of the data. In a "high mode," the pulse length is increased, usually to 100 m or greater. The longer pulse length means that more energy is being transmitted for each sample, which improves the signal-to-noise ratio (SNR) of the data. Using a longer pulse length increases the depth of the sample volume and thus decreases the vertical resolution in the data. The greater energy output of the high mode increases the maximum altitude to which the radar wind profiler can sample, but at the expense of coarser vertical resolution and an increase in the altitude at which the first winds are measured. When radar wind profilers are operated in multiple modes, the data are often combined into a single overlapping data set to simplify postprocessing and data validation procedures.

Sodar wind profiler

Alternatively, a wind profiler may use sound waves to measure wind speed at various heights above the ground, as well as the thermodynamic structure of the lower layer of the atmosphere. These sodars can be divided into mono-static systems, which use the same antenna for transmitting and receiving, and bi-static systems, which use separate antennas. The difference between the two antenna systems determines whether atmospheric scattering is by temperature fluctuations (in mono-static systems), or by both temperature and wind velocity fluctuations (in bi-static systems). Mono-static antenna systems can be divided further into two categories: those using multiple-axis, individual antennas and those using a single phased array antenna. The multiple-axis systems generally use three individual antennas aimed in specific directions to steer the acoustic beam. One antenna is generally aimed vertically, and the other two are tilted slightly from the vertical at an orthogonal angle. Each of the individual antennas may use a single transducer focused into a parabolic reflector to form a parabolic loudspeaker, or an array of speaker drivers and horns (transducers) all transmitting in phase to form a single beam. Both the tilt angle from the vertical and the azimuth angle of each antenna are fixed when the system is set up. The vertical range of sodars is approximately 0.2 to 2 kilometers (km) and is a function of frequency, power output, atmospheric stability, turbulence, and, most importantly, the noise environment in which a sodar is operated. Operating frequencies range from less than 1000 Hz to over 4000 Hz, with power levels up to several hundred watts. Due to the attenuation characteristics of the atmosphere, high-power, lower-frequency sodars will generally produce greater height coverage. Some sodars can be operated in different modes to better match vertical resolution and range to the application; as with radar profilers, this is accomplished through a trade-off between pulse length and maximum altitude.

This article incorporates public domain material from the United States Government document "Meteorological Monitoring Guidance for Regulatory Modeling Applications".

- Official NOAA wind profiler search page. See real time (and 12 hour history) graphic displays of wind direction and speed from ground level up to 17 KM above sea level (at 1 KM intervals). Click on any star or dot, then click on "get plot" at left.
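To make the beam trigonometry described above concrete, here is a minimal sketch of how the horizontal components could be recovered from one vertical and two tilted beams. It is not operational code from NOAA or any instrument vendor; it assumes the vertical motion is uniform across the beams, uses an illustrative 15° tilt, and omits the averaging of opposing tilted beams that real five-beam systems typically perform.

```python
# Sketch of the beam geometry described above (not any agency's operational code).
# Assumptions: a vertical beam measuring w directly, plus beams tilted by zenith
# angle theta toward east and north; radial velocity is positive away from the
# radar; vertical motion is assumed uniform across the sampled beams.
import math

def wind_components(vr_east, vr_north, vr_vertical, tilt_deg=15.0):
    """Recover (u, v, w) in m/s from three radial velocities (m/s)."""
    t = math.radians(tilt_deg)
    w = vr_vertical                                   # vertical beam looks straight up
    u = (vr_east - w * math.cos(t)) / math.sin(t)     # east-west component
    v = (vr_north - w * math.cos(t)) / math.sin(t)    # north-south component
    return u, v, w

def speed_and_direction(u, v):
    """Meteorological wind direction: the direction the wind blows FROM."""
    speed = math.hypot(u, v)
    direction = math.degrees(math.atan2(-u, -v)) % 360.0
    return speed, direction

u, v, w = wind_components(vr_east=2.1, vr_north=-1.4, vr_vertical=0.05)
print(speed_and_direction(u, v))
```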
http://en.wikipedia.org/wiki/Wind_profiler
NASA estimates that around 4,700 asteroids are close enough, and big enough, to pose a risk to Earth. According to CNN, that number, give or take about 1,500, is "how many space rocks that are bigger than 100 meters (330 feet) across and are believed to come within 5 million miles (8 million km) of Earth, or about 20 times farther away than the moon." These asteroids would also be large enough to survive passing through Earth's atmosphere. Amy Mainzer, an astronomer at NASA's Jet Propulsion Laboratory in California, told CNN, "It's not something that people should panic about. However, we are paying attention to the issue." NASA said a 40-meter asteroid would strike the Earth with an impact comparable to a 3-megaton nuclear bomb, according to CNN. A 2-km asteroid striking Earth "would produce severe environmental damage on a global scale," the space agency estimated, but an impact of that magnitude isn't likely to occur more than twice per million years. NASA used the Wide-field Infrared Survey Explorer (WISE) as part of the NEOWISE project to find the asteroids. From there, scientists estimated how many more are actually out there. Of the project, Lindley Johnson of the Near-Earth Object Observation Program at NASA Headquarters in Washington told UPI, "The NEOWISE analysis shows us we've made a good start at finding those objects that truly represent an impact hazard to Earth. But we've many more to find, and it will take a concerted effort during the next couple of decades to find all of them that could do serious damage or be a mission destination in the future."
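The 3-megaton comparison can be sanity-checked with a back-of-the-envelope kinetic-energy estimate. The sketch below is not NASA's calculation; the density and impact speed are assumptions chosen only to show that the quoted figure is the right order of magnitude.

```python
# Back-of-the-envelope check of the "3-megaton" figure quoted above.
# All inputs are assumptions, not NASA's: a stony asteroid of density
# ~3000 kg/m^3 hitting at ~17 km/s; actual impact energies vary widely.
import math

diameter_m = 40.0
density_kg_m3 = 3000.0
velocity_m_s = 17_000.0

radius = diameter_m / 2
mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius**3   # roughly 1e8 kg
kinetic_energy_j = 0.5 * mass * velocity_m_s**2

J_PER_MEGATON_TNT = 4.184e15
print(f"~{kinetic_energy_j / J_PER_MEGATON_TNT:.1f} megatons TNT equivalent")
# Prints roughly 3-4 Mt, consistent with the order of magnitude NASA cites.
```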
http://www.globalpost.com/dispatch/news/science/120517/nasa-finds-4700-potentially-dangerous-asteroids
Updated 02 Feb 2012 This page is for those who are just starting out in astronomy and telescopes. It will provide a basic introduction to telescope All telescopes use either a lens or a mirror (some use both) to gather incoming light and to form an image from that light. The telescope's eyepiece takes the image formed by the lens (or mirror) and magnifies it to a larger size so that the human eye can see more details in the image. If you look at a typical eyepiece you will almost certainly see some numbers and possibly some letters printed on the eyepiece. For example, you might see the marking "25mm" or "7.5mm" (or any other number of values ranging from around 4mm to possibly 60mm). This numeric value is known as the focal length of the eyepiece (measured in millimeters). This is probably the most important characteristic of an eyepiece because it allows you to calculate how much magnification the eyepiece will provide. The actual magnification an eyepiece provides depends on the focal length of the telescope. You may see other letters or markings on the eyepiece (such as "H", "SR", "Pl", etc), these indicate the type of eyepiece (we will discuss these in an upcoming section). For now, the thing to take away from this section that the number printed on the eyepiece is the focal length and understand that it is used to calculate the magnification the eyepiece will provide. To determine the magnification that an eyepiece provides, two pieces of information are needed. One is the focal length of the eyepiece and the second is the focal length of the telescope in which the eyepiece will be used. Almost every telescope ever made will have the focal length marked on it, usually near where the eyepiece goes into the scope. Focal lengths for typical beginner telescopes will be in the range from around 500mm to 1200mm. To determine the magnification that a particular eyepiece provides, a simple calculation is done: divide the focal length of the telescope by the focal length of the eyepiece. Let's take an example. Suppose you have a telescope with a focal length of 700mm and an eyepiece of focal length 25mm. The magnification that the eyepiece provides in this telescope will be 700/25 = 28x (often called "28 power"). Now let's take the same scope but use a second eyepiece of focal length 7.5mm. The magnification provided by this eyepiece will be 700/7.5 = 93.3x. Magnifications for other telescope/eyepiece combinations are calculated in the same manner. One thing to note from this: eyepieces with smaller focal lengths produce larger magnifications with any given telescope! The image below shows the results of using appropriate magnification and excessive magnification. The leftmost two images of Saturn are representative of what you might expect to see at low and high power respectively (in a typical entry level scope). The image at right shows what happens when magnification is pushed to excess. The image is bigger to be sure, however the clarity is terrible, the image will be very shaky and much dimmer. No additional detail can be seen beyond a certain point! The image at right is typical of what you might see in an entry level telescope that claims "675x magnification". The most magnification you will normally use is about 50x per inch of lens (or mirror) diameter. For a 3" scope this would be about 150x. Remember, most of your observing will be done at LOW magnification! Representative views of Saturn at low, high and excessive magnification! Yes. 
Here we are talking about physical size of the eyepiece barrel (the silver part that inserts into the eyepiece holder), not the focal length. There are 3 standard sizes of eyepieces in use for amateur telescopes. They are .965", 1.25" and 2". That said, it should be noted that the .965" size is an older obsolete size that is still being used on some entry level scopes. The 1.25" size is the most commonly used eyepiece size for amateur telescopes. The 2" size is also common on amateur scopes but this size is generally found on larger, more advanced telescopes. Most beginning astronomers will not need to be concerned with 2" eyepieces (a single 2" eyepiece can be as costly as an entire entry level telescope)! The photo below shows the relative sizes of the three standard sizes of eyepieces along with a soda can for reference. Note that the 2" eyepiece shown here is not too much smaller than a soda can! 2", 1.25" and .965" diameter eyepieces with a soda can for reference. All other things being equal, no. The quality of an eyepiece is not related to its size (good quality eyepieces can be made in any of the three sizes). That said, it should be noted that many of the eyepieces that come in the .965" size are not of the best quality. This is not a result of the size but more a result of making things less costly. Yes. When we say "type" of eyepiece we refer to the optical design of the eyepiece. There are many optical designs used in various eyepieces, some use more glass elements than others to achieve different characteristics. Plossl eyepieces (an excellent all around performer) often come with the better entry level telescopes. Unfortunately some entry level telescopes come with eyepiece designs that are not as good. These typically include the Huygens (marked with an "H") and Symmetric Ramsden (typically marked "SR"). These are common designs in entry level telescopes as they are inexpensive to make (however their performance is often not so good compared to the better Plossl design). There are two specifications regarding field of view when speaking of eyepieces. One is known as "apparent field of view" and the other is "actual field of view". Both are measured in degrees. Apparent field of view is constant for any given eyepiece and telescope combination. Actual field of view refers to how much actual sky you can see at any one time (and this will vary depending on what telescope the eyepiece is used in). So what do these two terms really mean? Apparent field of view can be thought of as "how big a window am I looking through". The wider the apparent field of view the more area of sky you will see. Eyepieces come with apparent fields of view ranging from around 30 degrees (quite narrow) to over 80 degrees (extremely wide), with 40 to 50 degrees being very common (and totally adequate for entry level eyepieces). Eyepieces with apparent fields of view in the 60 degree plus range generally cost a considerable amount (several hundred dollars). Although often excellent performers, they are not included as standard equipment when buying a telescope as they are too costly. Getting back to understanding apparent field of view, here's an example of how to better understand what it means. Picture yourself sitting on a couch in your living room and looking out a standard window (say a 3x5 foot window). You can probably see some trees, maybe a portion of the neighbor's house, etc. Now, sitting in the same spot, imagine that there was a 6x12 foot picture window. Now you can see a LOT more outside! 
Nothing visible through the window looks any LARGER, we just see MORE of what is outside. For eyepieces with a wider apparent field of view, the results will be similar. So what is actual field of view? When you use a particular eyepiece in a given telescope, it will "see" a small portion of the sky. In general, the more magnification that is used the LESS sky we will see. For example the Moon is about 1/2 of a degree wide. Most small telescopes using low magnification can see the entire Moon (with a good amount of "breathing room" surrounding it). If we switch to an eyepiece that results in higher magnification, we will more likely only be able to see a portion of the Moon at any one time. So, for any given eyepiece/telescope combination, the eyepiece will allow you to see a particular portion of the sky. For a low power eyepiece, a typical actual field of view (for an entry level telescope) might be in the order of 2 degrees. For the same telescope using a higher magnification eyepiece, the field of view might be (for example) more like 1/4 of a degree. By use of an eyepiece that has a wide apparent field of view (say 70 degrees or so) and one that also results in a fairly high magnification (for example around 100x), it is possible to obtain some very dramatic views of objects like the Moon (basically you have fairly high magnification AND a wide actual field of view at the same time)! As with most anything good, there is a downside: cost. Eyepieces that have very wide fields of view are often pretty expensive and they will not be "standard equipment" on any entry level telescope. The main thing to take away from this section is that eyepieces with wider apparent fields of view are generally easier to use (think of the difference between looking out a porthole vs. a picture window). For any given eyepiece, the actual field of view is a function of the magnification the eyepiece provides with a particular telescope (the more magnification the less sky you will see at once). Yes. There are 3 or 4 eyepieces to be avoided that are fairly commonly supplied with entry level telescopes. Ones to avoid include eyepieces with the following markings: H25mm, H20mm, H12.5mm and SR4mm. Especially avoid them if they are of the .965" barrel diameter! As mentioned in a previous section, eyepieces with H markings (Huygens optical configuration) are generally of not very good quality (the image will tend to be blurry around the perimeter of the field of view) and the apparent fields of view are on the smaller side. Huygens eyepieces with smaller focal lengths tend to be worse than those with larger focal lengths. Most users that attempt to use an SR4mm eyepiece will find it of little practical use. Such eyepieces are provided with scopes only to allow the scope to claim a very high maximum magnification (many people just starting out associate "high magnification" with "high quality", a notion that is completely false). In general, the smaller the focal length of the eyepiece the harder it is to physically look through. This is because the opening is very small and you have to very carefully center your eye over it in order to see anything. Couple that with the shakiness that a high magnification will result in using a small scope and you will be lucky to see much of anything. Regarding eyepiece size (barrel diameter), I strongly advise avoiding any telescope that can only accept .965" eyepieces. 
The reason is this: the availability of quality .965" eyepieces has greatly diminished in recent years and you will have a very hard time finding quality eyepieces to upgrade to. There is a huge variety of quality eyepieces available in the 1.25" size so this is the size eyepiece you want your first telescope to be able to use. Number 1. Make sure the telescope accepts the 1.25" size (it is OK if it also takes the 2" size as adapters are widely available to allow using 1.25" eyepieces in scopes that take 2" eyepieces). Number 2. If the telescope comes with 2 eyepieces, you want one of them to produce a good low power magnification (in the range of 25x - 50x) and the other one should produce a good higher magnification (in the range of 90x - 120x). These magnifications will be the ones most commonly used in most any telescope (including expensive advanced telescopes)! Also, if the scope comes with 2 eyepieces try to make sure that the focal length of one is NOT simply twice that of the other. For example, I've seen scopes that come with a 20mm eyepiece and a 10mm eyepiece. Why is this not the best choice? If you eventually obtain a Barlow lens (an accessory that typically doubles the power of any eyepiece) then you will effectively have only 3 unique magnifications with those 2 eyepieces. It would be better to have a a scope come with 2 eyepieces that are more like 25mm and 10mm. When used in conjunction with a Barlow lens this would provide you with 4 unique magnifications (instead of only 3 with the other case) when using a 2x Barlow lens. The bottom line is this: The best eyepieces for an entry level scope will typically include one of approximately 25mm focal length and one in the range of 10m - 7mm (depending on the scope's focal length). Always avoid telescopes with the eyepieces mentioned in the previous paragraph!!! As for the design of the eyepiece, Plossl is arguably the best optical design for an entry level scope. Some of the least expensive scopes won't come with Plossl eyepieces (due to cost); just be sure to stay away from H and SR eyepieces and chances are you will be fine. For most people, 2 will be fine for starting out, and eventually a third eyepiece could be added to your collection. Ideally a low, medium and high magnification eyepiece set is perfect for most people (with low and high magnifications being the first two to obtain). Alternatively, a decent quality 2x Barlow lens will double the magnification of any given eyepiece. So, if you have 2 eyepieces you can likely end up with 4 unique magnifications by using a Barlow lens (a Barlow lens would cost about the same as a decent quality beginner eyepiece, or around $50- $60). You can always add more eyepieces later if your interest grows! Possibly. If you do wear glasses and have to use them when looking through a telescope, you may find eyepieces that produce larger magnifications (eyepieces with smaller focal lengths) difficult to look through. This is because you cannot get your eye close enough to the eyepiece to comfortably see through it while wearing glasses. There is a specification associated with eyepieces that we have not mentioned yet: eye relief. Eye relief specifies how far away you can hold your eye and still easily see image in the eyepiece. There are eyepieces available with what is known as "long eye relief". Such eyepieces typically have eye relief of around 20mm (this should be adequate for most anyone who wears glasses at the scope). Eye relief is not always specified with eyepieces. 
If it is not called out, chances are it is not a long eye relief eyepiece. The downside with long eye relief eyepieces is that they tend to be somewhat more costly than other eyepiece designs (maybe $100 per eyepiece). Keep in mind however that most people who wear eyeglasses at the scope will typically only find certain eyepieces (the ones that generate higher magnifications) problematic. Most good quality eyepieces will have at least some of their elements treated with anti reflection optical coatings. Such coatings help to transmit more light and reduce glare and loss of contrast. You can often tell that an eyepiece has coatings as the optics will tend to have a bluish or greenish tint to them. Ideally all air-to-glass interfaces in an eyepiece will have anti reflection coatings, but the more coatings an eyepiece has the more it will cost (and the better the view will be too). Filters are used to enhance viewing of certain objects (there are different filters for different subjects and viewing conditions). The filters basically thread into one end of the eyepiece. Virtually all eyepieces available today are threaded to accept filters (all but one brand I have ever encountered use a standardized thread so there is little chance of incompatibility among filters and eyepieces). In general, when changing eyepieces (to get a different magnification), some refocusing of the telescope will be required. Eyepieces that are parafocal will need only a very minor (if any) refocusing. In general parafocal eyepieces are eyepieces of different focal lengths from the same family of eyepieces. Being parafocal (or not) with each other has no bearing on quality or performance, it is simply something that provides convenience. The best thing is to not let them get dirty (keep them covered and in their cases when not in use). If you must clean them, do so carefully. Never use anything like Windex! Use only a soft, CLEAN camel hair brush or use compressed air (from a can, NOT from a garage air compressor), or a cloth that is meant for optics (kits such as Orion Deluxe 6-Piece Optics Cleaning Kit or Orion Optics Cleaning Kit are examples of what should be used if you need to clean your eyepiece optics) . Never disassemble an eyepiece to attempt to clean the interior. No dirt can enter inside the eyepiece, taking it apart almost certainly assures you will not get it back together properly! No. You *can* spend a lot if you want the very best, however for most people starting out the cost of such eyepieces is not justified (or necessary). Very good eyepieces can be had for around $50 each. As you progress in astronomy you can always move up to more expensive eyepieces. The more costly eyepieces offer very wide fields of view with outstanding image quality throughout the field. Some of these eyepieces cost over $500! However, keep in mind that many of the $50 eyepieces will get you 80% of the view of the very best at 1/10 the cost. If you are just starting out and have $500 to spend, it would be much wiser to buy a better telescope before delving into exotic high end eyepieces. Below are some examples of eyepieces that would be very good choices for starting out. If you purchased a good starter scope chances are you already have eyepieces that are perfectly fine. However if you have an older scope or one with less than great eyepieces, the ones I list below are excellent choices for upgrading. Note that these are all 1.25" diameter eyepieces, your scope must accept this size eyepiece for these to work! 
The first two are ones I recommend as my top 2 picks for excellent low and high power views in the vast majority of entry level telescopes. These eyepieces are of a quality level that you won't outgrow in a month (these are good all around workhorses that will be useful even if you upgrade to a more sophisticated telescope in the future). These eyepieces have all of the features discussed earlier: a wide 50-deg apparent field of view, and the optics are fully coated with magnesium fluoride on every air-to-glass surface (improves contrast and reduces scattering). Cost is around $55 each including shipping. Orion 25mm Sirius Plossl eyepiece. This would be an excellent choice for a low power eyepiece for most telescopes. Orion 7.5mm Sirius Plossl eyepiece. This would be an excellent choice for a high power eyepiece for most telescopes. Orion 12.5mm Sirius Plossl eyepiece. This would be an excellent choice for a medium power eyepiece for most telescopes. Use your browser's "back" button, or use links below if you arrived here via some other path: This page is part of the site Amateur Astronomer's Notebook. E-mail to Joe Roberts HTML text © Copyright 2009 by Joe Roberts.
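To recap the arithmetic used throughout this page (telescope focal length divided by eyepiece focal length, with a 2x Barlow doubling the result), here is a minimal sketch. The true-field line (apparent field divided by magnification) is a common rule of thumb that is not stated above, so treat it as an approximation.

```python
# A recap of the arithmetic used on this page. The magnification and Barlow
# rules come from the text above; the "true field ~= apparent field /
# magnification" line is a common rule of thumb, not something stated here.

def magnification(scope_focal_mm, eyepiece_focal_mm, barlow=1.0):
    """Telescope focal length divided by eyepiece focal length (times any Barlow)."""
    return scope_focal_mm * barlow / eyepiece_focal_mm

def true_field_deg(apparent_field_deg, mag):
    """Approximate actual sky coverage for a given eyepiece/telescope pairing."""
    return apparent_field_deg / mag

scope = 700.0                       # mm, the example focal length used earlier
for ep, afov in [(25.0, 50.0), (7.5, 50.0)]:
    for barlow in (1.0, 2.0):
        m = magnification(scope, ep, barlow)
        print(f"{ep:4.1f} mm eyepiece, {barlow:.0f}x Barlow: "
              f"{m:5.1f}x, ~{true_field_deg(afov, m):.2f} deg true field")
```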
http://www.rocketroberts.com/astro/eyepiece_basics.htm
An Euler diagram is a diagrammatic means of representing sets and their relationships. The first use of "Eulerian circles" is commonly attributed to Swiss mathematician Leonhard Euler (1707–1783). They are closely related to Venn diagrams. Venn and Euler diagrams were incorporated as part of instruction in set theory as part of the new math movement in the 1960s. Since then, they have also been adopted by other curriculum fields such as reading.

Overview

Euler diagrams consist of simple closed curves (usually circles) in the plane that depict sets. The sizes or shapes of the curves are not important: the significance of the diagram is in how they overlap. The spatial relationships between the regions bounded by each curve (overlap, containment or neither) correspond to set-theoretic relationships (intersection, subset and disjointness). Each Euler curve divides the plane into two regions or "zones": the interior, which symbolically represents the elements of the set, and the exterior, which represents all elements that are not members of the set. Curves whose interior zones do not intersect represent disjoint sets. Two curves whose interior zones intersect represent sets that have common elements; the zone inside both curves represents the set of elements common to both sets (the intersection of the sets). A curve that is contained completely within the interior zone of another represents a subset of it.

Venn diagrams are a more restrictive form of Euler diagrams. A Venn diagram must contain all the possible zones of overlap between its curves, representing all combinations of inclusion/exclusion of its constituent sets, but in an Euler diagram some zones might be missing. When the number of sets grows beyond three (or even with three sets, if more than two curves are allowed to pass through the same point), multiple mathematically distinct Venn diagrams begin to appear. Venn diagrams represent the relationships between n sets with all 2^n zones; Euler diagrams need not contain all of the zones. (An example is given below in the History section; in the top-right illustration the O and I diagrams are merely rotated; Venn stated that this difficulty in part led him to develop his diagrams.)

In a logical setting, one can use model theoretic semantics to interpret Euler diagrams, within a universe of discourse. In the examples above, the Euler diagram depicts that the sets Animal and Mineral are disjoint since the corresponding curves are disjoint, and also that the set Four Legs is a subset of the set of Animals. The Venn diagram, which uses the same categories of Animal, Mineral, and Four Legs, does not encapsulate these relationships. Traditionally the emptiness of a set in Venn diagrams is depicted by shading in the region. Euler diagrams represent emptiness either by shading or by the use of a missing region.

Often a set of well-formedness conditions is imposed; these are topological or geometric constraints on the structure of the diagram. For example, connectedness of zones might be enforced, or concurrency of curves or multiple points might be banned, as might tangential intersection of curves. In the diagram to the right, examples of small Venn diagrams are transformed into Euler diagrams by sequences of transformations; some of the intermediate diagrams have concurrency of curves. However, this sort of transformation of a Venn diagram with shading into an Euler diagram without shading is not always possible.
There are examples of Euler diagrams with 9 sets that are not drawable using simple closed curves without the creation of unwanted zones since they would have to have non-planar dual graphs. Sir William Hamilton in his posthumously published Lectures on Metaphysics and Logic (1858–60) asserts that the original use of circles to "sensualize ... the abstractions of Logic" (p. 180) was not Leonhard Paul Euler (1707–1783) but rather Christian Weise (?–1708) in his Nucleus Logicoe Weisianoe that appeared in 1712 posthumously. He references Euler's Letters to a German Princess on different Matters of Physics and Philosophy1" [1Partie ii., Lettre XXXV., ed. Cournot. – ED.] In Hamilton's illustration the four forms of the syllogism as symbolized by the drawings A, E, I and O are: - A: The Universal Affirmative, Example: "All metals are elements". - E: The Universal Negative, Example: "No metals are compound substances". - I: The Particular Affirmative, Example: "Some metals are brittle". - O: The Particular Negative, Example: "Some metals are not brittle". - "...of the first sixty logical treatises, published during the last century or so, which were consulted for this purpose:-somewhat at random, as they happened to be most accessible :-it appeared that thirty four appealed to the aid of diagrams, nearly all of these making use of the Eulerian Scheme." (Footnote 1 page 100) - “In fact ... those diagrams not only do not fit in with the ordinary scheme of propositions which they are employed to illustrate, but do not seem to have any recognized scheme of propositions to which they could be consistently affiliated.” (pp. 124–125) - "We now come to Euler's well-known circles which were first described in his Lettres a une Princesse d'Allemagne (Letters 102–105). The weak point about these consists in the fact that they only illustrate in strictness the actual relations of classes to one another, rather than the imperfect knowledge of these relations which we may possess, or wish to convey, by means of the proposition. Accordingly they will not fit in with the propositions of common logic, but demand the constitution of a new group of appropriate elementary propositions.... This defect must have been noticed from the first in the case of the particular affirmative and negative, for the same diagram is commonly employed to stand for them both, which it does indifferently well". (italics added: page 424) By 1914 Louis Couturat (1868–1914) had labeled the terms as shown on the drawing on the right. Moreover, he had labeled the exterior region (shown as a'b'c') as well. He succinctly explains how to use the diagram – one must strike out the regions that are to vanish: - "VENN'S method is translated in geometrical diagrams which represent all the constituents, so that, in order to obtain the result, we need only strike out (by shading) those which are made to vanish by the data of the problem." (italics added p. 73) - "No Y is Z and ALL X is Y: therefore No X is Z" has the equation x'yz' + xyz' + x'y'z for the unshaded area inside the circles (but note that this is not entirely correct; see the next paragraph). - "No Y is Z and ALL X is Y: therefore No X is Z" has the equation x'yz' + xyz' + x'y'z + x'y'z' . Couturat now observes that, in a direct algorithmic (formal, systematic) manner, one cannot derive reduced Boolean equations, nor does it show how to arrive at the conclusion "No X is Z". Couturat concluded that the process "has ... 
serious inconveniences as a method for solving logical problems": - "It does not show how the data are exhibited by canceling certain constituents, nor does it show how to combine the remaining constituents so as to obtain the consequences sought. In short, it serves only to exhibit one single step in the argument, namely the equation of the problem; it dispenses neither with the previous steps, i. e., "throwing of the problem into an equation" and the transformation of the premises, nor with the subsequent steps, i. e., the combinations that lead to the various consequences. Hence it is of very little use, inasmuch as the constituents can be represented by algebraic symbols quite as well as by plane regions, and are much easier to deal with in this form."(p. 75) - "For more than three variables, the basic illustrative form of the Venn diagram is inadequate. Extensions are possible, however, the most convenient of which is the Karnaugh map, to be discussed in Chapter 6." (p. 64) - "The Karnaugh map1 [1Karnaugh 1953] is one of the most powerful tools in the repertory of the logic designer. ... A Karnaugh map may be regarded either as a pictorial form of a truth table or as an extension of the Venn diagram." (pp. 103–104) Example: Euler- to Venn-diagram and Karnaugh mapThis example shows the Euler and Venn diagrams and Karnaugh map deriving and verifying the deduction "No X's are Z's". In the illustration and table the following logical symbols are used: - 1 can be read as "true", 0 as "false" - ~ for NOT and abbreviated to ' when illustrating the minterms e.g. x' =defined NOT x, - + for Boolean OR (from Boolean algebra: 0+0=0, 0+1 = 1+0 = 1, 1+1=1) - & (logical AND) between propositions; in the mintems AND is omitted in a manner similar to arithmetic multiplication: e.g. x'y'z =defined ~x & ~y & z (From Boolean algebra: 0*0=0, 0*1 = 1*0=0, 1*1 = 1, where * is shown for clarity) - → (logical IMPLICATION): read as IF ... THEN ..., or " IMPLIES ", P → Q =defined NOT P OR Q Given the example above, the formula for the Euler and Venn diagrams is: - "No Y's are Z's" and "All X's are Y's": ( ~(y & z) & (x → y) ) =defined P - "No X's are Z's": ( ~ (x & z) ) =defined Q - ( ~(y & z) & (x → y) ) → ( ~ (x & z) ): P → Q - IF ( "No Y's are Z's" and "All X's are Y's" ) THEN ( "No X's are Z's" ) |Square #||Venn, Karnaugh region||x||y||z||(~||(y||&||z)||&||(x||→||y))||→||(~||(x||&||z))| Modus ponens (or "the fundamental rule of inference") is often written as follows: The two terms on the left, "P → Q" and "P", are called premises (by convention linked by a comma), the symbol ⊢ means "yields" (in the sense of logical deduction), and the term on the right is called the conclusion: - P → Q, P ⊢ Q - P → Q , P ⊢ Q - i.e.: ( ~(y & z) & (x → y) ) → ( ~ (x & z) ) , ( ~(y & z) & (x → y) ) ⊢ ( ~ (x & z) ) - i.e.: IF "No Y's are Z's" and "All X's are Y's" THEN "No X's are Z's", "No Y's are Z's" and "All X's are Y's" ⊢ "No X's are Z's" The use of tautological implication means that other possible deductions exist besides "No X's are Z's"; the criterion for a successful deduction is that the 1's under the sub-major connective on the right include all the 1's under the sub-major connective on the left (the major connective being the implication that results in the tautology). 
For example, in the truth table, on the right side of the implication (→, the major connective symbol) the bold-face column under the sub-major connective symbol " ~ " has the all the same 1s that appear in the bold-faced column under the left-side sub-major connective & (rows 0, 1, 2 and 6), plus two more (rows 3 and 4). A Venn diagram shows all possible intersections. Euler diagram visualizing a real situation, the relationships between various supranational European organisations. Euler diagram visualizing a real situation, the relationships between various supranational African organisations. Humorous diagram comparing Euler and Venn diagrams. Euler diagram of types of triangles, assuming isosceles triangles have at least 2 equal sides. Euler diagram of terminology of the British Isles. - Strategies for Reading Comprehension Venn Diagrams - By the time these lectures of Hamilton were published, Hamilton too had died. His editors (symbolized by ED.), responsible for most of the footnoting, were the logicians Henry Longueville Mansel and John Veitch. - Hamilton 1860:179. The examples are from Jevons 1881:71ff. - See footnote at George Stibitz. - This is a sophisticated concept. Russell and Whitehead (2nd edition 1927) in their Principia Mathematica describe it this way: "The trust in inference is the belief that if the two former assertions [the premises P, P→Q ] are not in error, the final assertion is not in error . . . An inference is the dropping of a true premiss [sic]; it is the dissolution of an implication" (p. 9). Further discussion of this appears in "Primitive Ideas and Propositions" as the first of their "primitive propositions" (axioms): *1.1 Anything implied by a true elementary proposition is true" (p. 94). In a footnote the authors refer the reader back to Russell's 1903 Principles of Mathematics §38. - cf Reichenbach 1947:64 - Reichenbach discusses the fact that the implication P → Q need not be a tautology (a so-called "tautological implication"). Even "simple" implication (connective or adjunctive) will work, but only for those rows of the truth table that evaluate as true, cf Reichenbach 1947:64–66. ReferencesBy date of publishing: - Sir William Hamilton 1860 Lectures on Metaphysics and Logic edited by Henry Longueville Mansel and John Veitch, William Blackwood and Sons, Edinburgh and London. - W. Stanley Jevons 1880 Elemetnary Lessons in Logic: Deductive and Inductive. With Copious Questions and Examples, and a Vocabulary of Logical Terms, M. A. MacMillan and Co., London and New York. - John Venn 1881 Symbolic Logic, MacMillan and Co., London. - Alfred North Whitehead and Bertrand Russell 1913 1st edition, 1927 2nd edition Principia Mathematica to *56 Cambridge At The University Press (1962 edition), UK, no ISBN. - Louis Couturat 1914 The Algebra of Logic: Authorized English Translation by Lydia Gillingham Robinson with a Preface by Philip E. B. Jourdain, The Open Court Publishing Company, Chicago and London. - Emil Post 1921 "Introduction to a general theory of elementary propositions" reprinted with commentary by Jean van Heijenoort in Jean van Heijenoort, editor 1967 From Frege to Gödel: A Sourcebook of Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, ISBN 0-674-42449-8 (pbk.) - Claude E. Shannon 1938 "A Symbolic Analysis of Relay and Switching Circuits", Transactions American Institute of Electrical Engineers vol 57, pp. 471–495. Derived from Claude Elwood Shannon: Collected Papers edited by N.J.A. Solane and Aaron D. 
Wyner, IEEE Press, New York. - Hans Reichenbach 1947 Elements of Symbolic Logic republished 1980 by Dover Publications, Inc., NY, ISBN 0-486-24004-5. - Edward W. Veitch 1952 "A Chart Method for Simplifying Truth Functions", Transactions of the 1952 ACM Annual Meeting, ACM Annual Conference/Annual Meeting "Pittsburgh", ACM, NY, pp. 127–133. - Maurice Karnaugh November 1953 The Map Method for Synthesis of Combinational Logic Circuits, AIEE Committee on Technical Operations for presentation at the AIEE summer General Meeting, Atlantic City, N. J., June 15–19, 1953, pp. 593–599. - Frederich J. Hill and Gerald R. Peterson 1968, 1974 Introduction to Switching Theory and Logical Design, John Wiley & Sons NY, ISBN 0-71-39882-9. - Ed Sandifer 2003 How Euler Did It, http://www.maa.org/editorial/euler/How%20Euler%20Did%20It%2003%20Venn%20Diagrams.pdf |Wikimedia Commons has media related to: Euler diagrams|
http://www.eskesthai.com/search/label/Venn
13
74
Science Fair Project Encyclopedia This article is about angles in geometry. For other articles, see Angle (disambiguation) An angle (from the Lat. angulus, a corner, a diminutive, of which the primitive form, angus, does not occur in Latin; cognate are the Lat. angere, to compress into a bend or to strangle, and the Gr. ἄγκοσ, a bend; both connected with the Aryan or Indo-European root ank-, to bend) is the figure formed by two rays sharing a common endpoint, called the vertex of the angle. Angles provide a means of expressing the difference in slope between two rays meeting at a vertex without the need to explicitly define the slopes of the two rays. Angles are studied in geometry and trigonometry. Euclid defines a plane angle as the inclination to each other, in a plane, of two lines which meet each other, and do not lie straight with respect to each other. According to Proclus an angle must be either a quality or a quantity, or a relationship. The first concept was used by Eudemus , who regarded an angle as a deviation from a straight line; the second by Carpus of Antioch , who regarded it as the interval or space between the intersecting lines; Euclid adopted the third concept, although his definitions of right, acute, and obtuse angles are certainly quantitative. Units of measure for angles In order to measure an angle, a circle centered at the vertex is drawn. Since the circumference of a circle is always directly proportional to the length of its radius, the measure of the angle is independent of the size of the circle. Note that angles are dimensionless, since they are defined as the ratio of lengths. - The radian measure of the angle is the length of the arc cut out by the angle, divided by the circle's radius. The SI system of units uses radians as the (derived) unit for angles. - The degree measure of the angle is the length of the arc, divided by the circumference of the circle, and multiplied by 360. The symbol for degrees is a small superscript circle, as in 360°. 2π radians is equal to 360° (a full circle), so one radian is about 57° and one degree is π/180 radians. - The grad, also called grade or gon, is an angular measure where the arc is divided by the circumference, and multiplied by 400. It is used mostly in triangulation. - The point is used in navigation, and is defined as 1/32 of a circle, or exactly 11.25°. - The full circle or full turns represents the number or fraction of complete full turns. For example, π/2 radians = 90° = 1/4 full circle Conventions on measurement A convention universally adopted in mathematical writing is that angles given a sign are positive angles if measured counterclockwise, and negative angles if measured clockwise, from a given line. If no line is specified, it can be assumed to be the x-axis in the Cartesian plane. In navigation and other areas this convention may not be followed. In mathematics radians are assumed unless specified otherwise because this removes the arbitrariness of the number 360 in the degree system and because the trigonometric functions can be developed into particularly simple Taylor series if their arguments are specified in radians. Types of angles An angle of π/2 radians or 90°, one-quarter of the full circle is called a right angle. Angles smaller than a right angle are called acute angles; angles larger than a right angle are called obtuse angles. Angles equal to two right angles are called straight angles. Angles larger than two right angles are called reflex angles. 
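The unit relationships listed above (2π radians = 360° = 400 gons = 32 points = one full turn) and the classifications just given are easy to mechanize. The snippet below is a small illustrative sketch, not part of the encyclopedia entry; the function names are my own.

```python
import math

def degrees_to(angle_deg):
    """Convert an angle in degrees to the other units described above."""
    return {
        "radians": angle_deg * math.pi / 180,   # 360 degrees = 2*pi radians
        "gons":    angle_deg * 400 / 360,       # 360 degrees = 400 gons (grads)
        "points":  angle_deg / 11.25,           # 1 point = 1/32 of a circle = 11.25 degrees
        "turns":   angle_deg / 360,             # fraction of a full circle
    }

def classify(angle_deg):
    """Classify an angle between 0 and 360 degrees using the definitions above."""
    if angle_deg < 90:
        return "acute"
    if angle_deg == 90:
        return "right"
    if angle_deg < 180:
        return "obtuse"
    if angle_deg == 180:
        return "straight"
    return "reflex"

print(degrees_to(90))                 # {'radians': 1.5707..., 'gons': 100.0, 'points': 8.0, 'turns': 0.25}
print(classify(57.3), classify(135))  # roughly one radian is acute; 135 degrees is obtuse
```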
The difference between an acute angle and a right angle is termed the complement of the angle, and between an angle and two right angles the supplement of the angle. In Euclidean geometry, the inner angles of a triangle add up to π radians or 180°; the inner angles of a quadrilateral add up to 2π radians or 360°. In general, the inner angles of a simple polygon with n sides add up to (n − 2) × π radians or (n − 2) × 180°. If two straight lines intersect, four angles are formed. Each one has an equal measure to the angle across from it; these congruent angles are called vertical angles. If a straight line intersects two parallel lines, corresponding angles at the two points of intersection are equal; adjacent angles are supplementary, that is, they add to π radians or 180°.
Angles in different contexts
In Euclidean space, the angle θ between two vectors u and v is related to their dot product by u · v = |u| |v| cos θ. This allows one to define angles in any real inner product space, replacing the Euclidean dot product · by the Hilbert space inner product <·,·>. The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases:—amphicyrtic (Gr. ἀμφί, on both sides, κυρτός, convex) or cissoidal (Gr. κισσός, ivy), biconvex; xystroidal or sistroidal (Gr. ξυστρίς, a tool for scraping), concavo-convex; amphicoelic (Gr. κοίλη, a hollow) or angulus lunularis, biconcave. Also a plane and an intersecting line form an angle. This angle is equal to π/2 radians minus the angle between the intersecting line and the line that goes through the point of intersection and is perpendicular to the plane.
Angles in Riemannian geometry
In Riemannian geometry, the metric tensor is used to define the angle between two tangent vectors.
Angles in astronomy
In astronomy, one can measure the angular separation of two stars by imagining two lines through the Earth, each one intersecting one of the stars. Then the angle between those lines can be measured; this is the angular separation between the two stars. Astronomers also measure the apparent size of objects. For example, the full moon has an angular measurement of 0.5° when viewed from Earth. One could say, "The Moon subtends an angle of half a degree." The small-angle formula can be used to convert such an angular measurement into a distance/size ratio.
Angles in maritime navigation
The obsolete (but still commonly used) format of angle used to indicate longitude or latitude is hemisphere degree minute' second", where there are 60 minutes in a degree and 60 seconds in a minute, for instance N 51 23′26″ or E 090 58′57″.
- Central angle
- Complementary angles
- Inscribed angle
- Supplementary angles
- solid angle for a concept of angle in three dimensions.
- Angle Bisectors
- Angle Bisectors and Perpendiculars in a Quadrilateral
- Angle Bisectors in a Quadrilateral
- Constructing a triangle from its angle bisectors
- Online Unit Converter - Conversion of many different units
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/Angle
13
22
Use the Law of Cosines with SSS
When you know the values for two or more sides of a triangle, you can use the law of cosines. In the following case, you know all three sides (which is called SSS, or side-side-side, in trigonometry) but none of the angles. What you see here is how to solve for the measures of the three angles in triangle ABC, which has sides where a is 7, b is 8, and c is 2. As you can see in the preceding figure, the triangle appears to have two acute angles and one obtuse angle, the obtuse angle being opposite the longest side.
Solve for the measure of angle A. Using the law of cosines where side a is on the left of the equation, substitute the values that you know and simplify the equation:
a² = b² + c² − 2bc·cos A, so 49 = 64 + 4 − 2(8)(2)·cos A, which gives cos A = 19/32 ≈ 0.594.
Now use a scientific calculator to find the measure of A. A = cos⁻¹(0.594) = 53.559. Angle A measures about 54 degrees.
Solve for the measure of angle B. Using the law of cosines where side b is on the left of the equation, input the values that you know and simplify the equation:
b² = a² + c² − 2ac·cos B, so 64 = 49 + 4 − 2(7)(2)·cos B, which gives cos B = −11/28 ≈ −0.393.
The negative cosine means that the angle is obtuse — its terminal side is in the second quadrant. Now use a scientific calculator to find the measure of B. B = cos⁻¹(−0.393) = 113.141. Angle B measures about 113 degrees.
Determine the measure of angle C. Because angle A measures 54 degrees and angle B measures 113 degrees, add them together and subtract the sum from 180 to get the measure of angle C. 180 − (54 + 113) = 180 − 167 = 13. Angle C measures only 13 degrees.
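The same arithmetic can be scripted. The short sketch below (my own illustration, not part of the original lesson) reproduces the three angle measures for a = 7, b = 8 and c = 2.

```python
import math

def sss_angles(a, b, c):
    """Return angles A, B, C (in degrees) of a triangle from its three sides (SSS)."""
    A = math.degrees(math.acos((b**2 + c**2 - a**2) / (2 * b * c)))
    B = math.degrees(math.acos((a**2 + c**2 - b**2) / (2 * a * c)))
    C = 180 - A - B   # the angles of a triangle sum to 180 degrees
    return A, B, C

A, B, C = sss_angles(7, 8, 2)
print(round(A, 3), round(B, 3), round(C, 3))
# about 53.6, 113.1 and 13.3 degrees; C differs slightly from the article's 13
# because A and B are not rounded to whole degrees before subtracting.
```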
http://www.dummies.com/how-to/content/use-the-law-of-cosines-with-sss.navId-420746.html
13
10
When James Clerk Maxwell was doing his work with electrodynamics, several of the concepts that we have been considering had not yet been introduced to the world of mathematics. For instance, vector calculus was a very young discipline, and many of the operators currently in use (Div, Curl, the Laplacian) did not exist in Maxwell's time. The original "Maxwell's Equations" were a set of 20 complicated differential equations that placed a primary focus on the idea of magnetic potential (a quantity which is almost completely ignored in the modern variants of these equations). Heinrich Hertz and Oliver Heaviside did much of the work to convert Maxwell's original equations into a more convenient form. The Electric and Magnetic fields were deemed to be of primary importance, whereas the magnetic potential was dropped from the formalization. From Hertz and Heaviside we obtained the 4 equations that we know today as "Maxwell's Equations".
The 4 Equations
Here are Maxwell's equations. Several of these equations have been seen already in previous chapters.
∇ · E = ρ/ε₀ [Gauss' Law of Electrostatics]
∇ · B = 0 [Gauss' Law of Magnetostatics]
∇ × E = −∂B/∂t [Faraday's Law of Induction]
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t [Ampère's Law with Maxwell's correction]
Where: ρ is the charge density, which can (and often does) depend on time and position, ε₀ is the permittivity of free space, μ₀ is the permeability of free space, and J is the current density vector, also a function of time and position. The units used above are the standard SI units. Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors. We can write Maxwell's equations in another form that relates each field to its sources. By taking the curl of the third equation, we get ∇ × (∇ × E) = −∂(∇ × B)/∂t. It should be noticed, if not immediately, that the first two equations are essentially equivalent, and that the second two equations have a similar form and should be able to be put into a single form. We can use our field tensors F and G to put the 4 Maxwell's equations into two more concise equations. You may notice that these two equations are very similar, but they are not completely symmetric. The magnetic field equations reduce because magnetic fields always have two opposite poles, whereas an electric field may have only a single charge. This lack of symmetry in these equations has prompted scientists to search for a magnetic monopole, something that we will talk about in later chapters. Besides the forms of these equations, modern "unified theories" of physics seeking to describe all forces of nature (including, significantly, electromagnetism and gravity) often posit the existence of monopoles. As a basic consideration, similarity and symmetry among many equations and processes in physics often leads to the discovery of entirely new entities or phenomena. Thus, the pronounced lack of symmetry between the magnetic and electric field equations is a simple and logical reason for scientists to search for monopoles.
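As a small consistency check on the vacuum form of these equations, the sketch below (an illustration assuming SymPy is available; it is not part of the original text) verifies symbolically that a linearly polarized plane wave with dispersion relation ω = ck satisfies the source-free Maxwell equations, component by component.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
E0, k, c = sp.symbols('E0 k c', positive=True)
omega = c * k                      # dispersion relation for light in vacuum

# Plane wave travelling in +z: E along x, B along y, in phase, |B| = |E|/c
Ex = E0 * sp.cos(k * z - omega * t)
By = (E0 / c) * sp.cos(k * z - omega * t)

# Gauss's laws with no charges present: both divergences must vanish
div_E = sp.diff(Ex, x)             # E has only an x-component, which depends on z and t only
div_B = sp.diff(By, y)             # B has only a y-component, which depends on z and t only

# Faraday's law, y-component: (curl E)_y + dBy/dt = dEx/dz + dBy/dt = 0
faraday = sp.simplify(sp.diff(Ex, z) + sp.diff(By, t))

# Ampere-Maxwell law in vacuum, x-component: (curl B)_x - (1/c^2) dEx/dt = -dBy/dz - dEx/dt / c^2 = 0
ampere = sp.simplify(-sp.diff(By, z) - sp.diff(Ex, t) / c**2)

print(div_E, div_B, faraday, ampere)   # prints: 0 0 0 0
```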
http://en.m.wikibooks.org/wiki/Electrodynamics/Maxwell's_Equations
13
29
Although you might not know it, your thinking and questioning can be the start of the scientific inquiry process. Scientific inquiry refers to the diverse ways in which scientists study the natural world and propose explanations based on the evidence they gather. If you have ever tried to figure out why a plant has wilted, then you have used scientific inquiry. Similarly, you could use scientific inquiry to find out whether there is a relationship between the air temperature and crickets’ chirping. Scientific inquiry often begins with a problem or question about an observation. In the case of the crickets, your question might be: Does the air temperature affect the chirping of crickets? Of course, questions don’t just come to you from nowhere. Instead, questions come from experiences that you have and from observations and inferences that you make. Curiosity plays a large role as well. Think of a time that you observed something unusual or unexpected. Chances are good that your curiosity sparked a number of questions. Some questions cannot be investigated by scientific inquiry. Think about the difference between the two questions below. Does my dog eat more food than my cat? Which makes a better pet—a cat or a dog? The first question is a scientific question because it can be answered by making observations and gathering evidence. For example, you could measure the amount of food your cat and dog each eat during a week. In contrast, the second question has to do with personal opinions or values. Scientific inquiry cannot answer questions about personal tastes or judgments. How could you explain your observation of noisy crickets on that summer night? “Perhaps crickets chirp more when the temperature is higher,” you think. In trying to answer the question, you are in fact developing a hypothesis. A hypothesis (plural: hypotheses) is a possible explanation for a set of observations or answer to a scientific question. In this case, your hypothesis would be that cricket chirping increases at higher air temperatures. In science, a hypothesis must be testable. This means that researchers must be able to carry out investigations and gather evidence that will either support or disprove the hypothesis. Many trials will be needed before a hypothesis can be accepted as true. To test your hypothesis, you will need to observe crickets at different air temperatures. All other variables , or factors that can change in an experiment, must be exactly the same. Other variables include the kind of crickets, the type of container you test them in, and the type of thermometer you use. By keeping all of these variables the same, you will know that any difference in cricket chirping must be due to temperature alone. An experiment in which only one variable is manipulated at a time is called a controlled experiment . The one variable that is purposely changed to test a hypothesis is called the manipulated variable (also called the independent variable). In your cricket experiment, the manipulated variable is the air temperature. The factor that may change in response to the manipulated variable is called the responding variable (also called the dependent variable). The responding variable here is the number of cricket chirps. Suppose you are designing an experiment to determine whether birds eat a larger number of sunflower seeds or millet seeds. What is your manipulated variable? What is your responding variable? What other variables would you need to control? 
Another aspect of a well-designed experiment is having clear operational definitions. An operational definition is a statement that describes how to measure a variable or define a term. For example, in this experiment you would need to determine what sounds will count as a single “chirp.” For your experiment, you need a data table in which to record your data. Data are the facts, figures, and other evidence gathered through observations. A data table is an organized way to collect and record observations. After the data have been collected, they need to be interpreted. A graph can help you interpret data. Graphs can reveal patterns or trends in data. Figure 8A Controlled Experiment In their controlled experiment, these students are using the same kind of containers, thermometers, leaves, and crickets. The manipulated variable in this experiment is temperature. The responding variable is the number of cricket chirps per minute at each temperature. Controlling Variables What other variables must the students keep constant in this experiment? A conclusion is a summary of what you have learned from an experiment. In drawing your conclusion, you should ask yourself whether the data support the hypothesis. You also need to consider whether you collected enough data. After reviewing the data, you decide that the evidence supports your original hypothesis. You conclude that cricket chirping does increase with temperature. It’s no wonder that you have trouble sleeping on those warm summer nights! Scientific inquiry usually doesn’t end once a set of experiments is done. Often, a scientific inquiry raises new questions. These new questions can lead to new hypotheses and new experiments. Also, scientific inquiry is not a rigid sequence of steps. Instead, it is a process with many paths, as shown in Figure 9. An important part of the scientific inquiry process is communicating your results. Communicating is the sharing of ideas and experimental findings with others through writing and speaking. Scientists share their ideas in many ways. For example, they give talks at scientific meetings, exchange information on the Internet, and publish articles in scientific journals. When scientists communicate their research, they describe their procedures in full detail so that other scientists can repeat their experiments. Figure 9Scientific Inquiry There is no set path that a scientific inquiry must follow. Observations at each stage of the process may lead you to modify your hypothesis or experiment. Conclusions from one experiment often lead to new questions and experiments.
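For the cricket investigation described above, the data-interpretation step can be as simple as fitting a straight line to chirps per minute against temperature. The sketch below uses made-up numbers purely for illustration; they are not data from the textbook.

```python
import numpy as np

# Hypothetical data table: air temperature (deg F) vs. cricket chirps per minute
temperature = np.array([55, 60, 65, 70, 75, 80])          # manipulated variable
chirps_per_min = np.array([60, 80, 100, 120, 140, 160])   # responding variable

# Fit a straight line: chirps = slope * temperature + intercept
slope, intercept = np.polyfit(temperature, chirps_per_min, 1)
print(f"slope = {slope:.1f} chirps/min per deg F, intercept = {intercept:.1f}")

# A positive slope is the kind of trend that would support the hypothesis
# that chirping increases with temperature.
print("hypothesis supported:", slope > 0)
```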
http://district.fms.k12.nm.us/Departments/currinst/textbooks/Science/Life_Science_Textbook/iText/products/0-13-190309-8/ch1/ch1_s2_1.html
13
51
In 1929, Edwin Hubble discovered that the universe was expanding, and the velocity of expansion was a function of the distance from the Earth. For example, galaxies at a “proper distance” D from the Earth were moving away from the Earth at a velocity v, according to the following equation: v = H0D, where H0 is the constant of proportionality (the Hubble constant). In this context, the phrase “proper distance” means a distance (D) measured at a specific time. Obviously, since the galaxies are moving away from the Earth, the distance D will change (i.e. increase) with time. Until 1998, most physicists and cosmologists believed that the expansion would eventually be slowed by gravity and be reversed (i.e. all matter in the universe would eventually be pulled by gravity to a common point resulting in the “Big Crunch”). In 1998, three physicists (Saul Perlmutter, Brian P. Schmidt, and Adam G. Riess) decided to measure the expansion and expected to confirm that it was slowing down. To their surprise, and the scientific community’s surprise, they discovered that the universe’s expansion was accelerating. In 2011, they received the Nobel Prize in Physics for their discovery. The accelerated expansion of the universe is one of the great mysteries in science. Since the vast majority of scientists believe in the principle of cause and effect, the scientific community postulated that something was causing the accelerated expansion. They named the cause “dark energy,” which they believed was some kind of vacuum force. Today we know that extremely distant galaxies are actually moving away from the Earth with a velocity that exceeds the speed of light. This serves to deepen the mystery. Let us turn our attention to what is causing the accelerated expansion of the universe. First, let us understand that the extremely distant galaxies themselves are not moving away from the Earth faster than the speed of light. A mass, including a galaxy, cannot obtain a velocity greater than the speed of light, according to Einstein’s special theory of relativity. Any theory that attempts to explain the faster than light velocity of extremely distant galaxies via any type of force acting on the galaxies would contradict Einstein’s special theory of relativity. Therefore, we must conclude the galaxies themselves are not moving faster than the speed of light. However, no law of physics prohibits the expansion of space faster than the speed of light. With this understanding, it is reasonable to conclude the space between extremely distant galaxies is expanding faster than the speed of light, which accounts for our observation that the galaxies are moving away from Earth at a velocity faster than the speed of light. What is causing the space between extremely distant galaxies to expand faster than the speed of light? To address this question, let us discuss what we know about space and, more specifically, about vacuums. In my book, Unraveling the Universe’s Mysteries, I explain that vacuums are actually a reservoir for virtual particles. This is not a new theory. Paul Dirac, the famous British physicist and Nobel Laureate, asserted in 1930 that vacuums contain electrons and positrons (i.e. a positron is the antimatter counterpart of an electron). This is termed the Dirac sea. 
Asserting that vacuums contain matter-antimatter particles is equivalent to asserting that vacuums contain positive and negative energy, based on Einstein’s famous mass-energy equivalence equation, E = mc2 (where E stands for energy, m is the rest mass of an object, and c is the speed of light in a vacuum). Do vacuum really contain particles or energy? Our experimentation with laboratory vacuums proves they do. However, we have no way to directly measure the energy of a vacuum or directly observe virtual particles within the vacuum. As much as we physicists talk about energy, we are unable to measure it directly. Instead, we measure it indirectly via its effects. For example, we are able to measure the Casimir-Polder force, which is an attraction between a pair of closely spaced electrically neutral metal plates in a vacuum. In effect, virtual particles pop in and out of existence, in accordance with the Heisenberg Uncertainty Principle, at a higher density on the outside surfaces of the plates. The density of virtual particles between the plates is less due to their close spacing. The higher density of virtual particles on the outside surfaces of the plates acts to push the plates together. This well-known effect is experimental evidence that virtual particles exist in a vacuum. This is just one effect regarding the way virtual particles affect their environment. There is a laundry list of other effects that prove virtual particles are real and exist in a vacuum. I previously mentioned the Heisenberg Uncertainty Principle. I will now explain it, as well as the role it plays in the spontaneous creation of virtual particles. The Heisenberg Uncertainty Principle describes the statistical behavior of mass and energy at the level of atoms and subatomic particles. Here is a simple analogy. When you heat a house, it is not possible to heat every room uniformly. The rooms themselves and places within each room will vary in temperature. The Heisenberg Uncertainty Principle says the same about the energy distribution within a vacuum. It will vary from point to point. When energy accumulates at a point in a vacuum, virtual particle pairs (matter and antimatter) are forced to pop into existence. The accumulation of energy and the resulting virtual particle pairs are termed a quantum fluctuation. Clearly, vacuums contain energy in the form of virtual particle pairs (matter-antimatter). By extension, we can also argue that the vacuums between galaxies contain energy. Unfortunately, with today’s technology, we are unable to measure the amount of energy or the virtual particle pairs directly. Why are we unable to measure the virtual particle pairs in a vacuum directly? Two answers are likely. First, they may not exist as particles in a vacuum, but rather as energy. As stated previously, we are unable to measure energy directly. Second, if they exist as particles, they may be extremely small, perhaps having a diameter in the order of a Planck length. In physics, the smallest length believed to exist is the Planck length, which science defines via fundamental physical constants. We have no scientific equipment capable of measuring anything close to a Planck length. For our purposes here, it suffices to assert that vacuums contain energy. We are unable to measure the amount of energy directly, but we are able to measure the effects the energy has on its environment. Next, let us consider existence. Any mass requires energy to exist (move forward in time). 
In my book, Unraveling the Universe’s Mysteries, the Existence Equation Conjecture is derived, discussed, and shown to be consistent with particle acceleration data. The equation is: KEX4 = – .3 mc2, where KEX4 is the kinetic energy associated with moving in the fourth dimension (X4) of Minkowski space, m is the rest mass of an object, and c is the speed of light in a vacuum. This asserts that for a mass to exist (defined as movement in time), it requires energy, as described by the Existence Equation Conjecture. (For simplicity, from this point forward I will omit the word “conjecture” and refer to the equation as the “Existence Equation.”) Due to the enormous negative energy implied by the Existence Equation, in my book, Unraveling the Universe’s Mysteries, I theorized that any mass draws the energy required for its existence from the universe, more specifically from the vacuum of space. Below, I will demonstrate that this gives rise to what science terms dark energy and causes the accelerated expansion of space. At this point, let us address two questions: 1. Is the Existence Equation correct? I demonstrated quantitatively in Appendix 2, Unraveling the Universe’s Mysteries, that the equation accurately predicts a muon’s existence (within 2%), when the muon is accelerated close to the speed of light. Based on this demonstration, there is a high probability that the Existence Equation is correct. 2. What is the space between galaxies? The space between galaxies is a vacuum. For purposes here, I am ignoring celestial objects that pass through the vacuum between galaxies. I am only focusing on the vacuum itself. From this standpoint, based on Dirac’s assertion and our laboratory experiments, we can conclude that vacuums contain matter-antimatter (i.e. the Dirac sea), or equivalently (from Einstein’s famous mass-energy equivalence equation, E = mc2) positive-negative energy. Given that a vacuum contains mass, we can postulate that each mass within a vacuum exerts a gravitational pull on every other mass within the vacuum. This concept is based on Newton’s classical law of gravity, F = G (m1 m2)/r2, where m1 is one mass (i.e. virtual particle) and m2 is another mass (i.e. virtual particle), r is the distance between the two masses, G is constant of proportionality (i.e. the gravitational constant), and F is the force of attraction between the masses. If we think of a vacuum as a collection of virtual particles, it appears reasonable to assume the gravitational force will define the size of the vacuum. This is similar to the way the size of a planet is determined by the amount of mass that makes up the planet and the gravitational force holding the mass together. This is a crucial point. The density of virtual particles defines the size of the vacuum. However, we have shown that existence requires energy (via the Existence Equation). A simple review of the Existence Equation delineates that the amount of energy a mass requires to exist is enormous. The energy of existence is directly proportional to the mass. Therefore, a galaxy, which includes stars, planets, dark matter, and celestial objects, would require an enormous amount of energy to exist. In effect, to sustain its existence, the galaxy must continually consume energy in accordance with the Existence Equation. Using the above information, let us address three key questions: 1. What is causing the vacuum of space between galaxies to expand? To sustain their existence, galaxies remove energy from the vacuum (i.e. 
space) that borders the galaxies. The removal of energy occurs in accordance with the Existence Equation. The removal of energy causes the gravitational force defining the vacuum to weaken. This causes the vacuum (space) to expand.
2. Why are the distant galaxies expanding at a greater rate than those galaxies closer to the Earth? The galaxies that are extremely distant from the Earth have existed longer than those closer to the Earth. Therefore, distant galaxies have consumed more energy from the vacuums of space that surround them than galaxies closer to the Earth.
3. Why is the space within a galaxy not expanding? A typical galaxy is a collection of stars, planets, celestial objects, and dark matter. We know from observational measurements that dark matter only exists within a galaxy and not between galaxies. I believe the dark matter essentially allows the galaxy to act as if it were one large mass. From this perspective, it appears that the dark matter blocks any removal of energy from the vacuum (i.e. space) within a galaxy.
Does this solve the profound mystery regarding the accelerated expansion of the universe? To my mind, it does. I leave it to you, my colleagues, to draw your own conclusions.
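As a closing numerical aside on the expansion law v = H0D quoted at the start of this piece, the sketch below (my own illustration, not from the post; it assumes a round value of H0 of about 70 km/s per megaparsec, which the post does not give) shows the recession velocity at a few proper distances and the distance at which that velocity formally reaches the speed of light.

```python
# Hubble's law v = H0 * D, with an assumed round value of H0.
H0_km_s_per_Mpc = 70.0          # assumed value, km/s per megaparsec
c_km_s = 299_792.458            # speed of light in km/s

for D_Mpc in (100, 1000, 5000):
    v = H0_km_s_per_Mpc * D_Mpc
    print(f"D = {D_Mpc:5d} Mpc  ->  v = {v:8.0f} km/s  ({v / c_km_s:.2f} c)")

# Distance at which the recession velocity formally reaches the speed of light
# (the "Hubble distance"); beyond it, space itself recedes faster than light.
hubble_distance_Mpc = c_km_s / H0_km_s_per_Mpc
print(f"Hubble distance ~ {hubble_distance_Mpc:.0f} Mpc")
```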
http://www.louisdelmonte.com/unraveling-the-universes-accelerated-expansion/
13
34
Comets are relatively small icy bodies, often only a few kilometers in extent, that formed in the outer solar system where temperatures are cold enough to sustain (predominately water) ices. They represent the leftover bits and pieces from the outer solar system formation process that took place some 4.6 billion years ago. Over long time periods, some comets are perturbed from their distant orbits and sent close enough to the sun that their ices begin to vaporize. This out-gassing of gas and dust from a comet's nucleus produces an atmosphere (i.e., coma) often extending many hundreds of thousands of kilometers. Because of the reflection of sunlight from its dust particles and the fluorescence of its excited gases, this atmosphere glows with a "fuzzy" appearance when viewed from the ground. As this coma material continues to expand away from the solid cometary nucleus, the gas component is eventually "blown" away from the sun by a high speed stream of charged particles from the sun (solar wind). The comet's dust component is also blown away from the sun - this time by the pressure of sunlight on the tiny dust particles. Thus a comet can have both a gas tail and a dust tail. Comets originate in the outer regions of our planetary system with one group forming in the region near the current orbits of Uranus and Neptune and another group, called the Kuiper belt objects, forming somewhat more distant to the sun - beyond the orbit of Neptune. As a result of their interactions with the outer major planets, the comets in the first group can be thrown out to the distant Oort cloud some 50,000 to 150,000 times further from the sun than the Earth. Close passing single stars and the gravitational interaction with our Milky Way disk of stars can then nudge these comets back into the inner solar system where they can arrive with any inclination with respect to the Earth's orbital plane. Sometimes these objects can be seen as impressive, long-period comets like comet Hale-Bopp that was easily observable to the naked-eye in 1997. comets orbit the sun with periods ranging from 200 to several million years. Comets that form in the so-called Kuiper belt (or Edgeworth-Kuiper belt after the two researchers who hypothesized these comets in the mid twentieth century) are also acted upon gravitationally by the massive outer planets and they often evolve into the short-periodic comets, whose orbital inclinations are usually relatively close to the Earth's orbital plane. With their orbital periods of about 5-7 years, these short-period comets orbit the sun frequently, lose much of their volatile ices, and are often far less visually impressive than their long-period cometary cousins that arrive fresh from the Oort cloud. Read this short article by Don Yeomans to learn why comets are particularly interesting and why we should study these primitive bodies. Then, learn about some of the great comets of the past in this article by Don Yeomans. Orbits: Diagrams & Elements The orbit of any comet (or asteroid) can be viewed using our java-based orbit applet. Start with our small-body browser to find the asteroid of interest, then select the Orbit Diagram link. For example, here is the orbit diagram for comet 1P/Halley. Orbital elements and related parameters are also available for any comet (or asteroid) using our small-body browser. In addition, custom tables of orbital elements and/or physical parameters are available using our small body database search engine. We also provide fixed-format ASCII tables of elements. 
Warning: If you intend to use cometary orbital elements in a two-body propagation to compute future/past positions (ephemerides), your results will be inaccurate and, in some cases, completely incorrect. The motion of comets is affected by their so-called non-gravitational forces (the rocket-like force from outgassing of material from the comet while close to the sun). Thus, it is especially important to use HORIZONS to compute comet ephemerides. Physical parameters for comets are not well known, primarily because these bodies are too small for ground-based observing when the comet is far enough from the sun that its coma does not shroud its surface. The only parameters determined for nearly all comets are their magnitude parameters (M1,K1 and/or M2,K2). However, a few comets have other parameters determined, including the geometric albedo. Known physical parameters for any given small body are available from our small-body browser. Comet ephemerides are available using JPL's HORIZONS system. Discovery circumstances for many comets are also available using our small-body browser. Discovery data include the date of discovery, who discovered the comet, and where it was discovered. Spacecraft missions to small bodies provide valuable scientific data, ultimately improving our understanding of these primitive solar system bodies. A list of asteroids and comets targeted by spacecraft missions (past, present, and future) is presented on this page. Radar astrometry for selected comets is available in tabular format. A table showing data for only comets is presented on this page.
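One quantity that is safe to compute from published orbital elements without a full propagation is the orbital period, via Kepler's third law. The sketch below is an illustration only, not part of the JPL page; the semi-major axis used for comet 1P/Halley is an approximate value of about 17.8 AU.

```python
def orbital_period_years(a_au):
    """Kepler's third law for a body orbiting the sun: P^2 = a^3 (P in years, a in AU)."""
    return a_au ** 1.5

# Approximate semi-major axis of comet 1P/Halley
print(round(orbital_period_years(17.8), 1))   # roughly 75 years, matching Halley's well-known period
```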
http://ssd.jpl.nasa.gov/?comets
13
23
Geocentricity and Creation by Gerald A. Aardsma, Ph.D. 1. What is geocentricity? Geocentricity is a conceptual model of the form of the universe which makes three basic assertions about the nature of the earth and its relationship to the rest of the universe. These are: a. the earth is the center of the universe, b. the earth is fixed (i.e., immobile) in space, and c. the earth is unique and special compared to all other heavenly bodies. 2. What is the History of geocentricity? The teaching of geocentricity can be traced in western thought at least back to Aristotle (384-322 B.C.). Aristotle argued, for example, that the reason why all bodies fall to the ground is because they seek their natural place at the center of the universe which coincides with the center of the earth. A geocentric model of the universe seems first to have been formalized by Ptolemy, the famous Greek astronomer who lived in Alexandria around A.D. 130. Ptolemy's model envisioned each planet moving in a small circle, the center of which moved along a large circular orbit about the earth. This model was generally accepted until Copernicus published his heliocentric model in 1543. The heliocentric view pictures the sun as motionless at the center of the solar system with all the planets, including the earth, in motion around it. Copernicus' heliocentric model, because it used circles to describe the orbits of the planets about the sun instead of ellipses, was as clumsy and inaccurate as Ptolemy's geocentric model. However, it was conceptually simpler. It quickly gained acceptance, though not without considerable controversy. The conflict between these two views came to a head in the well-known trial of Galileo by the Inquisition in 1632. Starting from a heliocentric viewpoint, Kepler (1571-1630) was able to formulate laws of planetary motion which accurately described the orbits of the planets for the first time. Newton (1643-1727) was then able to explain why Kepler's laws worked based upon his famous law of gravity. This tremendous progress in understanding resulted in almost universal acceptance of heliocentricity and rejection of geocentricity. 3. What does modern science say about geocentricity? Many attempts were made to prove that heliocentricity was true and geocentricity was false, right up until the early 1900's. All such attempts were unsuccessful. The most well-known of these is the Michelson-Morley experiment which was designed to measure the change in the speed of light, due to the assumed motion of the earth through space, when measured in different directions on the earth's surface. The failure of this experiment to detect any significant change played an important role in the acceptance of Einstein's theory of special relativity. The theory of special relativity holds as a basic assumption that the speed of light will always be the same everywhere in the universe irrespective of the relative motion of the source of the light and the observer. The ability of special relativity to successfully explain many non-intuitive physical phenomena which are manifested by atomic particles when moving at speeds greater than about one-tenth the speed of light seems to corroborate this assumption. Thus, the failure of the Michelson-Morley experiment (and all other experiments of similar intent) to detect any motion of the earth through space is understood by modern science in terms of relativity rather than geocentricity. Einstein's theory of general relativity adds further to the debate. 
It asserts that it is impossible for a human observer to determine whether any material body is in a state of absolute rest (i.e., immobile in space). It claims that only motion of two material bodies relative to one another can be physically detected. According to this theory the geocentric and heliocentric viewpoints are equally valid representations of reality, and it makes no sense whatsoever scientifically to speak of one as being true and the other false. This shift in emphasis from an either-or argument to a synthesis and acceptance of both viewpoints is summed up by the well-known astronomer, Fred Hoyle, as follows: The relation of the two pictures [geocentricity and heliocentricity] is reduced to a mere coordinate transformation and it is the main tenet of the Einstein theory that any two ways of looking at the world which are related to each other by a coordinate transformation are entirely equivalent from a physical point of view.... Today we cannot say that the Copernican theory is 'right' and the Ptolemaic theory 'wrong' in any meaningful physical sense. Relativity is the theory which is accepted as the correct one by the great majority of scientists at present. However, many science teachers and textbooks are not aware of this, and it is not uncommon to find heliocentricity taught as the progressive and "obviously true" theory even today. 4. What does the Bible teach about geocentricity? To learn what the Bible teaches regarding geocentricity, it is necessary to consider separately the three basic assertions of uniqueness, centrality, and fixity mentioned above since the composite "theory of geocentricity" is nowhere mentioned in the Bible. The assertion that the earth is unique and special (item "c" above) is clearly and unequivocally taught in the first chapter of Genesis. The plain sense of the creation account is that all other heavenly bodies were not even brought into existence until the fourth day of creation. Thus, God had already created the earth, separated the waters above and below the atmosphere, formed the earth into continents and oceans, and brought forth vegetation upon the earth before He paused to create the solar system, the Milky Way, and all of the other material bodies in the universe. It is very clear that the creation of the earth was distinct from that of any other heavenly body. The Biblical doctrine of the uniqueness of the earth is strongly supported by modern space exploration. In particular, every effort by scientists to demonstrate that life does or possibly could exist on other planets in our solar system has so far failed. Such efforts have only served to underscore how different the earth is in this regard from all other heavenly bodies which we have been able to study. While the earth teems with life, elsewhere space appears to be only barren and incredibly hostile to life. The earth gives every indication that it was specially designed for life, and it is unique in this regard. In contrast to the bountiful evidence in the Bible which teaches that the earth is special, nowhere is it taught that the earth is the center of the universe (item "a" above). In fact, the Bible provides no explicit teaching on any questions relating to the form of the universe. We are not told, for example, whether the universe is finite or infinite, and no explicit statement can be found to help us know whether space is flat or curved. 
This is the type of information we would need to deduce whether the earth is at the center of the universe or if it even makes sense to say that the universe has a center. On matters relating to the physical form of the universe, the Bible is mute. This leaves the more controversial assertion (item "b" above) that the earth is motionless in space to be discussed. In fact, the Bible contains no explicit teaching on this matter either. Nowhere does the Bible set about to deal explicitly with the question of whether the earth is moving through space or not. To be sure, one can fashion implicit arguments for an immobile earth from the Bible, but in no instance do the Bible verses used to accomplish this goal rest in a context of an overall discussion of the physical form of the universe. Evidently, while the physical form of the universe is an interesting scientific issue, it is not of very great importance Biblically. The lack of explicit Biblical teaching on this whole matter makes it impossible to call any conceptual model of the form of the universe "the Biblical view." 5. What is the role of geocentricity in creationism? The Biblical status of the doctrine of creation contrasts sharply with that of geocentricity. The Bible opens with the explicit declaration: "In the beginning God created the heavens and the earth," and Genesis 1 goes on to outline in detail the doctrine of creation. While it is impossible to find any definitive teaching in the Bible on the physical form of the universe, it is impossible to miss the explicit teaching in the Bible that the world was supernaturally created by God, for it permeates Scripture. Geocentricity and creationism are really separate matters. Because of the contrast in the way the Bible deals with these two issues, I believe that attempts to link geocentricity and creationism are ill-founded. 6. What can we learn of general importance from the geocentricity-helio-centricity relativity debate? Perhaps the most important lesson to be learned from the history of geocentricity is in connection with the question, "What role should scientific discovery play in the interpretation of the Bible?" It is surely ironic to see the incident of Galileo's trial before the Inquisition paraded as a supposedly unarguable illustration of the "mistake" recent-creationists make when they insist on a literal, supernatural, six-day creation and fail to yield to modern scientific views of how the universe came to be. "After all," we hear, "the theologians said that Galileo's heliocentric viewpoint was heresy, but now everybody knows that the theologians were wrong and Galileo was right." In actual fact, as we have seen above, the current scientific consensus is that "Today we cannot say that the Copernican theory [which Galileo held] is 'right' and the Ptolemaic theory [which the theologians held] 'wrong' in any meaningful physical sense." The generally overlooked lesson here is that scientific theories do not provide a very secure basis from which to interpret Scripture. In the course of the last five hundred years the weight of scientific consensus has rested in turn with each of three different theories about the form of the universe: first geocentricity, then helio-centricity, and now relativity. This is the way it is with scientific theories—they come and go. But the Word of God endures forever. Let us be immovable in upholding what the Bible clearly teaches. Fred Hoyle, Nicolaus Copernicus (London: Heinemann Educational Books Ltd., 1973), p. 78. * Dr. 
Aardsma is Assistant Professor of Astro/Geophysics at ICR.
Bouw, D. "The Bible and Geocentricity." Bulletin of the Tychonian Society, no. 41 (January, 1987), 22-25. (A more recent work by Bouw is: Geocentricity [Cleveland: Association for Biblical Astronomy, 1992].)
Hoyle, Fred. Nicolaus Copernicus. London: Heinemann Educational Books Ltd., 1973.
Reichenbach, Hans. From Copernicus to Einstein. New York: Dover Publications, Inc., 1980.
Ronan, Colin Alistair. "Copernicus." The New Encyclopedia Britannica. 15th ed. XVI, 814-815.
http://www.icr.org/article/382/
13
20
- acceleration: The acceleration of an object measures the rate of change in its velocity. We use the second derivative [f''(t)] to calculate this.
- acute angle: An angle between 0° and 90°.
- addition rule (probability): P(A or B) = P(A) + P(B) – P(A and B). We can use this for all types of events. However, if the events are mutually exclusive, we do not need the "– P(A and B)" part. This is because P(A and B) = 0 for mutually exclusive events.
- additive inverse: When a number is added to its additive inverse the answer is zero.
- adjacent angles: Two angles are adjacent if they share the same vertex and have one side in common between them.
- algebraic expression: An expression made up of any number of terms separated from each other by + and −.
- algebraic fraction: A fraction which contains variables.
- alternate angles: When lines are parallel, we have equal alternate angles. Look for a Z or an N.
- alternating series: This is a series in which every term has the opposite sign from the preceding term, e.g. 1 − 3 + 5 − 4 + 9 . . .
- altitude: Perpendicular height of a shape. A triangle has three altitudes.
- analytical geometry: This is the branch of mathematics that uses algebra to help in the study of geometry.
- angle of depression: Measured from the horizontal downwards.
- angle of elevation: Measured from the horizontal upwards.
- apex: This is the pointed tip of a cone or pyramid.
- arc of a circle: An arc is part of the circumference of a circle.
- area: The amount of surface or the size of a surface, measured in square units.
- arithmetic sequence: Sequences with a constant first difference, i.e. you need to add or subtract the same amount to get the next term (e.g. 5 ; 1 ; −3 ; −7 ; . . .).
- arithmetic series: You get this when you add the terms of an arithmetic sequence (e.g. 5 + 1 − 3 − 7 − . . .).
- ascending order: From smallest to biggest.
- asymptote: A straight line which a graph approaches, but never reaches. In the example below, we have a horizontal asymptote.
- axiom: A mathematical fact that is accepted to be true without needing to prove it, e.g. a tangent to a circle is perpendicular to a radius of that circle, at the point of contact.
- axis of symmetry: The line which divides a shape (or graph) so that one half is the mirror image of the other half.
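One of the entries above, the addition rule for probability, can be checked by direct counting. The sketch below is an illustration only (it is not part of the dictionary); it uses one roll of a fair die with A = "even" and B = "greater than 3".

```python
from fractions import Fraction

outcomes = range(1, 7)                       # one roll of a fair six-sided die
A = {n for n in outcomes if n % 2 == 0}      # even:        {2, 4, 6}
B = {n for n in outcomes if n > 3}           # more than 3: {4, 5, 6}

P = lambda event: Fraction(len(event), 6)    # probability of an event by counting outcomes

lhs = P(A | B)                               # P(A or B) counted directly
rhs = P(A) + P(B) - P(A & B)                 # the addition rule
print(lhs, rhs, lhs == rhs)                  # 2/3 2/3 True
```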
http://www.youcandomaths.co.za/eng/dictionary/
13
13
Mercury's elliptical orbit takes the small planet as close as 29 million miles (47 million kilometers) and as far as 43 million miles (70 million kilometers) from the sun. If one could stand on the scorching surface of Mercury when it is at its closest point to the sun, the sun would appear almost three times as large as it does when viewed from Earth. Temperatures on Mercury's surface can reach 800 degrees Fahrenheit (430 degrees Celsius). Because the planet has no atmosphere to retain that heat, nighttime temperatures on the surface can drop to -280 degrees Fahrenheit (-170 degrees Celsius). Because Mercury is so close to the sun, it is hard to directly observe from Earth except during twilight. Mercury makes an appearance indirectly, however, 13 times each century. Earth observers can watch Mercury pass across the face of the sun, an event called a transit. These rare transits fall within several days of May 8 and November 10. Scientists used to think that the same side of Mercury always faces the sun, but in 1965 astronomers discovered that the planet rotates three times during every two orbits. Mercury speeds around the sun every 88 days, traveling through space at nearly 31 miles (50 kilometers) per second faster than any other planet. The length of one Mercury day (sidereal rotation) is equal to 58.646 Earth days. Rather than an atmosphere, Mercury possesses a thin exosphere made up of atoms blasted off its surface by solar wind and striking micrometeoroids. Because of the planet's extreme surface temperature, the atoms quickly escape into space. With the thin exosphere, there has been no wind erosion of the surface and meteorites do not burn up due to friction as they do in other planetary atmospheres. Mercury's surface resembles that of Earth's moon, scarred by many impact craters resulting from collisions with meteoroids and comets. While there are areas of smooth terrain, there are also lobe-shaped scarps or cliffs, some hundreds of miles long and soaring up to a mile (1.6 kilometers) high, formed by early contraction of the crust. The Caloris Basin, one of the largest features on Mercury, is about 800 miles (1,300 kilometers) in diameter. It was the result of an asteroid impact on the planet's surface early in the solar system's history. Over the next half-billion years, Mercury shrank in radius about 0.6 to 1.2 miles (1 to 2 kilometers) as the planet cooled after its formation. The outer crust contracted and grew strong enough to prevent magma from reaching the surface, ending the period of geologic activity. Mercury is the second smallest planet in the solar system, larger only than previously measured planets, such as Pluto. Mercury is the second densest planet after Earth, with a large iron core having a radius of 1,100 to 1,200 miles (1,800 to 1,900 kilometers), about 75 percent of the planet's radius. Mercury's outer shell, comparable to Earth's outer shell (called the mantle), is only 300 to 400 miles (500 to 600 kilometers) thick. Mercury's magnetic field is thought to be a miniature version of Earth's, but scientists are uncertain of the strength of the field. Missions to Mercury Only one spacecraft has ever visited Mercury: Mariner 10, which imaged about 45 percent of the surface. In 1991, astronomers using radar observations showed that Mercury may have water ice at its north and south poles inside deep craters that are perpetually cold. 
Falling comets or meteorites might have brought ice to these regions of Mercury, or water vapor might have outgassed from the interior and frozen out at the poles. A new NASA mission to Mercury called MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) will begin orbiting Mercury in March 2011 to investigate key scientific areas such as the planet's composition, the structure of the core, the magnetic field, and the materials at the poles. —Text courtesy NASA/JPL
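The "almost three times as large" figure near the top of the article follows from the quoted distances, since apparent (angular) size scales roughly as the inverse of distance. A quick sketch, using the article's numbers plus an assumed average Earth-sun distance of about 93 million miles:

```python
# Apparent size of the sun scales roughly as 1 / distance.
earth_sun_million_miles = 93          # approximate average Earth-sun distance (assumed here)
mercury_perihelion = 29               # closest Mercury-sun distance quoted in the article
mercury_aphelion = 43                 # farthest Mercury-sun distance quoted in the article

print(round(earth_sun_million_miles / mercury_perihelion, 1))  # ~3.2x larger than seen from Earth
print(round(earth_sun_million_miles / mercury_aphelion, 1))    # ~2.2x larger than seen from Earth
```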
http://science.nationalgeographic.com/science/space/solar-system/mercury-article/
13
50
Circumference Of a Circle When Radius is Given Video Tutorial
This tutorial will show you how to find the circumference when given the radius. You will learn the relationship between the diameter and the radius. It is important to take note that we need to use the diameter in the formula and not the radius, so we need to take the measurement for the radius and figure out the diameter in order to solve for the circumference. The video tutorial is recommended for 3rd through 10th grade math students studying algebra, geometry, basic math, or pre-algebra.
Circles are simple shapes of Euclidean geometry. A circle consists of those points in a plane which are at a constant distance, called the radius, from a fixed point, called the center. A chord of a circle is a line segment both of whose endpoints lie on the circle. A diameter is a chord passing through the center. The length of a diameter is twice the radius. A diameter is the largest chord in a circle. Circles are simple closed curves which divide the plane into an interior and an exterior. The circumference of a circle is the perimeter of the circle, and the interior of the circle is called a disk. An arc is any connected part of a circle. A circle is a special ellipse in which the two foci are coincident. Circles are conic sections attained when a right circular cone is intersected with a plane perpendicular to the axis of the cone. The circumference is the distance around a closed curve. Circumference is a kind of perimeter.
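A minimal version of the calculation the tutorial walks through (a sketch of my own, not the tutorial's code): double the radius to get the diameter, then multiply by π.

```python
import math

def circumference_from_radius(radius):
    diameter = 2 * radius          # the length of a diameter is twice the radius
    return math.pi * diameter      # C = pi * d, equivalently C = 2 * pi * r

print(round(circumference_from_radius(5), 2))   # 31.42
```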
http://tulyn.com/4th-grade-math/radius/videotutorials/circumference-of-a-circle-when-radius-is-given_by_polly.html
13
21
During the first two years of primary education children can learn to do arithmetic faster and better with the help of a more systematically structured educational programme. For older children, teaching arithmetic with the systematic use of visual aids, such as blocks and strings of beads, has many advantages. This is apparent from the review study by NWO researchers Egbert Harskamp and Annemieke Jacobse from the University of Groningen, The Netherlands. Harskamp and Jacobse investigated the effect of new forms of instruction in arithmetic education. They examined the outcomes of 40 experimental studies aimed at improving arithmetic skills. Their findings revealed, for example, that a standardised and clearly structured programme that employs a variety of methods enables young children to learn arithmetic quicker and better than they do under the methods commonly used now, most of which do not offer a structured programme. Good methods include the use of picture books about arithmetic, group discussions, arithmetic games and songs. Short, focused board games or computer games also contribute to an improved development of young pupils' counting skills and number comprehension. 'Visualisation is a good method for slightly older children,' says Egbert Harskamp, endowed professor of effective learning environments at the University of Groningen. 'For addition and subtraction up to 100 it was found, for example, that offering rows of blocks or a string of beads in a ten structure considerably improved the arithmetic performances, as long as the teaching method used had a clear structure. The teaching methods for arithmetic currently used in Dutch schools contain some visual models for addition and subtraction but these are not usually presented in a coherent manner. This unstructured use can be confusing for pupils.' The use of the computer in arithmetic education was also found to be effective. Dutch arithmetic teaching methods do not treat different types of calculation in a structured manner and various subjects are offered in a single lesson. Educational computer programs, however, have the advantage of a consistent structure that offers the material subject by subject. Moreover, well-designed computer programs provide instruction, testing and feedback components for pupils, and allow teachers to register how the pupils are progressing and where additional guidance is needed. The studies were done in English-speaking countries. They covered number comprehension, basic operations, measurement and geometry, ratio calculations (percentages, fractions and ratios) or the solving of applied problems. A total of more than 6800 pupils from primary education were involved. The review of these studies has been presented in two publications: A Meta-Analysis of the Effects of Instructional Interventions on Students' Mathematics Achievement and Effective arithmetic instruction with the help of computers. The second publication is mainly aimed at teachers.
http://phys.org/news/2012-02-children-arithmetic-faster.html
13
22
According to a NASA press release, about half of Greenland's surface ice sheet naturally melts during an average summer. But the data from three independent satellites this July, analyzed by NASA and university scientists, showed that in less than a week, the amount of thawed ice sheet surface skyrocketed from 40 percent to 97 percent. In over 30 years of observations, satellites have never measured this amount of melting, which reaches nearly all of Greenland's surface ice cover. When Son Nghiem of NASA's Jet Propulsion Laboratory observed the recent melting phenomenon, he said in the NASA press release, "This was so extraordinary that at first I questioned the result: Was this real or was it due to a data error?" Scientists at NASA's Goddard Space Flight Center, University of Georgia-Athens and City University of New York all confirmed the remarkable ice melt. NASA's cryosphere program manager, Tom Wagner, credited the power of satellites for observing the melt and explained to The Huffington Post that, although this specific event may be part of a natural variation, "We have abundant evidence that Greenland is losing ice, probably because of global warming, and it's significantly contributing to sea level rise." Wagner said that ice is clearly thinning around the periphery, changing Greenland's overall ice mass, and he believes this is primarily due to warming ocean waters "eating away at the ice." He cautiously added, "It seems likely that's correlated with anthropogenic warming." This specific extreme melt occurred in large part due to an unusual weather pattern over Greenland this year, what the NASA press release describes as a series of "heat domes," or an "unusually strong ridge of warm air." Notable melting occurred in specific regions of Greenland, such as the area around Summit Station, located two miles above sea level. Not since 1889 has this kind of melting occurred, according to ice core analysis described in NASA's press release. Goddard glaciologist Lora Koenig said that similar melting events occur about every 150 years, and this event is consistent with that schedule, citing the previous 1889 melt. But, she added, "if we continue to observe melting events like this in upcoming years, it will be worrisome." "One of the big questions is 'What's happening in the Arctic in general?'" Wagner said to HuffPost. Wagner explained that in recent years, studies have observed thinning sea ice and "dramatic" overall changes. He was clear: "We don't want to lose sight of the fact that Greenland is losing a tremendous amount of ice overall."

NASA CAPTION: Extent of surface melt over Greenland's ice sheet on July 8 (left) and July 12 (right). Measurements from three satellites showed that on July 8, about 40 percent of the ice sheet had undergone thawing at or near the surface. In just a few days, the melting had dramatically accelerated and an estimated 97 percent of the ice sheet surface had thawed by July 12. In the image, the areas classified as "probable melt" (light pink) correspond to those sites where at least one satellite detected surface melting. The areas classified as "melt" (dark pink) correspond to sites where two or three satellites detected surface melting. The satellites are measuring different physical properties at different scales and are passing over Greenland at different times. As a whole, they provide a picture of an extreme melt event about which scientists are very confident.

It is true the article at least mentions the 150-year cycle.
That is important since Industrialized Nations were not here then, but I date it at 200 years rounded. The Sun and normal earth wobble account for 97% of our climate, leaving only 3%, and out of that 3%, cows have more effect than humans. There are volcanoes and many other natural variables, leaving humans with only the city heat island effect, so stop taking temperatures in growing cities to get real data, and remember humans are normal to the planet and we also do experiments with climate, so nothing unnatural is going on. It would be impossible for it to be unnatural since the whole Universe is natural. We will be so cold by 2020; people will be like in the 70s, yelling about a coming Ice Age again. I admit I am untrusting of most meteorologists, although I do know one of the more famous ones you see on TV often. He is pretty good, but I think I am much better at forecasting because I do not use computer models. We are not going to burn up or freeze to death, at least in the next two thousand years. After that, I do not have enough proof of what will happen yet, but I know what will happen up to that far out and see nothing irregular or anything to cause global destruction with oceans or anything else. Snow lovers are going to like this winter from northern Georgia to the Northeast, as well as much of the Midwest, but will be a little disappointed out West. Enjoy the weather and don't worry about statements of doom. Those are lies, and scientists in the States who do not agree with global warming lose research money while those who agree get money. That tells the story right there. Follow the money to find your conspiracy. After climate study going back to the 1970s, I know humans have little to do with climate other than the city heat island effect. Climate will not be fully understood this century; however, we will have much better information by 2020, due in large part to satellites. There are climate phases: small, mid and large. We are in a new 30-year cold phase, which is seen in the PDO and other factors, thus fewer El Niños and more La Niñas, but the climatology of these MUST be based on the last cold phase rather than the last 30 years or predictions will be off. We are also in a mid phase, which happens every few hundred years, so we have both global cooling on the small phase and warming on the mid phase. This will result in two extremely cold winters followed by one warm winter until 2030. I expect to see Arctic ice build up to 1979 levels by 2020 to prove my theory correct, but my forecasts are top of the line at 90% winter after winter, so I'm doing something right and I'm not a meteorologist. I consider myself a student of more than 30 years. There is no money in being a meteorologist, which is why I am not one. My winter forecasts come out every year Thanksgiving week. Preliminary study shows CPC to be wrong, with this being a brutally cold winter mostly in the East, but this can change until I have full summer data and some fall data.

Now, from the sound of this article, we're only discussing surface ice. I am basing that on the definition I learned in my climate change course at RU 12 years ago, but that to me sounds a lot like just the top layer. There's still a mile or more of glacial ice below that. We're not talking bare rock on Greenland; that would be catastrophic to our oceans and climate. The scariest part though, from what I remember, is that when that glacial ice is exposed, it is gone forever. In this interglacial period we're in, that thick icecap is not being replaced.
So yeah, we just took the protective blanket off all that blue ice, and now it will be melting at an equally rapid pace, barring any dramatic change in the pattern. I already linked this report to a couple friends of mine who are environmental science professors, just to get some clarification on how much ice is really gone. 97% of the surface ice is not 97% of the ice cap though... I hope.

This to me is not surprising; you could see it coming for the last 10 years! I mean, look at this year: many, many places in the United States have seen their warmest months on record, month after month after month. You have amazingly warm waters off the East Coast. Never in my lifetime have I seen Cape May, NJ get week after week after week of 80+ degree waters, and this has been going on from just about late MAY!!!! Many people talk about the air warming up exponentially, but don't forget about your oceans: there is a lag time, I would say of 2-5 years, and the oceans have a HUGE role in controlling our weather patterns. That's another point I want to make as well. Has anyone talked about or noticed the long-duration weather patterns that have been occurring over the past 3-5 years? NOTHING MOVES, AND THOSE AREAS THAT GET HOT AND DRY STAY HOT AND DRY. THOSE AREAS THAT GET WET STAY WET AND FLOODED! ONE EXTREME TO ANOTHER. REMEMBER, GLOBAL WARMING / CLIMATE CHANGE IS ABOUT EXTREMES, AND THAT IS WHAT YOU WILL SEE HAPPENING EXPONENTIALLY OVER THE NEXT DECADE. I have chosen the screen name WildWeather because I know the ramifications of global warming or climate change! The end results are wild weather of all sorts!
http://www.liveweatherblogs.com/index.php?option=com_community&view=groups&task=viewdiscussion&groupid=44&topicid=5176&Itemid=179
13
10
Preliminary data from the Curiosity Mars Science Laboratory, presented at the European Planetary Science Conference on 28 September, indicate that the Gale Crater landing site might be drier than expected. The Curiosity rover is designed to carry out research into whether Mars was ever able to support life, and a key element of this search is the hunt for water. Although Mars has many features on its surface that suggest a distant past in which the planet had abundant liquid water in the form of rivers and lakes, the only water known to be abundant on Mars today is frozen, embedded in the soil, and in large ice caps at both poles. The Dynamic Albedo of Neutrons (DAN) instrument on board Curiosity is designed to detect the location and abundance of water thanks to the way hydrogen (one of water's components) reflects neutrons. When neutrons hit heavy particles, they bounce off with little loss in energy, but when they hit hydrogen atoms (which are much lighter and have approximately the same mass as neutrons), they lose half of their energy. The DAN instrument works by firing a pulse of neutrons at the ground beneath the rover and detecting the way it is reflected. The intensity of the reflection depends on the proportion of water in the ground, while the time the pulse takes to reach the detector is a function of the depth at which the water is located. "The prediction based on previous measurements using the Mars Odyssey orbiter was that the soil in Gale Crater would be around 6% water. But the preliminary results from Curiosity show only a fraction of this," said Maxim Mokrousov (Russian Space Research Institute), the lead designer of the instrument. One possible explanation of the discrepancy lies in the variability of water content across the surface of Mars. There are large-scale variations, with polar regions in particular having high abundances of water, but also substantial local differences even within individual regions on Mars. The Mars Odyssey spacecraft is only able to measure water abundance for an area around 300 by 300 kilometers -- it cannot make high resolution maps. It may therefore be that Odyssey's figure for Gale Crater is an accurate (but somewhat misleading) average of significantly varying hydrogen abundances in different parts the crater. Indeed, over the small distance that the rover has already covered, DAN has observed variations in the detector counting rates that may indicate different levels of hydrogen in the ground, hinting that this is likely to be the case. Curiosity's ability to probe the water content in the Martian soil in specific locations, rather than averages of broad regions, allows for a far more precise and detailed understanding of the distribution of water ice on Mars. EPSC 2012 Press Officer +44 7756 034243 EPSC 2012 Press Officer +44 7754 130109 EPSC Press office (24-28 September only) +34 91 722 3020 (English enquiries) +34 91 722 3021 (Spanish enquiries) Fax: +34 91 722 3022 Russian Space Research Institute, Moscow http://www.europlanet-eu.org/outreach/images/stories/epsc2012/dan_mars_ice.png A: Detecting water on Mars using the DAN instrument The DAN instrument works by firing a pulse of neutrons at the ground beneath the Curiosity rover. If they hit hydrogen (as a component of water ice) the neutrons' kinetic energy is significantly reduced, while other materials in the ground affect the neutrons far less. 
Credit: Russian Federal Space Agency/NASA/JPL-Caltech http://www.nasa.gov/mission_pages/msl/multimedia/pia16082.html
B: The location of the DAN Instrument on the Curiosity rover. This image of NASA's Curiosity rover shows the location of the two components of the Dynamic Albedo of Neutrons instrument. The neutron generator is mounted on the right hip (visible in this view), and the detectors are on the opposite hip. Image credit: NASA/JPL-Caltech
European Planetary Science Congress 2012: The European Planetary Science Congress (EPSC) is the major European meeting on planetary science and attracts scientists from Europe and around the World. The 2012 program includes more than 50 sessions and workshops. The EPSC has a distinctively interactive style, with a mix of talks, workshops and posters, intended to provide a stimulating environment for discussion. This year's meeting will take place at the IFEMA-Feria de Madrid, Spain, from Sunday 23 September to Friday 28 September 2012. EPSC 2012 is organized by Europlanet, a Research Infrastructure funded under the European Commission's Framework 7 Program, in association with the European Geosciences Union, with the support of the Centro de Astrobiologia of Spain's Instituto Nacional de Tecnica Aeroespacial (CAB-INTA). Details of the Congress and a full schedule of EPSC 2012 scientific sessions and events can be found at the official website: http://www.epsc2012.eu/
The Europlanet Research Infrastructure is a major (€6 million) program co-funded by the European Union under the Seventh Framework Program of the European Commission. The Europlanet Research Infrastructure brings together the European planetary science community through a range of Networking Activities, aimed at fostering a culture of cooperation in the field of planetary sciences, Transnational Access Activities, providing European researchers with access to a range of laboratory and field site facilities tailored to the needs of planetary research, as well as on-line access to the available planetary science data, information and software tools, through the Integrated and Distributed Information Service. These programs are underpinned by Joint Research Activities, which are developing and improving the facilities, models, software tools and services offered by Europlanet. Europlanet Project website: http://www.europlanet-ri.eu Europlanet Outreach website: http://www.europlanet-eu.org/outreach Follow #epsc2012 @europlanetmedia on Twitter
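The measurement principle DAN relies on can be illustrated numerically. A standard result from neutron moderation theory (textbook physics, not something stated in the press release) is that in an elastic collision a neutron keeps, on average, a fraction (1 + alpha)/2 of its kinetic energy, where alpha = ((A - 1)/(A + 1))^2 and A is the mass number of the nucleus it hits. The short sketch below shows why hydrogen (A = 1) removes half the energy on average while heavier soil nuclei barely slow the neutrons at all:

    def avg_energy_retained(A):
        """Average fraction of kinetic energy a neutron keeps after one
        elastic collision with a nucleus of mass number A, assuming
        isotropic scattering in the centre-of-mass frame."""
        alpha = ((A - 1) / (A + 1)) ** 2
        return (1 + alpha) / 2

    print(avg_energy_retained(1))    # hydrogen: 0.5, half the energy lost on average
    print(avg_energy_retained(28))   # silicon:  ~0.93, only ~7% lost per collision
    print(avg_energy_retained(56))   # iron:     ~0.97, barely slowed at all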
http://spaceref.com/news/viewpr.html?pid=38722
13
20
Exponential Growth and Decay This section is actually a brief introduction to differential equations. A differential equation is pretty much what it sounds like – it's an equation that involves a function and its derivative. One of the simplest versions of that would be y = y'. Also simple is y' = ky, where k is some constant. The book phrases this dy/dt = ky, but it's the same thing. What does this mathematical phrase mean? I like to read it as "the value of the derivative at a given point is proportional to the value of the function at that point." Eventually we'll be studying differential equations here, and we'll get into how to solve them explicitly. HOWEVER, you may remember a certain type of function which has a derivative proportional to itself: exponential functions! Let's take the function f(t) = e^(kt). (Note, I used t as the variable since the book did, but I could've also used x or p or n or whatever.) If we take the derivative, we get f'(t) = k*e^(kt). So, we see that the derivative of such a function is equal to k*(itself). Nice. You may notice that there is an entire family of functions that obeys that rule. It's of the form f(t) = C*e^(kt), where C is some constant and k is some constant. You may also notice that the equation gets easy to solve at f(0), because the exponential disappears (e^0 = 1), meaning the equation simply reduces to f(0) = C. This is very important. Why? As it happens, exponential equations of this form are easy to solve at t = 0. But, equally handy, real life systems are often easy to understand at t = 0. That is, it's often easy to know what's going on right at the beginning. Let's get into some examples. dP/dt = kP, where P is population, t is time, and k is some constant. How do you read that? There are many ways, but I like to say it as "the change in population is proportional to the current population." Makes sense. If you have two rabbits breeding, there'll be fewer offspring than if you have 20 rabbits breeding. For a simple system, the more individuals you have, the faster the population grows. When we solve like we did last time, we get this: P(t) = C*e^(kt). I read that as "population grows exponentially with time at a rate of k." If we know the initial population, we can easily solve for C by setting t = 0. Pretty simple, I think. The nice thing about differential equations is that you experience them in real life. If you have two cats and they have kittens and their kittens have kittens, the growth rate definitely feels exponential and depends on the number of cats you start with (P at t = 0), the rate at which they breed per time unit (k), and the amount of time that has passed. Similarly, you can have a situation where the population is decreasing. The book gives the example of a substance losing mass over time. This is basically the same as rabbit breeding, except the rate is negative. So, we have an equation of the form: m(t) = C*e^(kt). In this case, we know k must be negative, since m(t) has to decrease over time. You'll notice that for negative k, as t increases, the right side of the equation becomes a smaller and smaller fraction of its initial value, approaching closer and closer to 0. Once again, this makes a lot of sense. In the case of matter decaying, it also makes sense that it takes longer and longer to lose the same amount of mass. Why? Radioactive decay involves crazy quantum shit. Suffice it to say that it's a probabilistic process. So, like with lottery tickets where you're more likely to win if you have more tickets, with matter you're more likely to get radioactive decay if you have more atoms.
With that in mind, the book actually shows you how to calculate what's called a "half-life." A half-life is simply how much time it takes for half the matter to decay away. The cute thing is that you can actually solve for this. Since you know the initial mass, you can easily calculate half the mass. Then the equation simply becomes: (1/2)*m(0) = m(0)*e^(kt). Assuming you know the rate of decay, that leaves only one value to discover – t. Using natural logs, you can easily solve. Pretty neat, right? I remember being in chemistry class and wondering what the hell ln(2) had to do with figuring out how long it took for half of a chemical to react. The above equation explains! The book goes on to give several more examples, but I think by now you've probably got the basic idea. The simple exponential equation is incredibly useful and applies in lots of places. Of course, you have to remember that it's a simplification many times. For example, our bunny population cannot grow infinitely in the real world because there would have to be infinite food. So, in a more realistic model, you'd have to include some modification. Or, in the case of banking, you might have to add some sort of cap, since so-called "Methuselah Trusts" are illegal in some places. However, generally speaking, the Ce^kt format appears over and over in many fields. It's basic shit, and you should have it as part of your working knowledge. When you see Ce^kt, you shouldn't think "okay, the C is some constant, and then e is the base and k…" You should see it and think "ah, something's growing exponentially!" Next stop: Related Rates
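As a quick check of the algebra (my own sketch, not an example from the book), here is a short Python snippet that evaluates C*e^(kt) and solves (1/2)*m(0) = m(0)*e^(kt) for the half-life, which works out to t = ln(2)/|k| when k is negative:

    import math

    def exponential(C, k, t):
        """Value of C * e^(k*t): growth for k > 0, decay for k < 0."""
        return C * math.exp(k * t)

    def half_life(k):
        """Time for half the material to decay, assuming k < 0.

        Solving (1/2)*C = C*e^(k*t) gives t = ln(2)/|k|.
        """
        return math.log(2) / abs(k)

    # Example: 100 g of a substance with decay rate k = -0.1 per year
    print(exponential(100, -0.1, 5))   # mass left after 5 years, about 60.65 g
    print(half_life(-0.1))             # about 6.93 years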
http://www.theweinerworks.com/?p=1199
13
16
Spacecraft navigators do the same thing that ocean-going ship navigators do -- except that the ocean of space is much, much bigger and more dangerous. How do the navigators know where the spacecraft is once it leaves Earth? The path of the spacecraft -- the trajectory -- is planned well before launch. But once the spacecraft leaves Earth, knowing where it is, and predicting where it will be at a certain time, is a constant challenge. We can't see the spacecraft, even with a telescope, so we use an indirect method to find it. If we know the velocity -- the speed and direction -- of the spacecraft, we can figure out where it is. As the spacecraft travels away from Earth, the radio signals it sends to us appear to change frequency. By monitoring this change, we can calculate speed and direction -- and thus, the exact location of the spacecraft. This shift in frequency, called the Doppler shift after the physicist who first described it, is caused by the motion of the spacecraft in relation to Earth. You hear the Doppler effect in a police siren when it changes to a higher pitch as it approaches you, then lowers after it passes by. To track the spacecraft, NASA spaced the antennas of the Deep Space Network roughly equally around Earth. The huge antennas are located in California, Spain, and Australia. As Earth turns, an antenna at one location hands over the spacecraft signal to the next antenna. Locations of the Deep Space Network antennas Next: Commanding the Spacecraft
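The relation behind this technique is simple at speeds far below the speed of light: the fractional frequency shift equals the line-of-sight speed divided by the speed of light. Here is a small Python sketch with made-up illustrative values (not actual Deep Space Network data):

    C_LIGHT = 299_792_458.0  # speed of light, m/s

    def radial_velocity(transmitted_hz, received_hz):
        """Line-of-sight velocity from the Doppler shift (non-relativistic).

        A received frequency lower than the transmitted one means the
        spacecraft is moving away from Earth (positive velocity here).
        """
        shift = transmitted_hz - received_hz
        return C_LIGHT * shift / transmitted_hz

    # Example: an X-band signal near 8.4 GHz received 560 Hz low
    print(radial_velocity(8.4e9, 8.4e9 - 560.0))  # about 20 m/s away from Earth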
http://solarsystem.nasa.gov/galileo/mission/missionops-guiding.cfm
13
48
Titration is a common laboratory method of quantitative chemical analysis that is used to determine the unknown concentration of a known reactant. Because volume measurements play a key role in titration, it is also known as volumetric analysis. A reagent, called the titrant, of known concentration (a standard solution) and volume is used to react with a solution of the analyte, whose concentration is not known. Using a calibrated burette to add the titrant, it is possible to determine the exact amount that has been consumed when the endpoint is reached. The endpoint is the point at which the titration is complete, as determined by an indicator (see below). This is ideally the same volume as the equivalence point - the volume of added titrant at which the number of moles of titrant is equal to the number of moles of analyte, or some multiple thereof (as in polyprotic acids). In the classic strong acid-strong base titration, the endpoint of a titration is the point at which the pH of the reactant is just about equal to 7, and often when the solution permanently changes color due to an indicator. There are however many different types of titrations (see below). Many methods can be used to indicate the endpoint of a reaction; titrations often use visual indicators (the reactant mixture changes colour). In simple acid-base titrations a pH indicator may be used, such as phenolphthalein, which becomes pink when a certain pH (about 8.2) is reached or exceeded. Another example is methyl orange, which is red in acids and yellow in alkali solutions. Not every titration requires an indicator. In some cases, either the reactants or the products are strongly coloured and can serve as the "indicator". For example, an oxidation-reduction titration using potassium permanganate (pink/purple) as the titrant does not require an indicator. When the titrant is reduced, it turns colourless. After the equivalence point, there is excess titrant present. The equivalence point is identified from the first faint pink color that persists in the solution being titrated. Due to the logarithmic nature of the pH curve, the transitions are, in general, extremely sharp; and, thus, a single drop of titrant just before the endpoint can change the pH significantly — leading to an immediate colour change in the indicator. There is a slight difference between the change in indicator color and the actual equivalence point of the titration. This error is referred to as an indicator error, and it is indeterminate. History and etymology The word "titration" comes from the Latin word titalus , meaning inscription or title. The French word titre , also from this origin, means rank. Titration, by definition, is the determination of rank or concentration of a solution with respect to water with a pH of 7 (which is the pH of pure water). The origins of volumetric analysis are in late-18th-century French chemistry. Francois Antoine Henri Descroizilles developed the first burette (which looked more like a graduated cylinder) in 1791. Joseph Louis Gay-Lussac developed an improved version of the burette that included a side arm, and coined the terms "pipette" and "burette" in an 1824 paper on the standardization of indigo solutions. 
A major breakthrough in the methodology and popularization of volumetric analysis was due to Karl Friedrich Mohr, who redesigned the burette by placing a clamp and a tip at the bottom, and wrote the first textbook on the topic, Lehrbuch der chemisch-analytischen Titrirmethode (Textbook of analytical-chemical titration methods), published in 1855. Preparing a sample for titration In a titration, both titrant and analyte are required to be in a liquid (solution) form. If the sample is not a liquid or solution, it must be dissolved. If the analyte is very concentrated in the sample, it might be useful to dilute the sample. Although the vast majority of titrations are carried out in aqueous solution, other solvents such as glacial acetic acid or ethanol (in petrochemistry) are used for special purposes. A measured amount of the sample can be placed in the flask and then be dissolved or diluted. The mathematical result of the titration can be calculated directly with the measured amount. Sometimes the sample is dissolved or diluted beforehand, and a measured amount of the solution is used for titration. In this case the dissolving or diluting must be done accurately with a known coefficient, because the mathematical result of the titration must be multiplied with this factor. Many titrations require buffering to maintain a certain pH for the reaction. Therefore, buffer solutions are added to the reactant solution in the flask. Some titrations require "masking" of a certain ion. This can be necessary when two reactants in the sample would react with the titrant and only one of them must be analysed, or when the reaction would be disturbed or inhibited by this ion. In this case another solution is added to the sample, which "masks" the unwanted ion (for instance by a weak binding with it or even forming a solid insoluble substance with it). Some redox reactions may require heating the solution with the sample and titration while the solution is still hot (to increase the reaction rate). A typical titration begins with a beaker or Erlenmeyer flask containing a precise volume of the reactant and a small amount of indicator, placed underneath a burette containing the reagent. By controlling the amount of reagent added to the reactant, it is possible to detect the point at which the indicator changes color. As long as the indicator has been chosen correctly, this should also be the point where the reactant and reagent neutralise each other, and, by reading the scale on the burette, the volume of reagent can be measured. As the concentration of the reagent is known, the number of moles of reagent can be calculated (since moles = concentration × volume). Then, from the chemical equation involving the two substances, the number of moles present in the reactant can be found. Finally, by dividing the number of moles of reactant by its volume, the concentration is calculated (a short worked sketch of this calculation appears after the list of titration types below). Types of titrations Titrations can be classified by the type of reaction. Different types of titration reaction include: - Acid-base titrations are based on the neutralization reaction between the analyte and an acidic or basic titrant. These most commonly use a pH indicator, a pH meter, or a conductance meter to determine the endpoint. - Redox titrations are based on an oxidation-reduction reaction between the analyte and titrant. These most commonly use a potentiometer or a redox indicator to determine the endpoint. Frequently either the reactants or the titrant have a colour intense enough that an additional indicator is not needed.
- Complexometric titrations are based on the formation of a complex between the analyte and the titrant. The chelating agent EDTA is very commonly used to titrate metal ions in solution. These titrations generally require specialized indicators that form weaker complexes with the analyte. A common example is Eriochrome Black T for the titration of calcium and magnesium ions. - A form of titration can also be used to determine the concentration of a virus or bacterium. The original sample is diluted (in some fixed ratio, such as 1:1, 1:2, 1:4, 1:8, etc.) until the last dilution does not give a positive test for the presence of the virus. This value, the titre, may be based on TCID50, EID50, ELD50, LD50 or pfu. This procedure is more commonly known as an assay. - A zeta potential titration characterizes heterogeneous systems, such as colloids. Zeta potential plays role of indicator. One of the purposes is determination of iso-electric point when surface charge becomes 0. This can be achieved by changing pH or adding surfactant. Another purpose is determination of the optimum dose of the chemical for flocculation or stabilization. Measuring the endpoint of a titration Different methods to determine the endpoint include: - pH indicator: This is a substance that changes colour in response to a chemical change. An acid-base indicator (e.g., phenolphthalein) changes colour depending on the pH. Redox indicators are also frequently used. A drop of indicator solution is added to the titration at the start; when the colour changes the endpoint has been reached. - A potentiometer can also be used. This is an instrument that measures the electrode potential of the solution. These are used for titrations based on a redox reaction; the potential of the working electrode will suddenly change as the endpoint is reached. - pH meter: This is a potentiometer that uses an electrode whose potential depends on the amount of H+ ion present in the solution. (This is an example of an ion-selective electrode. This allows the pH of the solution to be measured throughout the titration. At the endpoint, there will be a sudden change in the measured pH. It can be more accurate than the indicator method, and is very easily automated. - Conductance: The conductivity of a solution depends on the ions that are present in it. During many titrations, the conductivity changes significantly. (For instance, during an acid-base titration, the H+ and OH- ions react to form neutral H2O. This changes the conductivity of the solution.) The total conductance of the solution depends also on the other ions present in the solution (such as counter ions). Not all ions contribute equally to the conductivity; this also depends on the mobility of each ion and on the total concentration of ions (ionic strength). Thus, predicting the change in conductivity is harder than measuring it. - Colour change: In some reactions, the solution changes colour without any added indicator. This is often seen in redox titrations, for instance, when the different oxidation states of the product and reactant produce different colours. - Precipitation: If the reaction forms a solid, then a precipitate will form during the titration. A classic example is the reaction between Ag+ and Cl- to form the very insoluble salt AgCl. This usually makes it difficult to determine the endpoint precisely. As a result, precipitation titrations often have to be done as "back" titrations (see below). 
- An isothermal titration calorimeter uses the heat produced or consumed by the reaction to determine the endpoint. This is important in biochemical titrations, such as the determination of how substrates bind to enzymes. - Thermometric titrimetry is an extraordinarily versatile technique. This is differentiated from calorimetric titrimetry by the fact that the heat of the reaction (as indicated by temperature rise or fall) is not used to determine the amount of analyte in the sample solution. Instead, the endpoint is determined by the rate of temperature change. - Spectroscopy can be used to measure the absorption of light by the solution during the titration, if the spectrum of the reactant, titrant or product is known. The relative amounts of the product and reactant can be used to determine the endpoint. - Amperometry can be used as a detection technique (amperometric titration). The current due to the oxidation or reduction of either the reactants or products at a working electrode will depend on the concentration of that species in solution. The endpoint can then be detected as a change in the current. This method is most useful when the excess titrant can be reduced, as in the titration of halides with Ag+. (This is handy also in that it ignores precipitates.) The term back titration is used when a titration is done "backwards": instead of titrating the original analyte, one adds a known excess of a standard reagent to the solution, then titrates the excess. A back titration is useful if the endpoint of the reverse titration is easier to identify than the endpoint of the normal titration. They are also useful if the reaction between the analyte and the titrant is very slow. - As applied to biodiesel, titration is the act of determining the acidity of a sample of WVO by the dropwise addition of a known base to the sample while testing with pH paper for the desired neutral pH=7 reading. By knowing how much base neutralizes an amount of WVO, we discern how much base to add to the entire batch. - Titrations in the petrochemical or food industry to define oils, fats or biodiesel and similar substances. An example procedure for all three can be found here:
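Returning to the calculation sketched before the list of titration types: for a simple 1:1 acid-base titration (for example, HCl neutralized by a NaOH standard solution) the arithmetic is easy to script. This is a generic illustration of my own, not a procedure from the article; the mole ratio would need adjusting for other stoichiometries.

    def analyte_concentration(titrant_molarity, titrant_volume_ml,
                              analyte_volume_ml, mole_ratio=1.0):
        """Concentration of the analyte from titration data.

        moles of titrant  = molarity * volume (in litres)
        moles of analyte  = moles of titrant * mole_ratio
        concentration     = moles of analyte / analyte volume (in litres)
        """
        moles_titrant = titrant_molarity * (titrant_volume_ml / 1000.0)
        moles_analyte = moles_titrant * mole_ratio
        return moles_analyte / (analyte_volume_ml / 1000.0)

    # Example: 24.0 mL of 0.100 M NaOH neutralizes 20.0 mL of HCl (1:1 reaction)
    print(analyte_concentration(0.100, 24.0, 20.0))  # 0.12 M HCl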
http://www.reference.com/browse/titration
13
27
On the grounds of the space center in Jackson (Michigan) is a scale model of the solar system. You can begin at a large sphere that represents the sun, mounted on a post stuck in the ground. Nearby is a dot that is the planet Mercury. Then you walk, eventually coming to Earth. Then you walk and walk some more. There is a lot of walking and very small objects. You get some sense of the space between the sun and the planets. Suppose that you wish to build such a model in which the earth is represented by a sphere 4 mm in diameter, the size of the ball E above.

Step 1. Establish a scale. Make a ratio of the actual diameter of the Earth to the diameter of the model of the Earth (be sure units are the same in the numerator and denominator before canceling them): SCALE = [1.2 x 10^7 m] / [4 x 0.001 m] = 3 x 10^9. The scale of this model is 3 x 10^9 : 1; the scale factor is 3 x 10^9. Notice that the scale factor is a number without any units. It is a ratio between two lengths, both in meters. Now apply your result.

Question (A): How large will the sun be in your model? The diameter of the sun in your model is the actual diameter of the sun divided by the scale factor. Solution: 1.5 x 10^9 m / 3 x 10^9 = 0.5 m. In your JOB 1 model, the sun will be a balloon half a meter in diameter, a 20 inch balloon.

Question (B): How far will the earth be from the sun in your model? Express the answer in meters, and in lengths of a football field (a football field is 100 yards long, about 100 m). (Hint: use the same scale factor as long as you are with Model #1.) The distance from the sun to the Earth in your model is the actual distance divided by the scale factor. Solution: 1.5 x 10^11 m / 3 x 10^9 = 50 m. In your JOB 1 model, the sun-to-earth distance is 50 meters, or about one half the length of a football field.

Question (C): What is the diameter of the solar system in your model? Half of this is the distance from the sun to the farthest planet, Neptune. (Pluto weaves in and out of Neptune's orbit.) Express in meters and in miles. The diameter of the solar system in your model is the actual diameter of the solar system divided by the scale factor. Solution: 9.6 x 10^12 m / 3 x 10^9 = 3,200 meters. In your JOB 1 model, the diameter of the solar system is 3,200 m, or 2 miles. (The distance from the sun to Neptune in your model is a one mile walk.)

Question (D): What is the diameter of that farthest planet, Neptune, in your model? (The actual diameter of the planet Neptune is 4.5 x 10^7 meters.) Express the diameter of your model of Neptune in meters. Which ball (A-H) is closest to the size of your model of Neptune? The diameter of Neptune in your model is the actual diameter of Neptune divided by the scale factor. Solution: 4.5 x 10^7 m / 3 x 10^9 = 0.015 m, or 1.5 cm. In your JOB 1 model, the diameter of the planet Neptune is 1.5 cm. (The size of Neptune in your model is about the size of ball G.)

Permission is hereby granted to reproduce the contents of this section for use in teaching, provided no charge or fee is accepted and provided credit is given to Cavendish Science Organization.
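The same arithmetic is easy to automate. Here is a small Python sketch (using the worksheet's rounded figures) that derives the scale factor and then shrinks any real length down to the model:

    # Rounded real-world sizes used by the worksheet, in meters
    EARTH_DIAMETER   = 1.2e7
    SUN_DIAMETER     = 1.5e9
    SUN_EARTH_DIST   = 1.5e11
    SOLAR_SYS_DIAM   = 9.6e12
    NEPTUNE_DIAMETER = 4.5e7

    MODEL_EARTH = 4e-3  # a 4 mm ball represents the Earth

    scale = EARTH_DIAMETER / MODEL_EARTH   # 3e9, a pure ratio with no units

    def in_model(real_length_m):
        """Shrink a real length (meters) down to the scale model."""
        return real_length_m / scale

    print(scale)                        # 3.0e9
    print(in_model(SUN_DIAMETER))       # 0.5 m balloon for the Sun
    print(in_model(SUN_EARTH_DIST))     # 50 m from Sun to Earth
    print(in_model(SOLAR_SYS_DIAM))     # 3200 m across the whole model
    print(in_model(NEPTUNE_DIAMETER))   # 0.015 m, i.e. 1.5 cm for Neptune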
http://www.cavendishscience.org/phys/howfar/job1.htm
13
25
The last chapter explained how to draw graphs from equations. This chapter explains how to write equations from graphs of lines. There are several different forms a linear equation can take. Slope-intercept form, point-slope form, and general linear form are the three most common forms. The first section focuses on slope-intercept form: it explains how to write an equation of a line in slope-intercept form, given a graph of that line. The second section explains how to write an equation of a line in point-slope form, and the third section explains how to write an equation of a line in general linear form. The fourth section discusses other, perhaps less common, forms of linear equations. In particular, it shows how to write equations of horizontal and vertical lines. The final section explains how to convert among forms of linear equations. Different forms have different uses, and the given form of an equation might not always be the most useful. Thus, it is important to know how to convert an equation to a form that will serve the intended purpose. Learning how to write equations from graphs is the next logical step after learning how to create graphs from equations. After mastering the material in this chapter, you will be able to switch back and forth between the equation of a line and the graph of that line. Writing equations from graphs is an especially useful tool for scientists. Scientists often gather data from experiments, graph it, and search for an equation to describe the trend they see.
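As a small illustration of the slope-intercept idea (a generic sketch, not an example from SparkNotes), the following Python function recovers y = mx + b from two points read off the graph of a non-vertical line:

    def slope_intercept(p1, p2):
        """Return (m, b) for the line y = m*x + b through two points.

        Assumes the line is not vertical (the x-coordinates differ).
        """
        (x1, y1), (x2, y2) = p1, p2
        m = (y2 - y1) / (x2 - x1)   # slope: rise over run
        b = y1 - m * x1             # intercept: solve y1 = m*x1 + b
        return m, b

    # Example: a line passing through (1, 3) and (3, 7)
    m, b = slope_intercept((1, 3), (3, 7))
    print(f"y = {m}x + {b}")  # y = 2.0x + 1.0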
http://www.sparknotes.com/math/algebra1/writingequations/summary.html
13
55
Using Critical Points The second sort of critical point is an inflection point. This is a point in the graph where on one side the slope is increasing, and on the other side the slope is decreasing. This can happen whether the curve is increasing or decreasing. Look in the following graph at the first inflection point. Before it, the slope started from flat and increased to a slant, and after the point, the slope decreases back to flat. The inflection point is the exact point at which this transition occurs. We refer to this as a change in concavity. Before the point, it is concave up, and after, it is concave down. After the second inflection point, it is concave up again. You should be able to realize that if the graph is continuous and smooth, between every min and max there must be an inflection point. The converse is not necessarily true. I can now explain mathematically what these points are. Critical points are the points at which y' or y'' is equal to 0, simply put. The local maximums or minimums can be found by setting the first derivative to 0. This works because when the slope is 0, the graph is flat. If the graph is flat, it is almost always because it was going down and now it's going up, making a minimum, or the opposite. If the object was moving upward, it is switching direction, to go downward, and for a split second the velocity is 0. This should be obvious from the graph below. At the local max and local min points, the derivative will be 0. So make it equal to 0, and see what x-values emerge. Inflection points are found by setting the second derivative to 0. If the first derivative measures the rate of change of y, then the second derivative measures the rate of change in y'. This measures the rate at which the slope is changing. If the second derivative is positive, the slope is changing at a faster and faster pace. If there is a point at which the second derivative becomes 0, and then negative, the angle of the slope will stop becoming steeper, and it will then become less and less steep, possibly until the curve is flat, and further: the slope can decrease to the point where it is negative, and the curve will be decreasing. When y'' is positive, the graph is concave up. When it is negative, the graph is concave down. If there is a point where it is 0, then that means at that point the graph is switching from concave up to concave down, or vice versa. Look back at the graph above. Velocity and Acceleration The function f(x) can refer to the displacement of a particle or object. Displacement is the distance traveled from the starting point. The x-axis is time, and the y-axis is the distance moved. It can be thought of as a ball thrown directly upward, and you are plotting its position against time. Of course, that would have a particular curve. Now the derivative of this kind of function would be the velocity, because it would be plotting the rate of change against time. Remember that the derivative of a function is an equation for the slope at any point that you plug x in? Well, in an equation of displacement, the slope is the velocity, or the speed at which the displacement changes over time. So y' is the velocity. That's what velocity is! Speed. Speed is how quickly something moves, or in other words, how quickly displacement changes. Using the same logic, you can see that y'' is the acceleration, because it is the derivative of the velocity, or the rate of change of velocity over time.
If the velocity is increasing then the acceleration is positive. I just explained how that sort of thing works by inflection points above. Here's an example: y = x^2 - 4x. I will take the derivative and second derivative: y' = 2x - 4 and y'' = 2. In this example, there is a constant acceleration of 2 for all values of x. This makes sense logically. As you can see, the graph begins with a negative velocity (displacement is decreasing), but it begins to slow its backward movement, which is reverse deceleration, or acceleration. This constant acceleration eventually brings velocities positive. We would like to find the local maximums and/or minimums. This would be a local minimum, and is seen in our example. I will now set y' to 0: y' = 2x - 4 = 0, so 2x = 4 and x = 2. So when x is equal to 2 the velocity is 0 and the object has reached its minimum value. How did I know it was a minimum value and not a maximum value? Because at the point x = 2 the acceleration is positive. In fact, in this entire equation the acceleration is positive; it always equals 2. When the acceleration is positive, it means the velocity is going upward, which means that it must have been negative before and is positive now. That means it is a minimum. The term for this is concave up. When a graph is concave up, it means the slant is slowly getting higher, or less negative. When the acceleration is negative, it is concave down, because that will be the shape of the graph at that point. There will be a maximum, and the slant of the graph is going to be on a downward trend. I'm sorry to repeat myself here. I don't want to insult anybody's intelligence. (Note that in rare cases there will be an inflection point on the same spot where the first derivative is 0, and in that case, the point is not a min or max, but the graph slows down at that point to a slant of 0, and then continues in the same direction it was going before.) Drawing a graph using critical points Firstly, what are critical points? Allow me to repeat a bit. These are all points where y' or y'' is equal to 0. When y' is equal to 0 you have local maximums and minimums, as explained. When y'' equals 0, you have inflection points, or changes in concavity. While y'' is positive, the graph is concave up, and while it's negative, the graph is concave down. So in between, when it is 0, the graph is switching from concave up to down, or the other way around. The way that you figure out what to do with an equation is by using the following chart: (if you end up needing another column or two, do not worry). The top line refers to what x is. The next three rows are y, y' and y''. You always want to know what y is at -∞ and ∞. For this you use the limit of the equation as x goes to both, respectively. You also want to set y' and y'' to zero and fill in the values of x at which this occurs. Here is an equation: y = x^3 - 6x^2 + 9x, so y' = 3x^2 - 12x + 9 and y'' = 6x - 12. I will set y' to 0 and plug the result into the chart, and set y'' to 0 and plug it in: 3x^2 - 12x + 9 = 0, so 3(x^2 - 4x + 3) = 0, (x - 3)(x - 1) = 0, and x = 3 or 1. Also 6x - 12 = 0, so 6x = 12 and x = 2. Now I will plug in all values of x at which there are critical points. Next I will find all values of y for every critical point so I can know the full (x, y) coordinate at which these occur. I will also find -∞ and ∞.
Lim x→-∞: y = x^3 - 6x^2 + 9x behaves like x^3, so the limit is -∞. Lim x→∞: y = x^3 - 6x^2 + 9x behaves like x^3, so the limit is ∞. y = x^3 - 6x^2 + 9x = 1 - 6 + 9 = 4 (when x = 1); y = 8 - 24 + 18 = 2 (when x = 2); y = 27 - 54 + 27 = 0 (when x = 3). You can already see from the chart that y is rising from negative infinity up to 4, goes down to 0 when x is 3, and then rises to infinity. In between, it switched concavity at the point (2, 2). It is clear that at the point (1, 4) the graph is concave down, since that is a maximum, but I'll figure it out anyway by checking y'' at that point. I'll also check concavity at (3, 0) to make sure that's a minimum. y'' = 6x - 12 = 6 - 12 = -6 (when x is 1); y'' = 6x - 12 = 18 - 12 = 6 (when x is 3). Now you have every part of the graph you need to fill in to solve the graph. Here is a picture of the graph, as you should draw it. Once again, excuse the sloppiness. :) Incidentally, once you know the (x, y) coordinates of all the critical points, and you know the direction the graph goes in toward negative infinity and infinity, you can immediately figure out what the graph looks like without having to check concave up/down, etc. Plot the 3 points, draw a line coming in from (-∞, -∞), and draw a line leaving toward (∞, ∞). In the center area where you have the points, it obviously climbs to the first point, comes through the second, and loops back up at the third. There cannot be any extra squiggles in the graph, because if there were, each squiggle would have another set of max, min, and inflection points. So draw the simplest possible curve for it, and that will be correct. There are, incidentally, much more difficult curves possible. I will try one now. y = 1/x - 4x^2, so y' = -1/x^2 - 8x and y'' = 2/x^3 - 8. Set y' to 0: -1/x^2 - 8x = 0, so -1/x^2 - 8x^3/x^2 = 0 and -(8x^3 + 1)/x^2 = 0, which gives 8x^3 + 1 = 0 and x = -1/2. Set y'' to 0: 2/x^3 - 8 = 0, so 2/x^3 = 8, x^3 = 2/8 = 1/4, and x = (1/4)^(1/3), roughly 0.63. Lim x→-∞: y = 1/x - 4x^2 = 1/(-∞) - 4(-∞)^2 = 0 - ∞ = -∞. y = 1/(-1/2) - 4(-1/2)^2 = -2 - 1 = -3 (when x = -1/2). This function demonstrates the shortcomings of the chart method I use to solve these problems. Everything looks great, but something's missing. What happens when the graph is near 0? You should worry about domain in any of these problems. The domain of x refers to all the possible values it can have. In this case, x cannot be 0. It is not in the domain. If you try to put 0 in the original equation, you end up dividing by 0. We will therefore have to make special consideration for this. We must use some method to find out what happens in the 0 region. For this problem, I have two possible ways. The first way requires no calculation. At small numbers, the second term will be very small, and of little influence on the value of the equation. The first term will dominate. So I think to myself, what does the 1/x graph look like? The answer is the familiar two-branched hyperbola of 1/x. So I know that near 0, our equation will look like this: it will go down to negative infinity on the left of 0, and come down from positive infinity on the right. The second way involves not realizing that. Imagine putting a tiny number into 1/x - 4x^2. The first term will get large, as you will be dividing by a tiny number, and the second term will near 0. The equation overall will be very big. Then put in a tiny negative number. The same thing will happen, except the first term will be very large and negative. I can assume that the smaller the number I put in, the bigger it will get. This is a vertical asymptote. (For more on this, see Limits.) Whichever you try, we now know 4 lines, and two points. This turns out to be an unusual looking graph. It looks like abstract art. Let me fill in the rest:
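If you want to check this kind of work by machine, the symbolic steps above are easy to reproduce. Here is a short sketch using the SymPy library (my own example, not part of the original tutorial), applied to y = x^3 - 6x^2 + 9x:

    import sympy as sp

    x = sp.symbols('x')
    y = x**3 - 6*x**2 + 9*x

    y1 = sp.diff(y, x)        # first derivative: 3x^2 - 12x + 9
    y2 = sp.diff(y, x, 2)     # second derivative: 6x - 12

    crit = sp.solve(sp.Eq(y1, 0), x)        # x = 1 and x = 3
    inflect = sp.solve(sp.Eq(y2, 0), x)     # x = 2

    for c in crit:
        # Second-derivative test: negative -> local max, positive -> local min
        concavity = y2.subs(x, c)
        kind = "max" if concavity < 0 else "min"
        print(f"critical point at ({c}, {y.subs(x, c)}): local {kind}")

    print("inflection point at", [(i, y.subs(x, i)) for i in inflect])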
http://www.qcalculus.com/cal06.htm
13
11
M31 has played a pivotal historical role in astronomy. Early observers saw the soft, foggy patch of glowing light as just another spiral nebula but weren't yet equipped with the knowledge to appreciate its nature. The true nature of M31 began to became clear in 1923. In that year Edwin Hubble, using the just completed 100 inch Hooker telescope at the Mount Wilson observatory, made his monumental discovery of Cepheid Variable stars in M31 and in one stroke forever changed the astronomical paradigm of the universe as we know it. Appropriately interpreting the cepheid data, Hubble was the first to appreciate the faint nebula in Andromeda as an "island universe", an immense galaxy in its own right, similar to our Milky Way. Hubble's work opened the door to the modern interpretation of the universe which we now know consists of countless galaxies all receding from each other. M31 has the distinction of being the nearest of all spirals at a distance of 2.5 million light years. Its disk, tilted toward earth by some 13 degrees, exposes the grandeur of its spiral structure and star systems to telescopic exploration. M31, along with its near twin, the Milky Way, represent the two dominant giant galaxies of our Local Group which consists of some 40 members. Contrary to most galaxies which are receding away from each other, M31 and the Milky Way are actually moving toward each other and a close encounter or even a full collision may be in store for both galaxies in several billion years. Studies of globular clusters in M31 have revealed at least 4 different subpopulations including some much younger than those that exist in the Milky Way. These findings point to the strong possibility that the galaxy we know as M31 may have been formed by the cannibalization of numerous smaller galactic neighbors. As far back as 1974 astronomers noted a curious asymmetry to the nucleus of M31. It wasn't until 1995 that the Hubble Space Telescope with its great precision, was able to resolve the light of the nucleus into two separate structures. The two light sources of the double nucleus are separated by a miniscule 0.49 arcseconds (6 light years), a distance difficult or impossible to resolve with earth based telescopes. HST images resolved two brightness peaks which were named P1 and P2. Further investigation showed that the optically dimmer P2 was very close to the true center of the galaxy. The brighter P1 is slightly off center creating the illusion of an asymmetric nucleus in ground based images. Recent HST observations have identified a rotating disk of more than 400 blue stars orbiting the true nucleus. The disk of blue stars formed some 200 million years ago in a sudden starburst. The disk is only one light year in diameter and its stars have a remarkable orbital velocity of 2.2 million miles per hour, indicative of a central structure with truly enormous mass. The stars remarkable speed can only be explained by a central black hole having a total mass of some 140 million suns. The brighter light source P1 is artifactual and has its basis in the eccentricity of the rotating circumnuclear stellar disk. It seems that the appearance of the two extremes of the ellipsoidal orbit creates an illusion of a second bright region towards our line of site. Several other galaxies are known which possess a true double nucleus however in these cases the second nucleus was most likely acquired in a previous merger event. It is now widely believed that most, if not all, galaxies have supermassive black holes in their centers. 
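As a rough sanity check on that figure (my own back-of-the-envelope estimate, not part of the article), a simple circular-orbit calculation M ≈ v²r/G with the quoted stellar speed and a disk radius of about half a light year gives a lower bound of a few tens of millions of solar masses; the published value of roughly 140 million comes from detailed dynamical modelling of the nucleus.

    # Order-of-magnitude estimate of the central mass from the quoted numbers.
    G          = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    SOLAR_MASS = 1.989e30           # kg
    LIGHT_YEAR = 9.461e15           # m

    v = 2.2e6 * 0.44704             # 2.2 million mph converted to m/s
    r = 0.5 * LIGHT_YEAR            # half of the 1 light-year disk diameter

    mass_kg = v**2 * r / G          # Keplerian circular-orbit mass estimate
    print(mass_kg / SOLAR_MASS)     # roughly 3e7 solar masses (a lower bound)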
HST image upper right showing the resolved double nucleus.
http://robgendlerastropics.com/M31text.html
13
12
Ever wonder how much soda or beer can fit into a single can? To find this out you can calculate the volume of a cylinder by using either our cylinder volume calculator below or the manual formula provided. These calculations are provided for a right circular cylinder where the top and bottom surfaces are parallel. It represents what is most commonly referred to as a "cylinder". Method #1 – Cylinder Volume Calculator Method #2 – Volume of a Cylinder Formula A cylinder is one of the most basic curvilinear geometric shapes. The surface of a cylinder is formed by the points at a fixed distance from a given line segment, the axis of the cylinder. The volume of a cylinder can be calculated by using the formula: V = h * π * r^2. First, determine the radius of the cylinder. The radius is the straight line from the center of the cylinder to the circumference (outer part) of the cylinder. In essence, you are measuring the radius of a circle from its center point to any point on the outer rim. Once you have the radius of the cylinder, square it (radius * radius); multiplied by π, this gives the area of the base circle. Now you should have the value for the r^2 part of the equation. Next, you'll need to determine the height of the cylinder. The height is the distance between the two bases of the cylinder. A good rule when measuring the height is to measure straight along the axis between the two bases, looking at the cylinder from the side rather than from the top, where it just looks like a circle. Now that you have all the items in the formula, you just need to multiply them in order to get the volume of the cylinder. Knowing that pi is approximately 3.14159, just multiply (height * 3.14159 * radius squared) to get the volume of your cylinder. You can get a more exact value of pi by using a calculator. Now you should have the volume of your cylinder!
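Here is the same calculation as a short Python function (a generic sketch, independent of the calculator embedded on the page):

    import math

    def cylinder_volume(radius, height):
        """Volume of a right circular cylinder: V = pi * r^2 * h."""
        return math.pi * radius**2 * height

    # Example: a soda can roughly 3.1 cm in radius and 12.2 cm tall
    print(cylinder_volume(3.1, 12.2))  # about 368 cubic centimeters, roughly a 12 oz can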
http://volumes.co/calculate-cylinder-volume/
13
11
|Anti-Bullying Poetry Residency| Extending her workshop to a second or third day, poet Laura Boss encourages students to dig deeper and express their thoughts and feelings about the bullying in a second poem. She will discuss the various types of poetry allowing students to experiment with another form. They will learn about adding images, details and aspects of sound in their poems, all while considering the causes and effects of bullying, and what we as individuals, groups and communities can do to stop it. The poems may be put together as a class anthology and students will share their poems in class or, if time permits, in an assembly format with other classes and parents invited. |Arts & Math| Math and Art have many intersections. Artist Mark Stankiewicz has created these 3-10 day residencies to target those lessons in a way that benefit the artist & mathematician within each student. Isometric Drawing - Students create a simple block toy design using isometric drawings, including measurements. Students are shown how to draw an object from three different sides, including the isometric equivalent, representing a three-dimensional object in two-dimensions and vice-versa. Both isometric graph paper and architectural plan paper are utilized. Abilities of ruler use are honed. The students then create a physical version of this toy out of Legos. Tessellation - Artists make tessellations by twisting and turning shapes to find images that fit together like puzzle pieces. M.C. Escher is an artist/mathematician who is a proponent of this style of art. Students will create original geometric tessellations using the mathematical terms like flips, slides, and turns; terms usually used unconsciously by artists to create Tessellations. Mark will discuss the mathematical rule of why some shapes work and others do not. Scale • Proportion • Ratio • Similarity - Students use rulers to design a graph and then set it on an original image. They then make a larger graph with the ruler and transfer the smaller image to a larger graph square by square. This demonstrates scale, similarity, ratio and proportion – as they could enlarge the drawing to the power of 2, 3, 4 or any other number. The art student learns to simplify and enlarge a drawing; and for the math student, it concretizes the ideas of an exponent, and scale, ratio and proportion. Descriptions above are examples – residencies will be customized to the age and ability of the students
http://projectimpact.org/residencydescriptions.php?p=no&id=24&nav_order=26
13
37
The journey to the Moon On the 25th May 1961 President John F Kennedy told Congress: "I believe that this nation should commit itself, before this decade is out, to the goal of landing a man on the Moon and returning him safely to Earth." Earthrise over the lunar surface taken from Apollo 8 Many people have expressed their amazement that not only was the goal of landing a man on the Moon achieved, but that it was achieved in only 8 years, as Kennedy said it should. This is however, ignoring the fact that at the time Kennedy made his statement NASA already had in the pipeline over nine different Moon landing flight plans in a project they had named 'Apollo'. They were already designing a huge Moon booster called 'Nova', that was to generate 40 million pounds of thrust, and were already considering various methods for landing a man on the Moon. At the the time of Kennedy's speech however, NASA were concentrating not so much on landing a man on the moon but on just putting a manned craft around it. Kennedy's speech changed all that. Had NASA not been put under pressure to meet Kennedy's deadline, they would have chosen a far different approach to land a man on the Moon than the one used. It was originally hoped to do it stage by stage using a permanent Earth orbiting station that would make future flights a lot easier, but instead had to settle for a 'one time' system to meet the deadline. With the new system going from launch pad, to orbit, to the Moon and back, using disposable components, it was possible to achieve within the time frame, but it meant each mission was a 'one off' and contributed nothing towards the overall mission plan that could be used by following Moon flights. The mission to land a man on the Moon was not an 8 year period of starting spaceflight from scratch and ending with a Moon landing. Spaceflight began in 1957 with the first satellite placed in orbit and developed from there. FIRST SATELLITE LAUNCH ATTEMPTS |Sputnik 1||4 Oct 1957||USSR||Orbit| |Sputnik 2||3 Nov 1957||USSR||Orbit| |Vanguard||6 Dec 1957||USA||Failed| |Explorer 1||31 Jan 1958||USA||Orbit| |Vanguard||5 Feb 1958||USA||Failed| |Explorer 2||5 Mar 1958||USA||Failed| |Vanguard 1||17 Mar 1958||USA||Orbit| |Explorer 3||26 May 1958||USA||Orbit| |Sputnik||27 April 1958||USSR||Failed| |Vanguard||28 April 1958||USA||Failed| |Sputnik 3||15 May 1958||USSR||Orbit| |Vanguard||27 May 1958||USA||Failed| |Vanguard||26 Jun 1958||USA||Failed| |Explorer 4||26 Jul 1958||USA||Orbit| |Explorer 5||24 Aug 1958||USA||Failed| |Vanguard||26 Sep 1958||USA||Failed| |Beacon||23 Oct 1958||USA||Failed| |Score||18 Dec 1958||USA||Orbit| The launch of the first satellites spurred rocket scientists into action, and with only five satellites safely in orbit the first attempt was made to send a spacecraft to the Moon. FIRST ATTEMPTS FOR UNMANNED ROCKETS TO REACH THE MOON |Pioneer 1A||17 Aug 1958||USA||Failed| |Luna||23 Sep 1958||USSR||Failed| |Pioneer 1B||11 Oct 1958||USA||Failed| |Luna||12 Oct 1958||USSR||Failed| |Pioneer 2||8 Nov 1958||USA||Failed| |Pioneer 3||6 Dec 1958||USA||Failed| |Luna 1||2 Jan 1959||USSR||Missed Moon| |Pioneer 4||3 Mar 1959||USA||Success. Fly-by| |Luna||18 Jun 1959||USSR||Failed| |Luna 2||12 Sep 1959||USSR||Success. Hit Moon| |Pioneer||24 Sept 1959||USA||Failed| |Luna 3||4 Oct 1959||USSR||Success. 
Lunar loop| |Pioneer||26 Nov 1959||USA||Failed| |Luna||12 Apr 1960||USSR||Failed| |Pioneer||25 Sep 1960||USA||Failed| |Pioneer||15 Dec 1960||USA||Failed| So far, not very encouraging, but at least unmanned rockets had reached the Moon. During this period both countries were sending animals into orbit to pave the way for manned flights. NASA was formed on October 1st 1958, and the man in space programme was introduced just six days later, almost three years before Kennedy's pledge to land a man on the Moon. The program was renamed "Project Mercury" on Nov. 26, 1958, just prior to the commencement of the astronaut candidate selection process. NASA selected seven pilots to train for flights in the one-man capsule called Mercury. It was a bell-shaped capsule that could be controlled in space by its pilot, maneuvering in three axis called pitch, roll and yaw. The pilot could take full manual control or just monitor automatic systems. He had the ability to override systems and troubleshoot problems. The Mercury had an ablative heatshield on its blunt end which would take the brunt of the intense heat on its high speed re-entry into the Earth's atmosphere, carefully controlled at a specific angle. A parachute would enable the craft to splash down in the sea. The first manned flights would involve suborbital "up and down" rides launched on a Redstone rocket and would be followed by orbital flights on the Atlas, America's first ICBM. A major American milestone was reached with a Redstone boosted suborbital flight of the chimpanzee Sam, in January 1961. This was soon put into the shade however by the USSR three months later with the first man in orbit, Yuri Gagarin. The American response was their first manned spaceflight, using a Mercury capsule, Freedom 7, a brief 15 minute suborbital flight on May 5th 1961. No match for the Russian Yuri Gagarin's trip into orbit. The Soviet premier Nikita Khrushchev was ecstatic and milked all the propaganda he could from the flight. This did not go unnoticed by America's newly elected President and 20 days after Alan Shepherd's space hop, Kennedy responded to the Soviet lead by making his pledge to land a man on the Moon before the decade was out. The race was on. THE FIRST MANNED SPACEFLIGHTS 1961-63 |1||Vostok||Yuri Gagarin||USSR||Orbital 1||1 hr 48m| |2 *||Freedom 7||Alan Shepherd||USA||Suborbital||15mins| |3||Libertybell 7||Gus Grissom||USA||Suborbital||15mins| |4||Vostok 2||Gherman Titov||USSR||Orbital 17||1d 1 hr 18m| |5||Friendship 7||John Glenn||USA||Orbital 3||4h 55m| |6||Aurora 7||Scott Carpenter||USA||Orbital 3||4h 56m| |7||Vostok 3||A. Nikolyev||USSR||Orbital 64||3d 22h| |8||Vostok 4||O. Popovich||USSR||Orbital 48||2d 23h| |9||Sigma 7||Wally Schirra||USA||Orbital 6||9h 13m| |10||Faith 7||Gordon Cooper||USA||Orbital 22||1d 10h| |11||Vostok 5||V. Bykovsky||USSR||Orbital 81||4d 23h| |12||Vostok 6||V.Tereshkova||USSR||Orbital 48||2d 22h| *In 1961 when Kennedy made his now famous pledge to land a man on the Moon, the USA had only 15 minutes piloted spaceflight experience and only 5 minutes of that was in space, but as already mentioned, plans were already well in place. COUNTDOWN: 8 years 7 months remaining. NASA had already studied three options of landing a man on the Moon. 1) The Direct Ascent Method. This would involve the construction of a huge booster, the Nova, that would launch a large spacecraft and send it on a course directly to the Moon. 
The craft would land on the Moon, and after a period of exploration, would take-off and fly directly back to the Earth. This method was ruled out as being too expensive and requiring too high a level of technical sophistication of the Nova. 2) Earth-Orbit Rendezvous. This called for the launching of all the components required for the Moon trip into Earth orbit, where they would rendezvous, be assembled, refueled, and sent to the Moon. This method was dropped due to problems associated with manoeuvring at rendezvous and assembling components, and dangers of refueling. 3) Lunar-Orbit Rendezvous. This proposed sending the entire lunar spacecraft up in one launch. It would head to the Moon, enter into its orbit, and dispatch a small lander to the lunar surface. It was the simpler of the three models, but it was risky. In the plan, three astronauts would be launched in a mother ship, first reaching Earth orbit, then heading for the Moon, to enter an orbit around it. A landing vehicle, manned by two astronauts, leaving one in the mother ship, would touch down on the Moon using its descent engine. After a Moonwalk, the top half of the Lunar Excursion Module would take off, leaving the bottom half on the surface, and rendezvous and dock with the mother ship in lunar orbit. The mother ship would break out of lunar orbit and head back to Earth. Since rendezvous was taking place in lunar, instead of Earth orbit, there was no room for error or the crew could not get home. Also, some of the most difficult course corrections and manoeuvres had to be done after the spacecraft had been committed to a circumlunar flight. This method, though risky, was adopted in 1962 as it was technically the simplest, and the Apollo project was on its way. COUNTDOWN: 8 years remaining. By now the Americans were having great success with their Mercury programme. On the 20th February 1962 John Glenn was launched into orbit by an Atlas rocket in a Mercury capsule called Friendship 7. The first American in orbit, he completed 3 orbits. This was followed by Scott Carpenter with 3 orbits, Wally Schirra with 6, and Gordon Cooper with 22. COUNTDOWN: 7 years 10 months remaining. Rendezvous and docking of spacecraft together was to be a crucial part in the Lunar-Orbit Rendezvous method and the next series of US piloted spacecraft that succeeded Mercury was designed to demonstrate these manoeuvres in Earth orbit and to rehearse as much Moon flight as possible without going there. Testing out spacesuits during spacewalks (EVA's) and flying in Earth orbit longer than it would take to fly to the Moon and back were also on the agenda for this next series of spacecraft which would carry two astronauts. Mercury met all three of its objectives: orbit a manned spacecraft around Earth; learn about man's ability to function in space; and safely recover the man and spacecraft. The project ultimately put six men in space, four of whom made orbital flights around Earth. It proved that men could function normally for up to 34 hours of weightless flight. Over two million people worked on the project for almost five years. By 1963, Project Mercury wrapped up and Project Gemini was two years into its development stages. COUNTDOWN: 6 years remaining. The Gemini spacecraft would become the first to alter its orbit and manoeuvre in space, which was crucial for it to be able to rendezvous and dock. Two unmanned test flights of Gemini were made before Gemini 3 made its first manned flight in March 1965. 
It carried two astronauts and made a modest three orbits, but successfully demonstrated the first manned manoeuvres in orbit as a crucial test for Apollo. The command pilot was Gus Grissom, who had flown the second suborbital Mercury mission in July 1961 and the first person to make two spaceflights. COUNTDOWN: 4 years 9 months remaining. The remarkable Gemini programme then soared ahead with nine more piloted flights ending in 1966, meeting all its goals during one of the most frenetic and exciting periods of the Moon Race. Astronauts Neil Armstrong and David Scott completed the first space docking on 16 March 1966 when Gemini 8 joined up with an unmanned Agena target rocket, simulating the ascent of a lunar module from the Moon docking with a mother ship in lunar orbit. COUNTDOWN: 3 years 9 months remaining Meanwhile in June 1966, when Gemini 9 was flying, the first mock-up of a Saturn V booster was being rolled out at Kennedy Space centre. COUNTDOWN: 3 years 6 month remaining GEMINI MANNED FLIGHT LOG |3||23 Mar 1965||Orbit change| |4||3 Jun 1965||EVA| |5||21 Aug 1965||Record duration| |6||4 Dec 1965||Record duration| |7||15 Dec 1965||1st Rendezvous| |8||16 Mar 1966||1st docking| |9||3 Jun 1966||Record EVA| |10||18 Jul 1966||Re-boost orbit| |11||12 Sep 1966||Record altitude| |12||11 Nov 1966||Record EVA| The remarkably successful Gemini programme ended with the landing of Gemini 12 in November 1966 and NASA felt confident that the major requirements for a Moon mission had been mastered. COUNTDOWN: 3 years 1 month remaining From the period 1962 onwards, both America and the USSR were involved in sending unmanned probes to the Moon with the aim of finding suitable landing sites. Some scientists, for example, thought that a craft would disappear in a vast layer of soft lunar dust. Fortunately this turned out not to be the case. America launched its Ranger series of probes in 1962, and in 1965 Ranger 8 and Ranger 9 returned over 12,000 images before crashing into the surface as designed. Ranger 8 imaged the Sea of Tranquillity, and this was eventually selected as the first landing spot. This was followed in 1966 by Surveyor 1, a soft landing probe, that sent back spectacular images from the surface that were shown live on TV. In 1967 Surveyor 3 soft landed and scooped up surface material. Lunar Orbiter had 5 successful missions and sent back thousands of pictures, almost the entire Moon. This enabled NASA to select up to 20 candidate landing sites for the Apollo programme. The Apollo lunar spacecraft comprised of three major components: the Command Module, the Service Module and the Lunar Module. The Apollo modules were lifted into earth orbit and sent on their way to the Moon by the massive Saturn V rocket, referred to as a 'booster'. The conical Command Module is the crew living quarters where the crew ate and slept and worked on their way to the Moon and back. At the nose of the Command Module was the docking mechanism that allowed it to join up with the Lunar Module that was in fact stowed beneath it for the launch. A vital part of the Command Module was the heatshield which protected the crew from the 1600 C (3000 F) temperatures experienced during the plunge into Earth's atmosphere, which begins at a speed of 25,000 mph. The Service Module supplied electricity and water to the spacecraft and maneuvering power. It provides the 'burn' to slow down the craft to enable it to enter lunar orbit, and also the 'burn' to get home from lunar orbit. 
The Service Module was attached at all times to the Command Module until just before re-entry into Earth's atmosphere, when it was jettisoned. For Command and Service Module diagram see CSM The Lunar Module, or LM, is a two-part, totally self-contained spacecraft that used its own rockets to land on and take off from the surface of the moon, and even served as its own launch pad. The Lunar Module was the compartment in which two of the crew landed on the Moon and took-off again to re-join the Command and Service Module, which remained in lunar orbit with the third crew member. For Lunar Module diagram see Lunar Module Apollo missions were launched atop two different boosters, the Saturn 1B used for the Earth orbiting missions and the mighty Saturn V, the Moon booster. For Saturn V diagram see Saturn Another major element of the spacecraft during the first 100 seconds of flight was the launch escape system in case the Saturn V booster malfunctioned. The mighty Saturn V booster was in three stages and its job was to lift the Apollo spacecraft into earth-orbit and then send it on its way to the Moon. The first two stages were used and discarded on the way up to earth-orbit. The third stage was only partially used to reach earth orbit and was then shut down. The Apollo spacecraft and third stage of the Saturn V would then complete an orbit or two while system checks were carried out. One final burn from the Saturn V third stage would then put them on course for the Moon. Once dispatched towards the Moon (Trans Lunar Injection) the Command Module and Service Module combination separated from the third stage of the Saturn V rocket, turned around and docked with the Lunar Module nestled inside the third stage and extracted it from the spent booster. The booster was then discarded. The combined Apollo craft journeyed to the Moon from the momentum given it by that final Saturn V burn. The crew were then able to transfer between the Command Module and Lunar Module via a transfer tunnel, once the docking probe had been removed. Had this not been the case, then the Apollo 13 crew would not have made it back. THE APOLLO 1 DISASTER The intense efforts being made to get the whole Apollo system in gear for the first piloted flights was illustrated by the development of the first Saturn V to its first launch. It took just five years. It has to be remembered that an awful price was paid. On Friday 27th January 1967, Apollo 1 was on launch Pad 34 at Cape Canaveral on top of the massive Saturn V booster, for what was to be a countdown demonstration test during which the rocket was unfuelled. This was to be followed on 14th February 1967 with a shakedown orbital flight. There were three crew onboard, Gus Grissom, Edward White and Roger Chaffee. The Command Module, as usual, was pressurised with pure oxygen. Bad workmanship had resulted in some electrical wiring losing its insulation and this caused a spark under Grissom's seat. Within seconds the arc had become an inferno in the oxygen atmosphere. All three men died within seconds. Those men were test pilots, but more, they were heroes of spaceflight. The Apollo 1 disaster revealed carelessness and bad workmanship in design and production. The programme was delayed while modifications were made. COUNTDOWN: 2 years 11 months remaining On 9th November 1967, the first Saturn V booster was launched from Pad 39A at the Kennedy Space Centre. With the Apollo 4 system on top, the monster rocket was 363 feet high and weighed 2,888 tons. 
The complete Saturn V rocket on the launch pad Versions of the Apollo modules had made a number of previous test flights in Earth orbit using smaller Saturn 1 and 1B boosters. The Lunar Module was due for testing in early 1968. The Saturn V moon booster is the most powerful rocket ever made. The 5 first stage engines produced 7,600,000 pounds of thrust. Within 40 seconds of liftoff the Saturn V goes supersonic. By the time the first stage fuel is spent the crew are experiencing a force of acceleration of over 4g. At this point stage 1 is jettisoned and stage 2 kicks in. After around 5 minutes the emergency escape rocket is discarded, taking with it the boost protective cover and allowing the crew their first outside view as the windows are uncovered. After almost 9 minutes stage 3 kicks in picking up the last bit of velocity. Just 11 minutes 30 seconds after liftoff stage 3 shuts down and they are in orbit travelling at 17,400 miles per hour at an altitude of 115 miles. After an orbit or two the third stage is lit up a final time to raise their speed up to 24,226 mph, the speed necessary to reach the moon on a free-return path. The burn takes around 5 minutes. The third stage is then jettisoned (goodbye Saturn V and thanks for the ride) and after this all maneuvering power comes from the Service Module, which apart from some small mid-course corrections, will not be used until braking is required to enter lunar orbit. When they are 38,900 miles from the moon they reach the top of the gravity hill and cross over into the moons gravity well. At this stage the craft has slowed down from its initial 24,226 mph to a 'slow' 2,223 mph, but from now on will accelerate again as it 'falls' towards the moon. By the time they need to start their burn to slow down they will have picked up speed to 5,000 mph and will require a burn of 4 minutes to bring the speed down to 3,700 mph, slow enough to go into lunar orbit. In order to break out of orbit and return home takes a 3 minute burn. When they reach the earth's atmosphere they will be travelling at 25,000 mph but braking will not require any burn, the atmosphere will slow them down, so rapidly in fact that the g-force will hit 6 g. COUNTDOWN: 2 years 1 month remaining In 1968, NASA planned one Earth orbit mission, to be followed by a combined Apollo Command and Service Module and Lunar Module test mission in Earth orbit, after launch on a Saturn V. This would be followed in 1969 by a deep Earth orbit test and a final Moon landing dress rehearsal in Lunar orbit. If all went well an American could be on the Moon by mid-1969. January 22 1968 1st test of Lunar Module in space. COUNTDOWN: 1 year 11 months remaining April 4 1968. Final uncrewed Apollo test flight. Full systems check. COUNTDOWN: 1 year 8 months remaining October 11-22, 1968. First manned earth-orbit test of the Apollo Command and Service Modules. (CSM) COUNTDOWN: 1 year 2 months remaining December 21-27, 1968. Lunar orbit mission. This was the first mission to place men into an orbit around the Moon. Completed 10 orbits of the moon on Christmas eve, 1968. This mission did not include the Lunar Module. COUNTDOWN: 12 months remaining March 3-13, 1969. This mission was an earth-orbit only test of the entire Apollo spacecraft - the CSM and LM. Included rendezvous maneuvers between the Command Module and Lunar Module. COUNTDOWN: 9 months remaining May 18-26, 1969. Lunar orbit mission. 
This was a full dress rehearsal of a Moon landing, with the Lunar Module making a descent from lunar orbit to within 9 miles of the lunar surface before firing its engine and returning to dock with the Command and Service Module. Every system, every procedure, to be used in the actual Moon landing was tested, apart from the actual landing itself, and worked flawlessly. They were now ready to make the attempt to land on the Moon. The Apollo Lunar Module after separation from the Command and Service Module COUNTDOWN: 7 months remaining July 16-24, 1969. Apollo 11 blasted off from the Kennedy Space Centre on 16 July 1969, watched by one million spectators from the nearby beaches and causeways, and 600 million people around the world, including me. Neil Armstrong stepped out onto the surface of the Moon on 20th July 1969. The Apollo CSM in lunar orbit. Photograph taken from the Lunar Module after separation COUNTDOWN: Goal achieved with 5 months to spare. The rest is history Buzz Aldrin climbing down onto the lunar surface Return to Did we land on the Moon?
http://www.thekeyboard.org.uk/The%20journey%20to%20the%20Moon.htm
13
12
Scientists studying data from NASA's Galileo spacecraft have found that Jupiter's moon Amalthea is a pile of icy rubble less dense than water. Scientists expected moons closer to the planet to be rocky and not icy. The finding shakes up long-held theories of how moons form around giant planets. Image: Artist's concept of Galileo at Jupiter's moon Amalthea. Image credit: NASA/JPL/Michael Carroll "I was expecting a body made up mostly of rock. An icy component in a body orbiting so close to Jupiter was a surprise," said Dr. John D. Anderson, an astronomer at NASA's Jet Propulsion Laboratory, Pasadena, Calif. Anderson is lead author of a paper on the findings that appears in the current issue of the journal Science. "This gives us important information on how Jupiter formed, and by implication, how the solar system formed," Anderson said. Current models imply that temperatures were high at Amalthea's current position when Jupiter's moons formed, but this is inconsistent with Amalthea being icy. The findings suggest that Amalthea formed in a colder environment. One possibility is that it formed later than the major moons. Another is that the moon formed farther from Jupiter, either beyond the orbit of Jupiter's moon Europa or in the solar nebula at or beyond Jupiter's position. It would have then been transported or captured in its current orbit around Jupiter. Either of these explanations challenges models of moon formation around giant planets. "Amalthea is throwing us a curve ball," said Dr. Torrence Johnson, co-author and project scientist for the Galileo mission at JPL. "Its density is well below that of water ice, and even with substantial porosity, Amalthea probably contains a lot of water ice, as well as rock." Analysis of density, volume, shape and internal gravitational stresses led the scientists to conclude that Amalthea is not only porous with internal empty spaces but also contains substantial water ice. One model for the formation of Jupiter's moons suggests that moons closer to the planet would be made of denser material than those farther out. That is based on a theory that early Jupiter, like a weaker version of the early Sun, would have emitted enough heat to prevent volatile, low-density material from condensing and being incorporated into the closer moons. Jupiter's four largest moons fit this model, with the innermost of them, Io, also the densest, made mainly of rock and iron. Amalthea is a small red-tinted moon that measures about 168 miles in length and half that in width. It orbits about 181,000 kilometers (112,468 miles) from Jupiter, considerably closer than the Moon orbits Earth. Galileo passed within about 99 miles of Amalthea on Nov. 5, 2002. Galileo's flyby of Amalthea brought the spacecraft closer to Jupiter than at any other time since it began orbiting the giant planet on Dec. 7, 1995. After more than 30 close encounters with Jupiter's four largest moons, the Amalthea flyby was the last moon flyby for Galileo. The Galileo spacecraft's 14-year odyssey came to an end on Sept. 21, 2003. JPL, a division of the California Institute of Technology in Pasadena, managed the Galileo mission for NASA. Link: Additional information about the mission is available online at: galileo.jpl.nasa.gov/
http://phys.org/news4349.html
13
11
Burns are injuries to tissue that result from heat, electricity, radiation, or chemicals. Burns are usually caused by heat (thermal burns), such as fire, steam, tar, or hot liquids. Burns caused by chemicals are similar to thermal burns, whereas burns caused by radiation (see Radiation Injury), sunlight (see Sunlight and Skin Damage: Overview of Sunlight and Skin Damage), and electricity (see Electrical and Lightning Injuries: Electrical Injuries) differ significantly. Events associated with a burn, such as jumping from a burning building, being struck by debris, or being in a motor vehicle crash, may cause other injuries. Thermal and chemical burns usually occur because heat or chemicals contact part of the body's surface, most often the skin. Thus, the skin usually sustains most of the damage. However, severe surface burns may penetrate to deeper body structures, such as fat, muscle, or bone. When tissues are burned, fluid leaks into them from the blood vessels, causing swelling. In addition, damaged skin and other body surfaces are easily infected because they can no longer act as a barrier against invading microorganisms. More than 2 million people in the United States require treatment for burns each year, and between 3,000 and 4,000 die of severe burns. Older people and young children are particularly vulnerable. In those age groups, abuse must be considered. Doctors classify burns according to strict, widely accepted definitions. The definitions classify the burn's depth and the extent of tissue damage. The depth of injury from a burn is described as first, second, or third degree: Burns are classified as minor, moderate, or severe. These classifications may not correspond to a person's understanding of those terms. For example, doctors may classify a burn as minor even though it can cause the person significant pain and interfere with normal activities. The severity determines how they are predicted to heal and whether complications are likely. Doctors determine the severity of the burn by its depth and by the percentage of the body surface that has second- or third-degree burns. Special charts are used to show what percentage of the body surface various body parts comprise. For example, in an adult, the arm constitutes about 9% of the body. Separate charts are used for children because their body proportions are different. Symptoms and Diagnosis Symptoms of a burn wound vary with the burn's depth: The appearance and symptoms of deep burns can worsen during the first hours or even days after the burn. Doctors frequently examine hospitalized people for complications and assess burn wound depth and extent. In people with large burns, blood pressure, heart rate, and urine volume are measured often to help assess the extent of dehydration or shock and the need for intravenous fluids. Doctors do blood tests to monitor the body's electrolytes and blood count. Electrocardiography (ECG) and chest x-ray are also required. Tests of blood and urine are done to detect proteins caused by the destruction of muscle tissue (rhabdomyolysis) that sometimes occurs with deep third-degree burns. Minor burns are usually superficial and do not cause complications. However, deep second-degree and third-degree burns swell and take more time to heal. In addition, deeper burns can cause scar tissue to form. This scar tissue shrinks (contracts) as it heals. If the scarring occurs in a limb or digit, the resulting contracture may restrict movement of nearby joints. 
Severe burns and some moderate burns can cause serious complications due to extensive fluid loss and tissue damage. These complications may take hours or days to develop. The deeper and more extensive the burn, the more severe are the problems it tends to cause. Young children and older adults tend to be more seriously affected by complications than other age groups. The following are some complications of some moderate and severe burns: Before burns are treated, the burning agent must be stopped from inflicting further damage. For example, fires are extinguished. Clothing—especially any that is smoldering (such as melted synthetic shirts), covered with a hot substance (for example, tar), or soaked with chemicals—is immediately removed. Hospitalization is sometimes necessary for optimal care of burns. For example, elevating a severely burned arm or leg above the level of the heart to prevent swelling is more easily accommodated in a hospital. In addition, burns that prevent people from carrying out essential daily functions, such as walking or eating, make hospitalization necessary. Severe burns, deep second- and third-degree burns, burns occurring in the very young or the very old, and burns involving the hands, feet, face, or genitals are usually best treated at burn centers. Burn centers are hospitals that are specially equipped and staffed to care for burn victims. Superficial Minor Burns: Superficial minor burns are immersed immediately in cool water if possible. The burn is carefully cleaned to prevent infection. If dirt is deeply embedded, doctors can give analgesics or numb the area by injecting a local anesthetic and then scrub the burn with a brush. Often, the only treatment required is application of an antibiotic cream, such as silver sulfadiazine. The cream prevents infection and forms a seal to prevent further bacteria from entering the wound. A sterile bandage is then applied to protect the burned area from dirt and further injury. A tetanus vaccination is given if needed (see Immunization: Tetanus). Care at home includes keeping the burn clean to prevent infection. In addition, many people are given analgesics, often opioids, for at least a few days. The burn can be covered with a nonstick bandage or with sterile gauze. The gauze can be removed without sticking by first being soaked in water. Deep Minor Burns: As with more superficial burns, deep minor burns are treated with antibiotic cream. Any dead skin and broken blisters should be removed by a health care practitioner before the antibiotic cream is applied. In addition, keeping a deeply burned arm or leg elevated above the heart for the first few days reduces swelling and pain. The burn may require admission to a hospital or frequent re-examination at a hospital or doctor's office, possibly as often as daily for the first few days. A skin graft may be needed. Some skin grafts replace burned skin that will not heal. Other skin grafts help by temporarily covering and protecting the skin as it heals on its own. In a skin grafting procedure, a piece of healthy skin is taken from an unburned area of the person's body (autograft), a dead person (allograft), or an animal (xenograft). After any dead tissue is removed and the wound is clean, a surgeon sews the skin graft over the burned area. Artificial skin can also be used. Autografts are permanent. Allografts and xenografts, however, are rejected after 10 to 14 days by the person's immune system and artificial skin is removed. 
These skin covers help by temporarily covering and protecting the skin as it begins to heal on its own. However, an autograft eventually must be placed. Burned skin can be replaced anytime within several days of the burn. Physical and occupational therapy usually are needed to prevent immobility caused by scarring around the joints and to help people function if joint motion is limited. Stretching exercises are started within the first few days after the burn. Splints are applied to ensure that joints that are likely to be immobile rest in positions that are least likely to lead to contractures. The splints are left in place except when the joints are moved. If a skin graft has been used, however, therapy is not started for 3 to 5 days after the grafts are attached so that the healing graft is not disturbed. Bulky dressings that put pressure on the burn can prevent large scars from developing. Severe, life-threatening burns require immediate care. People who have gone into shock as a result of dehydration are given oxygen through a face mask. Large amounts of intravenous fluids are given, beginning immediately, for people who have dehydration, shock, or burns that cover a large area of the body. Fluids are also given to people who develop destruction of muscle tissue. The fluids dilute the myoglobin in the blood, preventing extensive damage to the kidneys. Sometimes a chemical (sodium bicarbonate) is given intravenously to help dissolve myoglobin and thus also prevent further damage to the kidneys. A surgical procedure to cut open eschars that cut off blood supply to a limb or that impair breathing may be needed. This procedure is called escharotomy. Escharotomy usually causes some bleeding, but because the burn causing the eschar has destroyed the nerve endings in the skin, there is little pain. Skin care is extremely important. Keeping the burned area clean is essential, because the damaged skin is easily infected. Cleaning may be accomplished by gently running water over the burns periodically. Wounds are cleaned and bandages changed 1 to 3 times per day. Skin grafts are needed to cover burns that will not heal. A proper diet that includes adequate amounts of calories, protein, and nutrients is important for healing. People who cannot consume enough calories may drink nutritional supplements or receive them by way of a tube inserted through the nose into the stomach (a nasogastric tube), or less often nutrition may be given intravenously. Additional vitamins and minerals are usually given. Physical and occupational therapy are needed. Depression is treated. Because severe burns take a long time to heal and can cause disfigurement, people can become depressed. Depression often can be relieved with drugs, psychotherapy, or both. First- and some second-degree burns heal in days to weeks without scarring. Deep second-degree and small third-degree burns take weeks to heal and usually cause scarring. Most require skin grafting. Burns that involve more than 90% of the body surface, or more than 60% in an older person, are often fatal. Last full review/revision April 2009 by Steven E. Wolf, MD
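For readers curious how the body-surface percentages mentioned earlier are combined, here is a rough illustrative sketch based on the commonly taught adult "rule of nines" values; the numbers and function are an approximation for illustration only, not a clinical tool:

```python
# Approximate adult "rule of nines" percentages (commonly taught values).
RULE_OF_NINES = {
    "head_and_neck": 9.0,
    "one_arm": 9.0,
    "one_leg": 18.0,
    "front_torso": 18.0,
    "back_torso": 18.0,
    "genitals": 1.0,
}

def burned_surface_percent(regions):
    """Estimate percent of body surface burned.

    `regions` maps a region name to the fraction of that region burned (0 to 1).
    """
    return sum(RULE_OF_NINES[name] * fraction for name, fraction in regions.items())

# Example: one whole arm plus half of the front of the torso.
print(burned_surface_percent({"one_arm": 1.0, "front_torso": 0.5}))  # 18.0 percent
```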
http://www.merckmanuals.com/home/injuries_and_poisoning/burns/burns.html?tabid=tabNav3
13
13
A von Neumann architecture describes a computing machine that uses a single storage structure to hold both the set of instructions on how to perform the computation and the data required or generated by the computation. Such machines are also known as stored-program computers. The separation of storage from the processing unit is implicit in this model. The architecture is named after mathematician John von Neumann, who provided an early written account of a general purpose stored-program computing machine. The term von Neumann architecture, however, is seen as doing injustice to von Neumann's collaborators, notably John William Mauchly and J. Presper Eckert, who conceived of the stored-program concept with their work on ENIAC. The term is now avoided in many circles. By treating the instructions in the same way as the data, a stored-program machine can easily change the instructions. In other words the machine is reprogrammable. One important motivation for such a facility was the need for a program to increment or otherwise modify the address portion of instructions. This became less important when index registers and indirect addressing became customary features of machine architecture. Current machine architecture makes small-scale self-modifying code unnecessary, while processor pipelining and caching schemes make it inefficient. The practice is now generally deprecated. Of course, on a large scale, the ability to treat instructions as data is what makes compilers possible. It is also a feature that can be exploited by computer viruses when they add copies of themselves to existing program code. The problem of unauthorized code replication can be addressed by the use of memory protection support, and in particular virtual memory architectures. The separation between the CPU and memory leads to what is known as the von Neumann bottleneck. The bandwidth, or the data transfer rate, between the CPU and memory is very small in comparison with the amount of memory. In modern machines it is also very small in comparison with the rate at which the CPU itself can work. Under some circumstances (when the CPU is required to perform minimal processing on large amounts of data), this gives rise to a bottleneck in overall processing speed, because the CPU is forced to wait for vital data to be transferred to or from memory.
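To make the stored-program idea concrete, here is a tiny hypothetical machine sketched in Python; the instruction format is invented for illustration and is not any real processor's instruction set. Code and data sit in the same memory list, which is exactly what allows a program to read (or modify) its own instructions:

```python
# Toy stored-program machine: instructions and data live in one memory list.
# Invented instruction format: (opcode, operand1, operand2, destination)

def run(memory, pc=0):
    """Execute instructions stored in `memory`, starting at address `pc`."""
    while True:
        op, a, b, dest = memory[pc]
        if op == "HALT":
            return memory
        if op == "ADD":            # memory[dest] = memory[a] + memory[b]
            memory[dest] = memory[a] + memory[b]
        elif op == "COPY":         # memory[dest] = memory[a]
            memory[dest] = memory[a]
        pc += 1

program = [
    ("ADD", 4, 5, 6),    # address 0: add the two data cells, store the result
    ("HALT", 0, 0, 0),   # address 1: stop
    None, None,          # addresses 2-3: unused
    2, 3, 0,             # addresses 4-6: data, in the same memory as the code
]
print(run(program)[6])   # -> 5
```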
http://july.fixedreference.org/en/20040724/wikipedia/Von_Neumann_architecture
13
24
If you are accessing this website right now, then you are almost definitely using a modem! "Modem," a combination of the words modulator and demodulator, is a special device used to send data over a phone line to other computers! Most modems are connected to an ISP, or Internet Service Provider, which then connects us to the Internet and to the information superhighway! Because the Internet plays such a major role in our everyday lives, modems are essential devices used by millions of people every day! So, you might be wondering how modems can connect to and access information from computers throughout the world. In order to transfer information between computers via a phone line, the digital information of computers had to be translated into analog information, and then changed back to digital form when reaching its destination. This was done by modems. At one end, modems would modulate the data. This means that they would convert the data from digital form to a series of analog signals. Once the information successfully passed through the phone lines, another modem would demodulate the data, converting it from analog form back to digital form. Modulation, the conversion of digital data to a series of analog signals, can be accomplished in many ways. The older modems used a technique called Frequency Shift Keying (FSK). In FSK, digital information was transmitted by having different frequencies represent different bits. For example, a 0 in the binary system would be converted to a 1,070-hertz tone, while a 1 would become the analog signal of a 1,270-hertz tone. Now, there are newer techniques that can be used. These are all much faster than the old FSK technique. One of these techniques is called Phase Shift Keying, or PSK. If you have studied trigonometry in school, imagine a sine graph and a cosine graph. As you know, these two graphs go up and down in curves. If analog and digital signals were visible, this is what they would look like. In PSK, two waveforms (they look like the curved sine and cosine graphs) are compared. When one graph is going upwards, the other is going downwards, and vice versa. Different phases of the waveforms represent different binary digits. When a certain phase of the waveform is shifted over a certain number of degrees (e.g. 90 degrees), it represents a 0, while shifting it over a different number of degrees (e.g. 270 degrees) would represent a 1. Finally, an even newer technique of modulation is called Quadrature Amplitude Modulation (QAM). This uses the phase shift idea of PSK, but it also incorporates an idea called Amplitude Modulation, or AM. Remember the sine graph I mentioned above? Well, the waveforms in the AM technique resemble a sine graph. Where there is the analog signal of a large-amplitude sine wave, a 1 is represented. Where there is no sine wave, a 0 is represented. Serial ports are an important part of modems. These serve as connecting devices between computers and their modems. A device called the Universal Asynchronous Receiver/Transmitter (UART) converts bytes into single bits. These chips control the serial port and the computer's bus system. The UART then ships the bits one at a time through the serial port to the modem. This is necessary because PC bus systems can transfer information in blocks of 32 bits, while serial cables can only transfer information one bit at a time. Modems can come in all sorts of different speeds. The speeds of the modem are usually measured in bits per second (BPS).
This is how many bits they can transfer every second. The first modems could only transfer data at 300 BPS. Then, newer modems were developed. Through the mid-80s to early 1991, 1200 BPS, 2400 BPS, and 9600 BPS modems were primarily used. Throughout the early and mid 1990s, 19.2 K bits per second, 28.8 K bits per second, and 33.6 K bits per second modems were in use. In 1998, the 56.6 K bits per second modem was developed. Now, there is a fairly new type of Internet connection called ISDN. It is much faster than the older modems and is better suited for accessing the Information Superhighway and the World Wide Web. ISDN stands for Integrated Services Digital Network. It can deliver two simultaneous connections and can transfer data completely digitally! Unlike the regular modems, ISDN does not have to convert data into analog signals and then back to digital form. This makes it much faster, and also it gets rid of all of the little noises you often hear when 56 K modems are working. ISDN can connect to computers throughout the world! Internet access is greatly enhanced through an ISDN connection as compared to regular modems. There are two major types of ISDN connections. The most common is Basic Rate Interface, or BRI. This is found in many homes and offices. Then, there is the Primary Rate Interface. This is primarily used to make available many applications for remote users. Now, another fairly new type of modem, developed in 1999, is becoming popular. It can transfer data at millions of bits per second! ADSL is an acronym for Asymmetric Digital Subscriber Line. Most homes and offices have a special copper wire that is specifically used to connect to their phone company's nearest central office. If both the house and the phone company's office have ADSL modems, then information can be transferred super fast via the copper wire. It can transfer 1 million BPS from your house to the phone company's office, and 8 million BPS from the phone company to your house! The future will hold even faster and better modems and Internet connections! The Internet will be brought to a whole new level, and all because of modems! © 1995-2001 ThinkQuest Inc. All rights reserved
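As a rough illustration of the FSK scheme described earlier (a 0 becomes a 1,070 Hz tone and a 1 becomes a 1,270 Hz tone), here is a short Python sketch that turns a list of bits into tone samples. The sample rate and the 300 BPS bit duration are example values chosen for the sketch, and phase continuity between bits is ignored:

```python
import math

FREQ_FOR_BIT = {0: 1070.0, 1: 1270.0}   # the FSK tones quoted in the text

def fsk_modulate(bits, sample_rate=8000, bit_duration=1 / 300):
    """Return a list of audio samples encoding `bits`, one tone per bit."""
    samples = []
    samples_per_bit = int(sample_rate * bit_duration)
    for bit in bits:
        freq = FREQ_FOR_BIT[bit]
        for n in range(samples_per_bit):
            t = n / sample_rate
            samples.append(math.sin(2 * math.pi * freq * t))  # pure tone for this bit
    return samples

signal = fsk_modulate([0, 1, 1, 0])
print(len(signal), "samples for 4 bits at 300 bits per second")  # 104 samples
```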
http://library.thinkquest.org/C0115420/Cyber-club%20800x600/Computer%20Parts/Hardware/Modems.htm
13
17
All Grades: Standards for Mathematical Practice MP1: Make sense of problems and persevere in solving them Mathematically proficient students start by explaining to themselves the meaning of a problem and looking for entry points to its solution. They analyze givens, constraints, relationships, and goals. They make conjectures about the form and meaning of the solution and plan a solution pathway rather than simply jumping into a solution attempt. They consider analogous problems, and try special cases and simpler forms of the original problem in order to gain insight into its solution. They monitor and evaluate their progress and change course if necessary. Older students might, depending on the context of the problem, transform algebraic expressions or change the viewing window on their graphing calculator to get the information they need. Mathematically proficient students can explain correspondences between equations, verbal descriptions, tables, and graphs or draw diagrams of important features and relationships, graph data, and search for regularity or trends. Younger students might rely on using concrete objects or pictures to help conceptualize and solve a problem. Mathematically proficient students check their answers to problems using a different method, and they continually ask themselves, “Does this make sense?” They can understand the approaches of others to solving complex problems and identify correspondences between different approaches. MP2: Reason abstractly and quantitatively Mathematically proficient students make sense of the quantities and their relationships in problem situations. Students bring two complementary abilities to bear on problems involving quantitative relationships: the ability to decontextualize—to abstract a given situation and represent it symbolically and manipulate the representing symbols as if they have a life of their own, without necessarily attending to their referents—and the ability to contextualize, to pause as needed during the manipulation process in order to probe into the referents for the symbols involved. Quantitative reasoning entails habits of creating a coherent representation of the problem at hand; considering the units involved; attending to the meaning of quantities, not just how to compute them; and knowing and flexibly using different properties of operations and objects. MP3: Construct viable arguments and critique the reasoning of others Mathematically proficient students understand and use stated assumptions, definitions, and previously established results in constructing arguments. They make conjectures and build a logical progression of statements to explore the truth of their conjectures. They are able to analyze situations by breaking them into cases, and can recognize and use counterexamples. They justify their conclusions, communicate them to others, and respond to the arguments of others. They reason inductively about data, making plausible arguments that take into account the context from which the data arose. Mathematically proficient students are also able to compare the effectiveness of two plausible arguments, distinguish correct logic or reasoning from that which is flawed, and—if there is a flaw in an argument—explain what it is. Elementary students can construct arguments using concrete referents such as objects, drawings, diagrams, and actions. Such arguments can make sense and be correct, even though they are not generalized or made formal until later grades. 
MP4: Model with mathematics Mathematically proficient students can apply the mathematics they know to solve problems arising in everyday life, society, and the workplace. In early grades, this might be as simple as writing an addition equation to describe a situation. In middle grades, a student might apply proportional reasoning to plan a school event or analyze a problem in the community. By high school, a student might use geometry to solve a design problem or use a function to describe how one quantity of interest depends on another. Mathematically proficient students who can apply what they know are comfortable making assumptions and approximations to simplify a complicated situation, realizing that these may need revision later. They are able to identify important quantities in a practical situation and map their relationships using such tools as diagrams, 2-by-2 tables, graphs, flowcharts and formulas. They can analyze those relationships mathematically to draw conclusions. MP5: Use appropriate tools strategically Mathematically proficient students consider the available tools when solving a mathematical problem. These tools might include pencil and paper, concrete models, ruler, protractor, calculator, spreadsheet, computer algebra system, statistical package, or dynamic geometry software. Proficient students are sufficiently familiar with tools appropriate for their grade or course to make sound decisions about when each of these tools might be helpful, recognizing both the insight to be gained and their limitations. For example, mathematically proficient high school students interpret graphs of functions and solutions generated using a graphing calculator. They detect possible errors by strategically using estimation and other mathematical knowledge. When making mathematical models, they know that technology can enable them to visualize the results of varying assumptions, explore consequences, and compare predictions with data. Mathematically proficient students at various grade levels are able to identify relevant external mathematical resources, such as digital content located on a website, and use them to pose or solve problems. They are able to use technological tools to explore and deepen their understanding of concepts. MP6: Attend to precision Mathematically proficient students try to communicate precisely to others. They try to use clear definitions in discussion with others and in their own reasoning. They state the meaning of the symbols they choose, are careful about specifying units of measure, and labeling axes to clarify the correspondence with quantities in a problem. They express numerical answers with a degree of precision appropriate for the problem context. In the elementary grades, students give carefully formulated explanations to each other. MP7: Look for and make use of structure Mathematically proficient students look closely to discern a pattern or structure. Young students, for example, might notice that three and seven more is the same amount as seven and three more, or they may sort a collection of shapes according to how many sides the shapes have. Later, students will see 7 × 8 equals the well remembered 7 × 5 + 7 × 3, in preparation for learning about the distributive property. In the expression x2 + 9x + 14, older students can see the 14 as 2 × 7 and the 9 as 2 + 7. They recognize the significance of an existing line in a geometric figure and can use the strategy of drawing an auxiliary line for solving problems. 
MP8: Look for and express regularity in repeated reasoning Mathematically proficient students notice if calculations are repeated, and look both for general methods and for shortcuts. Upper elementary students might notice when dividing 25 by 11 that they are repeating the same calculations over and over again, and conclude they have a repeating decimal. By paying attention to the calculation of slope as they repeatedly check whether points are on the line through (1, 2) with slope 3, middle school students might abstract the equation (y – 2)/(x – 1) = 3. Noticing the regularity in the way terms cancel when expanding (x – 1)(x + 1), (x – 1)(x2 + x + 1), and (x – 1)(x3 + x2 + x + 1) might lead them to the general formula for the sum of a geometric series.
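The factoring pattern at the end of MP8 is easy to check with a computer algebra system; a small sketch using the sympy library (assuming it is available) is shown below:

```python
from sympy import symbols, expand

x = symbols("x")

# Expanding (x - 1)(x^k + ... + x + 1) telescopes to x^(k+1) - 1,
# the regularity behind the formula for the sum of a geometric series.
for k in (1, 2, 3):
    partial_sum = sum(x**i for i in range(k + 1))
    print(expand((x - 1) * partial_sum))   # x**2 - 1, then x**3 - 1, then x**4 - 1
```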
http://map.mathshell.org/materials/stds.php?id=1382
13
13
The intersection of two lines, each given by a pair of points (i.e. the respective equations are not used), is obtained by elementary vector considerations. Objective: to find the intersection of the line through points A and B and the line through points C and D. The projection (blue line) of the red segment AC onto the line AB has magnitude |AC| cos α, where α is the angle between the red segment and AB. In vector terms, the tip of the blue vector is at a distance |AC| cos α from A, so that its position vector is A ± |AC| cos α u, where u is a unit vector along AB (in purple). Use the positive sign whenever α is acute; otherwise use the negative sign. Let θ be the angle between the lines AB and CD. The point of intersection of the two lines (in orange) lies at the distance |AC| sin β / sin θ from A (with sin θ ≠ 0), where β is the angle between the red segment and the line CD, so that its position vector is A ± (|AC| sin β / sin θ) u, where u is again the unit vector along AB (in black). Choose the sign to give the position closest to the tip of the projection found above. The method relies on Mathematica's capabilities to handle vectors and the angles between them. In summary: if θ is the angle between the two lines, and β is the angle between the red segment and the line CD (see step 2 in the figure), then it can readily be seen that the position vector of the point of intersection is A ± (|AC| sin β / sin θ) u (with sin θ ≠ 0, implying that the two lines are not parallel), where A and C are the position vectors of the points A and C, |AC| is the length of the red segment, and u is a unit vector in the direction of the line AB, from A to B.
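A numeric sketch of the same calculation in Python; A, B, C and D are generic labels of my own rather than the demonstration's notation. Using signed cross products avoids the explicit sign choice: the parameter t below is the distance |AC| sin β / sin θ expressed as a fraction of the length of AB, with its sign determined automatically:

```python
def intersect(A, B, C, D):
    """Intersection of the line through A and B with the line through C and D.

    Points are (x, y) tuples. Returns None if the lines are parallel (sin(theta) == 0).
    """
    ux, uy = B[0] - A[0], B[1] - A[1]        # direction of line AB
    vx, vy = D[0] - C[0], D[1] - C[1]        # direction of line CD
    wx, wy = C[0] - A[0], C[1] - A[1]        # the "red segment" from A to C

    cross_uv = ux * vy - uy * vx             # proportional to sin(theta), signed
    if cross_uv == 0:
        return None                          # parallel lines: no unique intersection
    cross_wv = wx * vy - wy * vx             # proportional to |AC| sin(beta), signed

    t = cross_wv / cross_uv                  # signed position along AB, as a fraction of |AB|
    return (A[0] + t * ux, A[1] + t * uy)

print(intersect((0, 0), (4, 4), (0, 4), (4, 0)))   # (2.0, 2.0)
```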
http://demonstrations.wolfram.com/IntersectionOfTwoLinesUsingVectors/
13
12
Logical arguments are usually classified as either 'deductive' or 'inductive'. Deduction: In the process of deduction, you begin with some statements, called 'premises', that are assumed to be true, you then determine what else would have to be true if the premises are true. For example, you can begin by assuming that God exists, and is good, and then determine what would logically follow from such an assumption. You can begin by assuming that if you think, then you must exist, and work from there. In mathematics you can begin with some axioms and then determine what you can prove to be true given those axioms. With deduction you can provide absolute proof of your conclusions, given that your premises are correct. The premises themselves, however, remain unproven and unprovable, they must be accepted on face value, or by faith, or for the purpose of exploration. Induction: In the process of induction, you begin with some data, and then determine what general conclusion(s) can logically be derived from those data. In other words, you determine what theory or theories could explain the data. For example, you note that the probability of becoming schizophrenic is greatly increased if at least one parent is schizophrenic, and from that you conclude that schizophrenia may be inherited. That is certainly a reasonable hypothesis given the data. Note, however, that induction does not prove that the theory is correct. There are often alternative theories that are also supported by the data. For example, the behavior of the schizophrenic parent may cause the child to be schizophrenic, not the genes. What is important in induction is that the theory does indeed offer a logical explanation of the data. To conclude that the parents have no effect on the schizophrenia of the children is not supportable given the data, and would not be a logical conclusion. Deduction and induction by themselves are inadequate for a scientific approach. While deduction gives absolute proof, it never makes contact with the real world, there is no place for observation or experimentation, no way to test the validity of the premises. And, while induction is driven by observation, it never approaches actual proof of a theory. The development of the scientific method involved a gradual synthesis of these two logical approaches. For a more comprehensive discussion of deduction, and induction, read the relevant sections of the book by Copi, referenced on this page.
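A toy computational illustration of the deductive idea that the conclusion is guaranteed once the premises are granted: the sketch below (my own encoding, not from the source) brute-force checks a propositional argument over every truth assignment.

```python
from itertools import product

def valid(premises, conclusion, variables=("p", "q")):
    """An argument is deductively valid if every assignment that makes all
    premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(prem(env) for prem in premises) and not conclusion(env):
            return False
    return True

# Modus ponens: from "if p then q" and "p", conclude "q".
premises = [lambda e: (not e["p"]) or e["q"],   # p -> q
            lambda e: e["p"]]                    # p
print(valid(premises, lambda e: e["q"]))         # True: the conclusion must follow
```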
http://www.psych.utah.edu/gordon/Classes/Psy4905Docs/PsychHistory/Cards/Logic.html
13
14
Angular Constructions Help The following paragraphs describe how to reproduce (copy) an angle, and also how to bisect an angle. Reproducing An Angle Figure 5-13 illustrates the process for reproducing an angle. First, suppose two rays intersect at a point P, as shown in drawing A. Set down the non-marking tip of the compass on point P, and construct an arc from one ray to the other. Let R and Q be the two points where the arc intersects the rays (drawing B). Call the angle in question ∠RPQ, where points R and Q are equidistant from point P. Now, place a new point S somewhere on the page a good distance away from point P, and construct a ray emanating outward from point S, as shown in illustration C. (This ray can be in any direction, but it's easiest if you make it go in approximately the same direction as ray PQ.) Make the new ray at least as long as ray PQ. Without changing the compass span from its previous setting, place its non-marking tip down on point S and construct a sweeping arc that is larger than arc QR. (You can do this by estimation, as shown in drawing D. You can make a full circle if you want.) Let point T represent the intersection of the new arc and the new ray. Now return to the original arc, place the non-marking tip of the compass down on point Q, and construct a small arc through point R so the compass spans the distance QR, as shown in drawing E. Then, without changing the span of the compass, place its non-marking tip on point T, and construct an arc that intersects the arc centered on point S. Call this intersection point U. Finally, construct ray SU, as shown in drawing F. You now have a new angle with the same measure as the original angle. That is, ∠UST ≅ ∠RPQ. Bisecting An Angle Figure 5-14 illustrates one method that can be used to bisect an angle, that is, to divide it in half. First, suppose two rays intersect at a point P, as shown in drawing A. Set down the non-marking tip of the compass on point P, and construct an arc from one ray to the other. Call the two points where the arc intersects the rays point R and point Q (drawing B). We can now call the angle in question ∠RPQ, where points R and Q are equidistant from point P. Now, place the non-marking tip of the compass on point Q, increase its span somewhat from the setting used to generate arc QR, and construct a new arc. Next, without changing the span of the compass, set its non-marking tip down on point R and construct an arc that intersects the arc centered on point Q. (If the arc centered on point Q isn't long enough, go back and make it longer. You can make it a full circle if you want.) Let S be the point at which the two arcs intersect (drawing C). Finally, construct ray PS, as shown at D. This ray bisects ∠RPQ. This means that ∠RPS ≅ ∠SPQ, and also that the sum of the measures of ∠RPS and ∠SPQ is equal to the measure of ∠RPQ.
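A numeric counterpart to the bisection construction, assuming coordinates for the rays are known: adding the unit vectors along the two rays gives the direction of the bisecting ray PS, which mirrors the compass construction (the point S ends up equidistant from R and Q). This is an illustrative sketch, not part of the original lesson:

```python
import math

def bisector_direction(P, R, Q):
    """Unit vector along the bisector of angle RPQ (rays P->R and P->Q).

    Assumes the two rays are not opposite, so the sum of unit vectors is nonzero.
    """
    def unit(vx, vy):
        length = math.hypot(vx, vy)
        return (vx / length, vy / length)

    ux, uy = unit(R[0] - P[0], R[1] - P[1])
    vx, vy = unit(Q[0] - P[0], Q[1] - P[1])
    return unit(ux + vx, uy + vy)

# Example: rays from the origin along the x-axis and the y-axis.
print(bisector_direction((0, 0), (5, 0), (0, 3)))   # roughly (0.707, 0.707): the 45-degree line
```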
http://www.education.com/study-help/article/geometry-help-angular-constructions/
13
36
Lunar water is water that is present on the Moon. Liquid water cannot persist at the Moon's surface, and water vapour is quickly decomposed by sunlight and lost to outer space. However, scientists have since the 1960s conjectured that water ice could survive in cold, permanently shadowed craters at the Moon's poles. Water, and the chemically related hydroxyl group ( · OH), can also exist in forms chemically bound to lunar minerals (rather than as free water), and evidence strongly suggests that this is indeed the case in low concentrations over much of the Moon's surface. In fact, adsorbed water is calculated to exist at trace concentrations of 10 to 1000 parts per million. Inconclusive evidence of free water ice at the lunar poles was accumulated from a variety of observations suggesting the presence of bound hydrogen. In September 2009, India's Chandrayaan-1 detected water on the Moon and hydroxyl absorption lines in reflected sunlight. In November 2009, NASA reported that its LCROSS space probe had detected a significant amount of hydroxyl group in the material thrown up from a south polar crater by an impactor; this may be attributed to water-bearing materials – what appears to be "near pure crystalline water-ice". In March 2010, NASA reported that the Mini-SAR radar aboard Chandrayaan-1 detected what appear to be more than 40 small craters hypothesized to contain up to 1.3 trillion pounds (600 million metric tons) of water ice. Water may have been delivered to the Moon over geological timescales by the regular bombardment of water-bearing comets, asteroids and meteoroids or continuously produced in situ by the hydrogen ions (protons) of the solar wind impacting oxygen-bearing minerals. The search for the presence of lunar water has attracted considerable attention and motivated several recent lunar missions, largely because of water's usefulness in rendering long-term lunar habitation feasible. The possibility of ice in the floors of polar lunar craters was first suggested in 1961 by Caltech researchers Kenneth Watson, Bruce C. Murray, and Harrison Brown. Although trace amounts of water were found in lunar rock samples collected by Apollo astronauts, this was assumed to be a result of contamination, and the majority of the lunar surface was generally assumed to be completely dry. However, a 2008 study of lunar rock samples revealed evidence of water molecules trapped in volcanic glass beads. The first direct evidence of water vapor near the Moon was obtained by the Apollo 14 ALSEP Suprathermal Ion Detector Experiment, SIDE, on March 7, 1971. A series of bursts of water vapor ions were observed by the instrument mass spectrometer at the lunar surface near the Apollo 14 landing site. The first proposed evidence of water ice on the Moon came in 1994 from the United States military Clementine probe. In an investigation known as the 'bistatic radar experiment', Clementine used its transmitter to beam radio waves into the dark regions of the south pole of the Moon. Echoes of these waves were detected by the large dish antennas of the Deep Space Network on Earth. The magnitude and polarisation of these echoes was consistent with an icy rather than rocky surface, but the results were inconclusive, and their significance has been questioned. Resulting computer simulations suggested that an area up to 14,000 km² might be in permanent shadow and hence have the potential to harbour lunar ice. 
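The figures quoted above for the polar craters (1.3 trillion pounds, or 600 million metric tons, of water ice) can be cross-checked with a few lines of arithmetic. The short Python sketch below is only an illustrative check; the density of solid ice it assumes (920 kg per cubic metre) is a standard value and is not taken from the article.

```python
# Cross-checking the Mini-SAR figures quoted above.  The ice density used
# (920 kg per cubic metre) is a standard value assumed here for illustration;
# it is not a number from the article.

LB_PER_KG = 2.20462
ICE_DENSITY = 920.0          # kg per cubic metre

tonnes = 600e6               # reported mass of water ice, metric tonnes
kg = tonnes * 1000.0

pounds = kg * LB_PER_KG
volume_km3 = kg / ICE_DENSITY / 1e9   # cubic metres -> cubic kilometres

print(f"{tonnes:.0e} t = {pounds:.2e} lb (about 1.3 trillion pounds)")
print(f"equivalent volume of solid ice: about {volume_km3:.2f} cubic km")
```

The conversion reproduces the "1.3 trillion pounds" figure and shows that the reported mass corresponds to well under a cubic kilometre of solid ice.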
The Lunar Prospector probe, launched in 1998, employed a neutron spectrometer to measure the amount of hydrogen in the lunar regolith near the polar regions. It was able to determine hydrogen abundance and location to within 50 parts per million and detected enhanced hydrogen concentrations at the lunar north and south poles. These were interpreted as indicating significant amounts of water ice trapped in permanently shadowed craters, but could also be due to the presence of the hydroxyl radical (•OH) chemically bound to minerals. Based on data from Clementine and Lunar Prospector, NASA scientists have estimated that if surface water ice is present, the total quantity could be of the order of 1 to 3 cubic kilometers. Further suspicions about the existence of water on the Moon were generated by inconclusive data produced by the Cassini–Huygens mission, which passed the Moon in 1999. In July 1999, at the end of its mission, the Lunar Prospector probe was deliberately crashed into Shoemaker crater, near the Moon's south pole, in the hope that detectable quantities of water would be liberated. However, spectroscopic observations from ground-based telescopes did not reveal the spectral signature of water. In 2005, observations of the Moon by the Deep Impact spacecraft produced inconclusive spectroscopic data suggestive of water on the Moon. In 2006, observations with the Arecibo planetary radar showed that some of the near-polar Clementine radar returns, previously claimed to be indicative of ice, might instead be associated with rocks ejected from young craters. If true, this would indicate that the neutron results from Lunar Prospector were primarily from hydrogen in forms other than ice, such as trapped hydrogen molecules or organics. Nevertheless, the interpretation of the Arecibo data does not exclude the possibility of water ice in permanently shadowed craters. In June 2009, NASA's Deep Impact spacecraft, now redesignated EPOXI, made further confirmatory bound hydrogen measurements during another lunar flyby. As part of its lunar mapping programme, Japan's Kaguya probe, launched in September 2007 for a 19-month mission, carried out gamma ray spectrometry observations from orbit that can measure the abundances of various elements on the Moon's surface. Kaguya's high-resolution imaging sensors failed to detect any signs of water ice in permanently shaded craters around the south pole of the Moon, and it ended its mission by crashing into the lunar surface in order to study the ejecta plume content. On November 14, 2008, the Indian spacecraft Chandrayaan-1 released the Moon Impact Probe (MIP), which impacted Shackleton Crater, near the lunar south pole, at 20:31, releasing subsurface debris that was analyzed for the presence of water ice. On September 25, 2009, NASA declared that data sent from its Moon Mineralogy Mapper (M3) instrument aboard the Chandrayaan-1 orbiter confirmed the existence of hydrogen over large areas of the Moon's surface, albeit in low concentrations and in the form of the hydroxyl group (·OH) chemically bound to soil. This supports earlier evidence from spectrometers aboard the Deep Impact and Cassini probes. In March 2010, it was reported that the Mini-SAR on board Chandrayaan-1 had discovered more than 40 permanently darkened craters near the Moon's north pole which are hypothesized to contain an estimated 600 million metric tonnes of water ice. 
The radar's high CPR is not uniquely diagnostic of either roughness or ice; the science team must take into account the environment of the occurrences of high CPR signal to interpret its cause. The ice must be relatively pure and at least a couple of meters thick to give this signature. The estimated amount of water ice potentially present is comparable to the quantity estimated from the earlier Lunar Prospector mission's neutron data. Although the results are consistent with recent findings of other NASA instruments onboard Chandrayaan-1 (the Moon Mineralogy Mapper (M3) discovered water molecules in the Moon's polar regions, while water vapor was detected by NASA's Lunar Crater Observation and Sensing Satellite, or LCROSS), this observation is not consistent with the presence of thick deposits of nearly pure water ice within a few meters of the lunar surface, but it does not rule out the presence of small (<∼10 cm), discrete pieces of ice mixed in with the regolith. The search for lunar ice continued with NASA's Lunar Reconnaissance Orbiter (LRO) / LCROSS mission, launched June 18, 2009. LRO's onboard instruments carried out a variety of observations that may provide further evidence of water. On October 9, 2009, the Centaur upper stage of its Atlas V carrier rocket was directed to impact Cabeus crater at 11:31 UTC, followed shortly by the LCROSS spacecraft that flew into the ejecta plume and attempted to detect the presence of water vapor in the debris cloud. Although no immediate spectacular plume was seen, time was needed to analyze the spectrometry data. On November 13, 2009, NASA reported that after analysis of the data obtained from the ejecta plume, the spectral signature of water had been confirmed. However, what was actually detected was the chemical group hydroxyl (·OH), which is suspected to be from water, but could also be hydrates, which are inorganic salts containing chemically-bound water molecules. The nature, concentration and distribution of this material require further analysis; chief mission scientist Anthony Colaprete has stated that the ejecta appears to include a range of fine-grained particulates of near pure crystalline water-ice. A later definitive analysis found the concentration of water to be "5.6 ± 2.9% by mass". The Mini-RF instrument on LRO observed the LCROSS landing site and did not detect any evidence of large slabs of water ice, so the water is most likely present as small pieces of ice mixed in with the lunar regolith. LRO's laser altimeter's examination of Shackleton crater at the lunar south pole suggests up to 22% of the surface of that crater is covered in ice. In May 2011, Erik Hauri et al. reported 615-1410 ppm water in melt inclusions in lunar sample 74220, the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth's upper mantle. While of considerable selenological interest, this announcement affords little comfort to would-be lunar colonists. The sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to detect them with a state-of-the-art ion microprobe instrument. Lunar water has two potential origins: water-bearing comets (and other bodies) striking the Moon, and in situ production. 
It has been theorized that the latter may occur when hydrogen ions (protons) in the solar wind chemically combine with the oxygen atoms present in the lunar minerals (oxides, silicates, etc.) to produce small amounts of water trapped in the minerals' crystal lattices or as hydroxyl groups, potential water precursors. (This mineral-bound water, or hydroxylated mineral surface, must not be confused with water ice.) The hydroxyl surface groups (S–OH) formed by the reaction of protons (H+) with oxygen atoms accessible at the oxide surface (S=O) could further be converted into water molecules (H2O) adsorbed onto the oxide mineral's surface. The mass balance of a chemical rearrangement supposed at the oxide surface could be schematically written as 2 (S–OH) → S–O–S + H2O, where S represents the oxide surface. The formation of one water molecule requires the presence of two adjacent hydroxyl groups, or a cascade of successive reactions of one oxygen atom with two protons. This could constitute a limiting factor and decrease the probability of water production if the proton density per unit of surface area is too low. Solar radiation would normally strip any free water or water ice from the lunar surface, splitting it into its constituent elements, hydrogen and oxygen, which then escape to space. However, because of the only very slight axial tilt of the Moon's spin axis to the ecliptic plane (1.5°), some deep craters near the poles never receive any sunlight, and are permanently shadowed (see, for example, Shackleton crater, and Whipple crater). The temperature in these regions never rises above about 100 K (about −170 °C), and any water that eventually ended up in these craters could remain frozen and stable for extremely long periods of time — perhaps billions of years, depending on the stability of the orientation of the Moon's axis. Although free water cannot persist in illuminated regions of the Moon, any such water produced there by the action of the solar wind on lunar minerals might, through a process of evaporation and condensation, migrate to permanently cold polar areas and accumulate there as ice, perhaps in addition to any ice brought by comet impacts. The hypothetical mechanism of water transport/trapping (if any) remains unknown: indeed, lunar surfaces directly exposed to the solar wind where water production occurs are too hot to allow trapping by water condensation (and solar radiation also continuously decomposes water), while no (or much less) water production is expected in the cold areas not directly exposed to the sun. Given the expected short lifetime of water molecules in illuminated regions, a short transport distance would in principle increase the probability of trapping. In other words, water molecules produced close to a cold, dark polar crater should have the highest probability of surviving and being trapped. To what extent, and at what spatial scale, direct proton exchange (protolysis) and proton surface diffusion directly occurring at the naked surface of oxyhydroxide minerals exposed to space vacuum (see surface diffusion and self-ionization of water) could also play a role in the mechanism of the water transfer towards the coldest point is presently unknown and remains a conjecture. The presence of large quantities of water on the Moon would be an important factor in rendering lunar habitation cost-effective, since transporting water (or hydrogen and oxygen) from Earth would be prohibitively expensive. 
If future investigations find the quantities to be particularly large, water ice could be mined to provide liquid water for drinking and plant propagation, and the water could also be split into hydrogen and oxygen by solar panel-equipped electric power stations or a nuclear generator, providing breathable oxygen as well as the components of rocket fuel. The hydrogen component of the water ice could also be used to draw out the oxides in the lunar soil and harvest even more oxygen. Analysis of lunar ice would also provide scientific information about the impact history of the Moon and the abundance of comets and asteroids in the early inner solar system. The hypothetical discovery of usable quantities of water on the Moon may raise legal questions about who owns the water and who has the right to exploit it. The United Nations Outer Space Treaty does not prevent the exploitation of lunar resources, but does prevent the appropriation of the Moon by individual nations and is generally interpreted as barring countries from claiming ownership of in-situ resources. The Moon Treaty specifically stipulates that exploitation of lunar resources is to be governed by an "international regime", but this treaty has not been ratified by any of the major space-faring nations.
http://dictionary.sensagent.com/Lunar_water/en-en/
13
10
The brain is vital to our existence. It controls our voluntary movements, and it regulates involuntary activities such as breathing and heartbeat. The brain serves as the seat of human consciousness: it stores our memories, enables us to feel emotions, and gives us our personalities. In short, the brain dictates the behaviors that allow us to survive and makes us who we are. Scientists have worked for many years to unravel the complex workings of the brain. Their research efforts have greatly improved our understanding of brain function. The American public has the opportunity to learn of new research findings about the brain regularly through media reports of scientific breakthroughs and discoveries. However, not all of the information we receive is accurate. As a result of misinformation presented by various media, many people maintain misconceptions about the brain and brain function. This problem may be compounded by textbooks for middle school students that present little, if any, scientific information on the brain as the organ that controls human behavior. By providing students with a conceptual framework about the brain, we significantly increase our chances of producing an informed public with the tools needed to correctly interpret brain research findings. The Brain: Our Sense of Self has several objectives. One is to introduce students to the key concept that the sense of self, our sense of identity, is contained within the brain. Through inquiry-based activities, students investigate brain function and the various roles of the brain within the nervous system. A second objective is to allow students to develop the understanding that brain function is not predetermined; the brain can change with learning throughout life. The lessons in this module help students sharpen their skills in observation, critical thinking, experimental design, and data analysis. They also make connections to other disciplines such as English, history, mathematics, and social science. A third objective is to convey to students the purpose of scientific research. Ongoing research affects how we understand the world around us and gives us the foundation for improving choices about our personal health and the health of our community. In this module, students experience how science provides evidence that can be used to understand and treat human disease. Because the mission of the National Institute of Neurological Disorders and Stroke includes helping the public understand the importance of brain and nervous system function to their health, education is an important activity for the Institute. The lessons in this module encourage students to think about the relationships among knowledge, choice, behavior, and human health in this way:
Knowledge (what is known and not known) + Choice = Power
Power + Behavior = Enhanced Human Health
The final objective of this module is to encourage students to think in terms of these relationships now and as they grow older. Middle school life science classes offer an ideal setting for integrating many areas of student interest. In this module, students participate in activities that integrate inquiry, science, human health, mathematics, and science-technology-society relationships. The real-life context of the module’s classroom lessons is engaging for students, and the knowledge gained can be applied immediately to students’ lives. “The hands-on nature of the module was excellent. 
It generated student interest and kept learning fun.”—Field-Test Teacher “The inquiry approach of the module was challenging to the students at the right level—it activated the learning process. All students could successfully participate in all activities.”—Field-Test Teacher “I think that the most valuable aspect of the lessons was that they were related to real-life experiences. Also, since the Web activities were hands-on, they were fun and I learned a lot that would have been hard to understand otherwise.”—Field-Test Student “The lessons were interesting, promoted thinking, and allowed me to learn something I didn’t know before. Overall, the module inspired me to learn more about the brain and how it functions.”—Field-Test Student The Brain: Our Sense of Self meets many of the criteria by which teachers and their programs are assessed. In addition, the module provides a means for professional development. Teachers can engage in new and different teaching practices like those described in this module without completely overhauling their entire program. In Designing Professional Development for Teachers of Science and Mathematics, Susan Loucks-Horsley et al. write that replacement modules such as this one “offer a window through which teachers get a glimpse of what new teaching strategies look like in action.” By experiencing a short-term unit like this one, teachers can “change how they think about teaching and embrace new approaches that stimulate students to problem solve, reason, investigate, and construct their own meaning for the content.” The use of supplements like this module can encourage reflection and discussion and stimulate teachers to improve their practices by focusing on student learning through inquiry. The guide includes a table correlating topics often included in the life science curriculum with the major concepts presented in Lessons 1 through 5. This information is presented to help teachers make decisions about incorporating this material into the curriculum. The topics correlated are:
- Localization of brain function
- General functions of specific brain regions
- Anatomy of the neuron
- Relationship of science, technology, and society
- Organisms sense and respond to environmental stimuli
http://science.education.nih.gov/supplements/nih4/self/guide/introduction.htm
13
11
Albert Einstein died of a ruptured aneurysm in a New Jersey hospital on April 18, 1955. Although he is best remembered for his extraordinary contributions to modern physics, Einstein's life and thought left an impact not only on science, but also on philosophy, visual art, and literature. Of all his works, his theory of relativity had perhaps the farthest-reaching implications for thinkers in all fields. For example, British physicists such as Arthur Eddington interpreted relativity theory as a spiritual, idealistic view of the universe. They claimed that the laws of science have an a priori mental character and exist in a pure spiritual realm. In contrast, Soviet physicists such as V.A. Fock interpreted relativity as evidence for their own Marxist materialist agenda, arguing that science talks about the physical properties of reality as they actually exist and therefore has no idealistic component. Although some philosophers have attributed to Einstein the relativist idea that moral and ethical truth exists only in the point of view of the beholder, Einstein never held such a view and in fact believed just the opposite. In addition to serving as a lightning rod for many different political agendas, Einstein's relativity theory also gave rise to a particular philosophical approach to science called logical positivism. Inspired by Einstein's method of defining concepts in terms of laboratory experiments, the logical positivists held that the only statements that we can know to be true are those that positive experimental evidence can verify. They also emphasized the role of symbolic logic in the formulation of scientific theories. The logical positivist school was an intellectual product of the Vienna Circle, a group of brilliant young intellectuals who gathered in Vienna in the 1920s and early 1930s under the organization of the Viennese physicist and philosopher Moritz Schlick. These thinkers wanted to rid science of all metaphysical speculation, basing it instead on empiricism and analytical statements of logic. Einstein's work also influenced much of European art of the post-World War I years. Cubism, derived from Cezanne's "geometrization" of nature, was a new art form that consisted of breaking the essence of the depicted object into geometrical planes, thereby presenting multiple points of view simultaneously. Founded by Pablo Picasso and Juan Gris, Cubist painting introduced a shifting, or relative point of view, in which a single object is seen from several sides at once. In the later school of high analytic Cubism, the new notions of a four-dimensional space-time led artists to look to the fourth dimension as a higher unity under which various perspectives would join together. In addition, sculptors such as Henri Matisse and Naum Gabo attempted to realize the geometrical ideals of cubist painting. This new type of art used kinetic and dynamic elements to express the relationship between mass, energy, space, and time. Einstein's rejection of an absolute time led to conceptions of time as a dynamic quality not only in visual art, but also in the literature of writers such as William Faulkner, who presents multiple relative perspectives on events which seem to unfold in a subjective, personal time. For instance, Faulkner's novel The Sound and the Fury presents a single story told from the perspectives of four different characters, all of whom have different relationships with time. In poetry as well, Einstein's science achieved an impact. 
The poetic school of objectivism, led by poets such as Archibald MacLeish, William Carlos Williams, and Louis Zukofsky, involved the attempt to incorporate into poetry the ideas that Einstein brought to physics. Objectivist poets equated the relativity of space and time measurements with the relativity of poetic measures, resulting in innovative experiments with verse, structure, and meter. Other poets, including Robert Frost, Ezra Pound, and T.S. Eliot, expressed a more ambivalent attitude towards Einstein's science. For instance, Eliot openly rejected positivism and all doctrines that denied any reality other than that which could be empirically verified. Although these writers often borrowed terms, images, and analogies from Einstein's science, they criticized the larger philosophical implications of his work. Einstein's legacy also sparked a new public perception of the role of the scientist in society. Einstein believed that the scientist has a moral responsibility to humanity. In addition to his scientific publications, he published popular tracts on themes such as religion, human rights, economics, government, nuclear war, and personal development. He was an outspoken supporter of pacifism, internationalism, democracy, and human dignity. He was also a lifelong supporter of Jewish causes, especially cultural Zionism. In all of these capacities, Einstein helped transform the image of the scientist from a highly specialized student of nature to a public personality deeply concerned about the fate of humanity.
http://www.sparknotes.com/biography/einstein/section12.rhtml
13
21
Before the true distance of the Earth from the Sun was known, and what the true distances in the Solar System were, relative distances were measured by the Astronomical Unit. It is still the most meaningful way to understand the scale of the Solar System.
Astronomical Unit: the mean distance of the Earth from the Sun. This distance is defined to be 1 A.U.
In 1772, the German astronomer Johann Bode found an empirical rule, dubbed Bode's Rule, that does a remarkable job of "predicting" planetary positions in the Solar System:
1. Write down a sequence of 4s, one for each planet except Neptune, including one for …
2. Add the sequence 0, 3, 6, 12, 24, ….
3. Divide these numbers by 10.
The result is the relative distance of each object from the Sun, to a surprising degree of accuracy!
What Is A Planet? We all have heard that the Sun has nine planets circling around it, but what exactly is a planet? For example, is Pluto a planet?
- In the 1990s, many "asteroids" or "cometary nuclei" approaching 1/4 the size of Pluto were found.
- Pluto crosses Neptune's orbit, like an asteroid or a comet.
- Pluto is only 1/2 the size of our own Moon; seven planetary satellites are larger than Pluto.
- Pluto has a highly inclined orbit.
Planet: a non-stellar body larger than a certain size (roughly 1,000 to 3,000 km), orbiting around a star as its primary host.
A List Of Important Numbers/Constants:
Gravitational constant: G = 6.67 × 10⁻¹¹ N·m²/kg²
Astronomical Unit: 1 AU = 1.49 × 10¹¹ m
Velocity of light: c = 3.00 × 10⁸ m/s
Mass of Sun: M☉ = 2.00 × 10³⁰ kg
Radius of Sun: R☉ = 6.96 × 10⁸ m
Mass of Earth: M⊕ = 5.98 × 10²⁴ kg
Radius of Earth: R⊕ = 6.38 × 10⁶ m
A Survey Of The Planets: Mercury
Notes: Very Small Magnetic Field, Past Cratering, Inter-Crater Plains, Past …
A Survey Of The Planets: Venus
Notes: Thick Atmosphere, Little/No Magnetic Field, Evidence of Past/Present …
A Survey Of The Planets: Earth
Accompanying Moon data: diameter 3475.6 km (2159.7 miles); mean distance from Earth 384,400 km (238,862 miles); minimum distance from Earth 356,410 km (221,469 miles); maximum distance from Earth 406,697 km (252,717 miles)
A Survey Of The Planets: Mars
Notes: Water In The Past, …
A Survey Of The Planets: Jupiter
Notes: "… Solar System", Fast Rotation
A Survey Of The Planets: Saturn
A Survey Of The Planets: Uranus
A Survey Of The Planets: Neptune
A Survey Of The Planets: Pluto
Notes: Small Size For Location, Highly Inclined Orbit
The Solar System: Minor Bodies
There is a lot of debris in the solar system, including meteoroids, asteroids and comets.
Asteroids: a small planetary body composed mostly of rock or metals; most orbit in the asteroid belt between Mars and Jupiter. All known asteroids have diameters less than 1,000 km. The lower size limit is often taken to be about 10 m. Many are unaltered since their formation and give clues to Solar System formation.
Meteoroids: a small, extraterrestrial, solid body. The term is usually used for objects smaller than a kilometer, and frequently millimeters, in size; these are objects destined to become meteors.
Comets: an ice-rich planetesimal that can emit an observable gaseous halo when its ice is warmed by the Sun. Most comets spend their time in the outer Solar System, making it difficult to distinguish them from normal asteroids.
Comparison of Planet Properties
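Bode's rule as listed above reduces to a few lines of code. In the Python sketch below, the object names and the "actual" mean distances used for comparison are standard reference values added for illustration (they are not part of the original notes), and the conventional slot assignment is used: Neptune is skipped, while Ceres and Pluto each get a slot.

```python
# Bode's rule from the notes above: write down a 4 for each object, add the
# sequence 0, 3, 6, 12, 24, ... (each term after the 3 doubles), then divide
# by 10.  The object names and the "actual" distances are standard reference
# values added here for comparison; following convention, Neptune is skipped
# and slots are given to Ceres and Pluto.
terms = [0, 3]
while len(terms) < 9:
    terms.append(terms[-1] * 2)          # 0, 3, 6, 12, 24, 48, 96, 192, 384

objects   = ["Mercury", "Venus", "Earth", "Mars", "Ceres",
             "Jupiter", "Saturn", "Uranus", "Pluto"]
actual_au = [0.39, 0.72, 1.00, 1.52, 2.77, 5.20, 9.58, 19.2, 39.5]

for name, n, a in zip(objects, terms, actual_au):
    predicted = (4 + n) / 10.0
    print(f"{name:8s}  Bode: {predicted:5.1f} AU   actual: {a:5.1f} AU")
```

Running this shows the rule tracking the measured distances to within a few percent for most slots, with the asteroid belt falling neatly at the 2.8 AU prediction.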
http://earthsci.org/fossils/space/solsys/solsys.html
13
12
According to a paper presented at the 2012 European Planetary Science Congress today, September 25, there is a high probability (under certain scenarios and conditions) that the seeds of life were brought to Earth aboard meteorites, or other pieces of debris originating on other worlds. Over the past few years, large volumes of data have emerged that suggest life did not originate on our planet, but rather was brought here on space rocks, in the form of organic molecules. It could be that these seeds found an appropriate environment to develop once they arrived. The same holds true for Earth's waters, which many now believe were brought here aboard comets. The largest volume of water is thought to have been delivered to Earth during the Late Heavy Bombardment, a period when numerous comets, asteroids and meteorites impacted the inner planets of the solar system. Microorganisms, or at least the building blocks required to create them, may have hitched a ride on these space rocks, embedded inside tiny cracks. Recent studies have demonstrated that lifeforms can endure for prolonged periods in space. The new study was conducted by experts at Princeton University, the University of Arizona, and the Centro de Astrobiología (CAB) in Spain. Details of the work were published in the latest issue of the esteemed journal Astrobiology. It's also possible that life spread from Earth to other planets in the solar system, maybe even beyond, through the same mechanisms. Bacteria-carrying rocks may have been ejected from Earth's surface during a collision, and then spent millions of years flying around before landing on another world. Researchers have also found that it's possible for star systems to exchange rocks as well. Previously, statistical data showed that the chances of this happening were about 1 in a million. The new study indicates that 12 out of every 10,000 rocks ejected from a star system could be captured by its neighbor. “Our work says the opposite of most previous work. It says that lithopanspermia might have been very likely, and it may be the first paper to demonstrate that. If this mechanism is true, it has implications for life in the Universe as a whole. This could have happened anywhere,” says Edward Belbruno. The expert, a mathematician and visiting research collaborator in Princeton's Department of Astrophysical Sciences, was the one who developed these so-called principles of weak transfer. He is the first author of the Astrobiology paper. “The conclusion from our work is that the weak transfer mechanism makes lithopanspermia a viable hypothesis because it would have allowed large quantities of solid material to be exchanged between planetary systems, and involves timescales that could potentially allow the survival of microorganisms embedded in large boulders,” says CAB astronomer and paper coauthor, Amaya Moro-Martín.
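The size of the claimed improvement is easy to make concrete. The two probabilities in the snippet below are taken directly from the figures quoted above; the comparison itself is plain arithmetic.

```python
# Comparing the two capture probabilities quoted above.
old_estimate = 1 / 1_000_000        # previous statistical estimate
new_estimate = 12 / 10_000          # weak-transfer estimate from the new study

print(f"old: {old_estimate:.1e}, new: {new_estimate:.1e}")
print(f"the new mechanism is about {new_estimate / old_estimate:,.0f} times more likely")
```

In other words, the weak-transfer result raises the estimated capture rate by roughly three orders of magnitude.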
http://news.softpedia.com/news/Life-Was-Brought-to-Earth-on-Space-Rocks-294579.shtml
13
13
B. From ________, we can find the size

Molecules of gases, unlike those of solids and liquids, were thought to take up very little of the space the gas occupied, and to zoom about freely all the time except when colliding with each other or with the walls of their container. A set of particles suspended in a fluid is very similar to this in at least two ways. First, although the particles might be in frequent or constant contact with the fluid molecules instead of flying through almost-empty space like gas molecules, they seldom come in contact with each other, so among themselves they do behave as if they were gas molecules. And second, if the fluid is composed of molecules in constant motion, those molecules should constantly bounce the suspended particles around, just as the fluid molecules bounce each other around. This would put the suspended particles in constant random motion themselves, again like the molecules of a gas. Einstein further discovered that, if the suspended particles are big enough to see with a microscope, their random motion should be visible too, as the fluid molecules, with their small invisible motions, bounce them around. In fact, such a visible motion had long been known, and had been explored by the botanist Robert Brown. Various causes for this motion had been supposed, including the hypothesis that molecules exist and constantly move. When Einstein first discovered how suspended particles could make the effects of molecules visible, he didn't know enough about Brownian motion to be sure it was the same thing, but as it turns out, it is. In proving this, Einstein also found his main clue to the size of molecules: how far the suspended particles move should depend on the number of molecules it takes to make one "mole". Each time a fluid molecule bounced into a suspended particle, the particle would be moved a little, so after many bounces the particle might wind up in a quite different place. Einstein found that, if one mole equals so many molecules, the suspended particles would wander, on average, so far in one minute. If a mole only equals one fourth as many molecules, so that each fluid molecule is four times as massive, the fluid molecules would hit hard enough for the suspended particles to wander, overall, twice as far in one minute. The basic relation is:

(typical distance moved by a suspended particle in a given time)² = (some other quantities that we'll look at later) ÷ (number of molecules in one mole)

Our example fits this relation: if a mole only had one fourth as many molecules, suspended particles would only move twice as far: (2)² = 1 ÷ (¼). The relation would also work if it took more molecules instead of fewer to make one mole. If molecules had only one millionth of the mass they actually do, a typical suspended particle would wander only one thousandth the distance it does: (1/1,000)² = 1 ÷ (1,000,000). And if matter could be divided into infinitely small pieces-in other words, if there were no such thing as a smallest-possible unit of matter, so that a "molecule" of fluid had zero mass and it took an infinite number of them to make one mole-the suspended particles would travel zero distance: (0)² = 1 ÷ (infinity). In that case, the suspended particles would not be bounced around at all. To find what a molecule's mass is-i.e., the number of molecules in one mole-we need to know those "other quantities" in the above equation. The reasoning Einstein used to determine them is interesting to follow. 
Einstein showed that the suspended particles, being randomly bounced around by the fluid molecules, would diffuse throughout the fluid in accordance with an equation long known to mathematical physicists. This equation implies a simpler one, which relates the average change in the particles' positions to the time they spend being bounced around:

average of [(change in position along any one direction)²] = 2 × (time) × D,

where D is a quantity that characterizes the diffusion rate, and depends on the nature of the fluid and the particles suspended in it. This equation describes positional change along just one direction; the average square of the change for all the directions of three-dimensional space is three times this amount. It turns out that D is related to the number of molecules in one mole, and to certain other characteristics of the suspension ("other quantities") that are easy to determine by experiment. To find how they are related, Einstein had to consider four different features of the fluid-particle suspension and how they themselves are related to these "other quantities" and to each other. These latter relationships are described by a set of equations that can be combined like pieces of a jigsaw puzzle to find the relationship of D to the size of a mole. The result of this assembly can be expressed like this:

(viscosity of the fluid) × (size of a suspended particle) × π × (particles in one mole) × D = (absolute temperature of the fluid) × R.

Aside from the number π, everything in this equation has an obvious relation to our problem except perhaps for the quantity R. R stands for a feature of dilute gases known from experiment. It's what you get when you multiply the volume of the gas by its pressure, and divide that by the product of its temperature and the number of moles of gas you have. As long as the gas is dilute enough, the number is practically the same no matter what kind of gas you have, or how much there is of it, or what its pressure, volume, and temperature are. The quantity R turns up in this equation, because one of the four features of the "gas" of suspended particles-the one, in fact, that's most directly related to the number of particles in a mole-is proportional to R. To complete our jigsaw puzzle, we only have to combine this equation with the one just before it that involves D. After a bit more algebra, we finally have the relation between how far a suspended particle should move in a given time, and the number of particles in one mole:

average of [(change in position along any one direction)²] = 2 × (time) × (absolute temperature of the fluid) × R ÷ [π × (viscosity of the fluid) × (size of a suspended particle) × (particles in one mole)]

If we consider "average of [(change in position along any one direction)²]" as a more precise way to describe "(typical distance moved by a suspended particle in a given time)²", we see that our last equation is just a more detailed version of the equation at the beginning of this article. The more detailed equation tells us that, to find how many particles there are in one mole of any substance (and thus the mass of one molecule of a substance), we need only measure how far the suspended particles move in a given time, the viscosity and temperature of the suspending fluid, and the size of the suspended particles. If we can make those measurements, we can determine the sizes of molecules even if we can't isolate and weigh them individually. (.....continued)
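The final relation can be exercised numerically. The Python sketch below uses the standard Stokes form of the viscosity relation, in which the diffusion rate satisfies 6π × (viscosity) × (particle radius) × (particles per mole) × D = (temperature) × R, so the numerical factor that the article folds into its loosely defined particle "size" is written out explicitly. All of the measurement values are invented illustration numbers, roughly on the scale of later bead-in-water experiments; they are not data from the article.

```python
# A numerical sketch of the relation assembled above, written with the
# standard Stokes-Einstein form D = R*T / (6*pi*eta*r*N_A); the article
# folds the numerical factor into its loosely defined particle "size".
# All measurement values below are invented illustration numbers, not data
# from the article.
import math

R = 8.314            # gas constant, J/(mol*K)
T = 293.0            # temperature of the fluid, K
eta = 1.0e-3         # viscosity of water, Pa*s
r = 0.5e-6           # radius of a suspended particle, m
t = 60.0             # observation time, s
x_rms = 7.18e-6      # measured root-mean-square displacement along one axis, m

# average[(change in position along one direction)^2] = 2*t*D
# and 6*pi*eta*r*N_A*D = R*T combine to give:
N_A = 2.0 * R * T * t / (6.0 * math.pi * eta * r * x_rms**2)
print(f"particles per mole: about {N_A:.2e}")   # roughly 6e23 with these numbers
```

With micron-sized particles in room-temperature water wandering a few microns per minute, the rearranged relation returns a count of particles per mole of the right order of magnitude, which is exactly the kind of measurement-to-molecule inference the article describes.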
http://www.osti.gov/accomplishments/nuggets/einstein/seeingb.html
13
11
- "Last ice age" redirects here. This term is also used for the last glacial period of the Quaternary glaciation. Quaternary glaciation, also known as the Pleistocene glaciation or the current ice age, is the period from 2.58 Ma (million years ago) to present, in which permanent ice sheets were established in Antarctica and perhaps Greenland, and fluctuating ice sheets occurred elsewhere (for example, the Laurentide ice sheet). The major effects of the ice age are erosion and deposition of material over large parts of the continents, modification of river systems, creation of millions of lakes, changes in sea level, development of pluvial lakes far from the ice margins, isostatic adjustment of the crust, and abnormal winds. It affects oceans, flooding, and biological communities. The ice sheets themselves, by raising the albedo, effect a major feedback on climate cooling. During the Quaternary Period, the total volume of land ice, sea level, and global temperature has fluctuated initially on 41,000- and more recently on 100,000-year time scales, as evidenced most clearly by ice cores for the past 800,000 years and marine sediment cores for the earlier period. Over the past 740,000 years there have been eight glacial cycles. The entire Quaternary Period, starting 2.58 Ma, is referred to as an ice age because at least one permanent large ice sheet — Antarctica — has existed continuously. There is uncertainty over how much of Greenland was covered by ice during the previous and earlier interglacials. During the colder episodes — referred to as glacial periods — large ice sheets also existed in Europe, North America, and Siberia. The shorter and warmer intervals between glacials are referred to as interglacials. Currently, the earth is in an interglacial period, which marked the beginning of the Holocene epoch. The current interglacial began between 10,000 and 15,000 years ago, which caused the ice sheets from the last glacial period to begin to disappear. Remnants of these last glaciers, now occupying about 10% of the world's land surface, still exist in Greenland and Antarctica. Global warming has exacerbated the retreat of these glaciers. During the glacial periods, the present (i.e. interglacial) hydrologic system was completely interrupted throughout large areas of the world and was considerably modified in others. Due to the volume of ice on land, sea level was approximately 120 meters lower than present. The evidence of such an event in the recent past is robust. Over the last century, extensive field observations have provided evidence that continental glaciers covered large parts of Europe, North America, and Siberia. Maps of glacial features were compiled after many years of fieldwork by hundreds of geologists who mapped the location and orientation of drumlins, eskers, moraines, striations, and glacial stream channels. These maps revealed the extent of the ice sheets, the direction of flow, and the locations of systems of meltwater channels, and they allowed scientists to decipher a history of multiple advances and retreats of the ice. Even before the theory of worldwide glaciation was generally accepted, many observers recognized that more than a single advance and retreat of the ice had occurred. Extensive evidence now shows that a number of periods of growth and retreat of continental glaciers occurred during the ice age, called glacials and interglacials. 
The interglacial periods of warm climate are represented by buried soil profiles, peat beds, and lake and stream deposits separating the unsorted, unstratified deposits of glacial debris. No completely satisfactory theory has been proposed to account for Earth's history of glaciation. The cause of glaciation may be related to several simultaneously occurring factors, such as astronomical cycles, atmospheric composition, plate tectonics, and ocean currents. Astronomical cycles The role of Earth's orbital changes in controlling climate was first advanced by James Croll in the late 19th century. Later, Milutin Milanković, a Serbian geophysicist, elaborated on the theory and calculated these irregularities in Earth's orbit could cause the climatic cycles known as Milankovitch cycles. They are the result of the additive behavior of several types of cyclical changes in Earth's orbital properties. Changes in the orbital eccentricity of Earth occur on a cycle of about 100,000 years. The inclination, or tilt, of Earth's axis varies periodically between 22° and 24.5°. (The tilt of Earth's axis is responsible for the seasons; the greater the tilt, the greater the contrast between summer and winter temperatures.) Changes in the tilt occur in a cycle 41,000 years long. Precession of the equinoxes, or wobbles on Earth's spin axis, complete every 21,700 years. According to the Milankovitch theory, these factors cause a periodic cooling of Earth, with the coldest part in the cycle occurring about every 40,000 years. The main effect of the Milankovitch cycles is to change the contrast between the seasons, not the amount of solar heat Earth receives. These cycles within cycles predict that during maximum glacial advances, winter and summer temperatures are lower. The result is less ice melting than accumulating, and glaciers build up. Milankovitch worked out the ideas of climatic cycles in the 1920s and 1930s, but it was not until the 1970s that sufficiently long and detailed chronology of the Quaternary temperature changes was worked out to test the theory adequately. Studies of deep-sea cores, and the fossils contained in them indicate that the fluctuation of climate during the last few hundred thousand years is remarkably close to that predicted by Milankovitch. A problem with the theory is that the astronomical cycles have been in existence for billions of years, but glaciation is a rare occurrence. Astronomical cycles correlate perfectly with glacial and interglacial periods, and their transitions, inside an ice age. Other factors such as the position of continents and the effects this has on the earth's oceanic currents, or long term fluctuations inside the core of the sun must also be involved that caused Earth's temperature to drop below a critical threshold and thus initiate the ice age in the first place. Once that occurs, Milankovitch cycles will act to force the planet in and out of glacial periods. Atmospheric composition One theory holds that decreases in atmospheric CO2, an important greenhouse gas, started the long-term cooling trend that eventually led to glaciation. Recent studies of the CO2 content of gas bubbles preserved in the Greenland ice cores lend support to this idea. The geochemical cycle of carbon indicates more than a 10-fold decrease in atmospheric CO2 since the middle of the Mesozoic Era. However, it is unclear what caused the decline in CO2 levels, and whether this decline is the cause of global cooling or if it is the result. 
CO2 levels also play an important role in the transitions between interglacials and glacials. High CO2 contents correspond to warm interglacial periods, and low CO2 to glacial periods. However, studies indicate that CO2 may not be the primary cause of the interglacial-glacial transitions, but instead acts as a feedback. The explanation for this observed CO2 variation "remains a difficult attribution problem." Plate tectonics and ocean currents An important component in the long-term temperature drop may be related to the positions of the continents, relative to the poles (but it cannot explain the rapid retreat and advances of glaciers). This relation can control the circulation of the oceans and the atmosphere, affecting how ocean currents carry heat to high latitude. Throughout most of the geologic time, the North Pole appears to have been in a broad, open ocean that allowed major ocean currents to move unabated. Equatorial waters flowed into the polar regions, warming them with water from the more temperate latitudes. This unrestricted circulation produced mild, uniform climates that persisted throughout most of geologic time. Throughout the Cenozoic Era, the large North American and South American continental plates moved westward from the Eurasian plate. This drift culminated in the development of the Atlantic Ocean, trending north-south, with the North Pole in the small, nearly landlocked basin of the Arctic Ocean. The Isthmus of Panama developed at a convergent plate margin about 3 million years ago, and further separated oceanic circulation and created the Pacific and Atlantic oceans. The presence of so much ice upon the continents had a profound effect upon almost every aspect of Earth's hydrologic system. The most obvious effects are the spectacular mountain scenery and other continental landscapes fashioned both by glacial erosion and deposition instead of running water. Entirely new landscapes covering millions of square kilometers were formed in a relatively short period of geologic time. In addition, the vast bodies of glacial ice affected the Earth well beyond the glacier margins. Directly or indirectly, the effects of glaciation were felt in every part of the world. The Quaternary glaciation created more lakes than all other geologic processes combined. The reason is that a continental glacier completely disrupts the preglacial drainage system. The surface over which the glacier moved was scoured and eroded by the ice, leaving myriad closed, undrained depressions in the bedrock. These depressions filled with water and became lakes. Very large lakes were created along the glacial margins. The ice on both North America and Europe was about 3,000 m (9,800 ft) thick near the centers of maximum accumulation, but it tapered toward the glacier margins. Ice weight caused crustal subsidence which was greatest beneath the thickest accumulation of ice. As the ice melted, rebound of the crust lagged behind, producing a regional slope toward the ice. This slope formed basins that have lasted for thousands of years. These basins became lakes or were invaded by the ocean. The Great Lakes and the Baltic Sea of northern Europe were formed primarily in this way. Pluvial lakes The climatic conditions that cause glaciation had an indirect effect on arid and semiarid regions far removed from the large ice sheets. The increased precipitation that fed the glaciers also increased the runoff of major rivers and intermittent streams, resulting in the growth and development of large pluvial lakes. 
Most pluvial lakes developed in relatively arid regions where there typically was insufficient rain to establish a drainage system to the sea. Instead, stream runoff in those areas flowed into closed basins and formed playa lakes. With increased rainfall, the playa lakes enlarged and overflowed. Pluvial lakes were most extensive during glacial periods. During interglacial stages, when less precipitation fell, the pluvial lakes shrank to form small salt flats. Isostatic adjustment Major isostatic adjustments of the lithosphere during the Quaternary glaciation were caused by the weight of the ice, which depressed the continents. In Canada, a large area around Hudson Bay was depressed below sea level, as was the area in Europe around the Baltic Sea. The land has been rebounding from these depressions since the ice melted. Some of these isostatic movements triggered large earthquakes in Scandinavia about 9,000 years ago. These earthquakes are unique in that they are not associated with plate tectonics. Studies have shown that the uplift has taken place in two distinct stages. The initial uplift following deglaciation was rapid (called "elastic"), and took place as the ice was being unloaded. After this "elastic" phase, uplift proceed by "slow viscous flow" so the rate decreased exponentially after that. Today, typical uplift rates are of the order of 1 cm per year or less. In northern Europe, this is clearly shown by the GPS data obtained by the BIFROST GPS network. Studies suggest that rebound will continue for about at least another 10,000 years. The total uplift from the end of deglaciation depends on the local ice load and could be several hundred meters near the center of rebound. The presence of ice over so much of the continents greatly modified patterns of atmospheric circulation. Winds near the glacial margins were strong and persistent because of the abundance of dense, cold air coming off the glacier fields. These winds picked up and transported large quantities of loose, fine-grained sediment brought down by the glaciers. This dust accumulated as loess (wind-blown silt), forming irregular blankets over much of the Missouri River valley, central Europe, and northern China. Sand dunes were much more widespread and active in many areas during the early Quaternary period. A good example is the Sand Hills region in Nebraska, USA, which covers an area of about 60,000 km2 (23,166 sq mi). This region was a large, active dune field during the Pleistocene epoch, but today is largely stabilized by grass cover. Ocean currents Thick glaciers were heavy enough to reach the sea bottom in several important areas, thus blocking the passage of ocean water and thereby affecting ocean currents. In addition to direct effects, this caused feedback effects as ocean currents contribute to global heat transfer. Records of prior glaciation Glaciation has been a rare event in Earth's history, but there is evidence of widespread glaciation during the late Paleozoic Era (200 to 300 Ma) and during late Precambrian (i.e. in the Neoproterozoic Era, 600 to 800 Ma). Before the current ice age, which began 2 to 3 Ma, Earth's climate was typically mild and uniform for long periods of time. This climatic history is implied by the types of fossil plants and animals and by the characteristics of sediments preserved in the stratigraphic record. There are, however, widespread glacial deposits, recording several major periods of ancient glaciation in various parts of the geologic record. 
Such evidence suggests major periods of glaciation prior to the current Quaternary glaciation. The best-documented record of pre-Quaternary glaciation, called the Karoo Ice Age, is found in the late Paleozoic rocks in South Africa, India, South America, Antarctica, and Australia. Exposures of ancient glacial deposits are numerous in these areas. Deposits of even older glacial sediment exist on every continent except South America. These indicate that two other periods of widespread glaciation occurred during the late Precambrian, producing the Snowball Earth during the Cryogenian Period.
Next glacial period
In popular culture, there is often reference to "the next ice age". Technically, since the Earth is already in an ice age at present, this usually refers to the next glacial period (because the Earth is currently in an interglacial period). The next glacial period seemed to be approaching rapidly when paleoclimatologists met in 1972 to discuss this issue, during a period of so-called global cooling. The previous interglacial periods seemed to have lasted about 10,000 years each. Assuming that the present interglacial period would be just as long, they concluded, "it is likely that the present-day warm epoch will terminate relatively soon if man does not intervene." Since 1972, our understanding of the climate system has improved. It is known that not all interglacial periods are of the same length and that solar heating varies in a non-linear fashion forced by the Milankovitch orbital cycles (see Causes section above). At the same time, it is also known that greenhouse gases are increasing in concentration with each passing year. Based on the variations in solar heating and on the amount of CO2 in the atmosphere, some calculations of future temperatures have been made. According to these estimates, the interglacial period the Earth is in now may persist for another 50,000 years if CO2 levels increase to 750 parts per million (ppm) (the present atmospheric concentration of CO2 is about 393 ppm by volume, but is rising rapidly as humans continue to burn fossil fuels). If CO2 drops instead to 210 ppm, then the next glacial period may only be 15,000 years away. Moreover, studies of seafloor sediments and ice cores from glaciers around the world, notably Greenland, indicate that climatic change is not smooth. Studies of isotopic composition of the ice cores indicate the change from warm to frigid temperatures can occur in a decade or two. In addition, the ice cores show that an ice age is not uniformly cold, nor are interglacial periods uniformly warm (see also stadial). Analysis of ice cores of the entire thickness of the Greenland glacier shows that climate over the last 250,000 years has changed frequently and abruptly. The present interglacial period (the last 10,000 to 15,000 years) has been fairly stable and warm, but the previous one was interrupted by numerous frigid spells lasting hundreds of years. If the previous period was more typical than the present one, the period of stable climate in which humans flourished—inventing agriculture and thus civilization—may have been possible only because of a highly unusual period of stable temperature.
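As a closing illustration of the orbital cycles described in the causes discussion above, the toy Python sketch below sums three sinusoids with the quoted periods (100,000 years for eccentricity, 41,000 years for tilt, and 21,700 years for precession) to show their additive behavior. The equal amplitudes and the sinusoidal shapes are arbitrary simplifications for illustration only; this is not an insolation or climate model.

```python
# A toy illustration of the "additive behavior" of the three orbital cycles
# described above.  The periods (100,000, 41,000 and 21,700 years) come from
# the text; the equal amplitudes and sinusoidal shapes are arbitrary
# simplifications for illustration, not an insolation model.
import math

PERIODS_YR = (100_000, 41_000, 21_700)   # eccentricity, tilt, precession

def toy_forcing(t_years):
    """Sum of three unit-amplitude sinusoids with the Milankovitch periods."""
    return sum(math.sin(2 * math.pi * t_years / p) for p in PERIODS_YR)

# Print the combined signal every 10,000 years over 400,000 years.
for t in range(0, 400_001, 10_000):
    bar = "#" * int(round((toy_forcing(t) + 3) * 5))   # crude text plot
    print(f"{t:>7} yr  {toy_forcing(t):+.2f}  {bar}")
```

Even this crude sum shows how the superposition of cycles produces irregular peaks and troughs rather than a single simple rhythm, which is the qualitative point made in the discussion of Milankovitch cycles.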
http://en.wikipedia.org/wiki/Last_Ice_Age
13
11
According to new calculations by NASA experts, the Voyager 1 space probe is farther away from the edge of the solar system than initially calculated. The spacecraft's main mission, more than three decades after launch, is to reach interstellar space. In order for it to do that, it first needs to exit the solar system. Since there is no clearly defined boundary, it is very difficult to determine when this will happen. The edge is loosely defined as the location where the influence of solar winds is defeated by that of interstellar space. The transition from one to the other is very, very smooth, explain mission controllers at the NASA Jet Propulsion Laboratory (JPL), in Pasadena, California. The Lab has been managing both Voyager 1 and 2 for more than 35 years. Yesterday, the Voyager 1 mission turned 35, and JPL scientists compiled a new estimate of when the spacecraft would exit the solar system. Surprisingly, they learned that it has a rather long road ahead, definitely longer than what the team believed until now. They calculated the heliopause – the theoretical boundary between the solar system and interstellar space – as a waypoint, and used data from Voyager 1 to determine that it has not yet reached this location. Previously, they thought that the probe was either on its way out or well into the heliopause. This boundary is the outermost layer of an area where solar influences begin to be counterbalanced by those of interstellar space. This region is called the heliosheath, and both Voyagers have been flying through it for several years. "The implication is that the flow of solar wind plasma in the heliosheath is more complex than we had expected," Johns Hopkins University space physicist Robert Decker explains. He is also the lead author of a new study detailing the findings, published in the September 6 issue of the top journal Nature. At this point, researchers are unsure as to exactly how large the heliosheath is, or when the heliopause will be reached. According to Decker, experts are still debating this, and apparently, a clear answer is not yet in sight. "Based on the changes we have seen in the Voyager 1 data during the past year, I would expect that Voyager 1 will cross the heliopause within one year," the expert concludes.
http://news.softpedia.com/news/Voyager-1-Is-Far-From-Solar-System-Edge-290487.shtml
13
25
Stellar parallax is the effect of parallax on distant stars in astronomy. It is parallax on an interstellar scale, and it can be used to determine the distance from Earth to another star directly with accurate astrometry. It was the subject of much debate in astronomy for hundreds of years, but the measurement was so difficult that it was only achieved for a few of the nearest stars in the early 19th century. Even in the 21st century, stars with parallax measurements are relatively close on a galactic scale, as most distance measurements are calculated by red-shift or other methods. The parallax is usually created by the different orbital positions of the Earth, which causes nearby stars to appear to move relative to more distant stars. By observing parallax, measuring angles and using geometry, one can determine the distance to various objects in space, typically stars, although other objects in space could be used. Because other stars are far away, the angle to be measured is small, and the skinny triangle approximation can be applied: the distance to an object (measured in parsecs) is the reciprocal of the parallax (measured in arcseconds), d = 1/p. For example, the distance to Proxima Centauri is 1/0.7687 = 1.3009 parsecs (4.243 ly). The first successful measurement of stellar parallax was made by Friedrich Bessel in 1838 for the star 61 Cygni using a Fraunhofer heliometer at Königsberg Observatory. Early theory and attempts Stellar parallax is so small that it was unobservable until the 19th century, and its apparent absence was used as a scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons the gigantic distances involved seemed entirely implausible: it was one of Tycho Brahe's principal objections to Copernican heliocentrism that in order for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere (the fixed stars). James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope, but he instead discovered the aberration of light and the nutation of the Earth's axis, and catalogued 3,222 stars. 19th and 20th centuries Stellar parallax is most often measured using annual parallax, defined as the difference in position of a star as seen from the Earth and Sun, i.e. the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Because the measurement is so difficult, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer. Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues. 
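The reciprocal relation between parallax and distance described above is easy to check numerically. The following sketch converts a parallax in arcseconds to a distance in parsecs and light-years; the Proxima Centauri value is the one quoted in this article, while the 61 Cygni figure is an assumed, approximate value added only for illustration.

```python
# Distance from stellar parallax: d [parsecs] = 1 / p [arcseconds].
# One parsec is approximately 3.26156 light-years.

LY_PER_PARSEC = 3.26156

def parallax_to_distance(parallax_arcsec):
    """Return (distance in parsecs, distance in light-years) for a given parallax."""
    if parallax_arcsec <= 0:
        raise ValueError("Parallax must be positive to give a finite distance.")
    parsecs = 1.0 / parallax_arcsec
    return parsecs, parsecs * LY_PER_PARSEC

# Proxima Centauri's parallax (0.7687 arcsec) is quoted in the text;
# the 61 Cygni value below is an assumed, approximate figure.
for name, p in [("Proxima Centauri", 0.7687), ("61 Cygni (approx.)", 0.286)]:
    pc, ly = parallax_to_distance(p)
    print(f"{name}: parallax {p}\" -> {pc:.4f} pc ({ly:.3f} ly)")
```

Running this reproduces the 1.3009 parsec (4.243 ly) figure for Proxima Centauri given in the text.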
In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. Stellar parallax remains the standard for calibrating other measurement methods (see Cosmic distance ladder). Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets. The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and thus the star with the largest parallax), Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. Space astrometry for parallax In 1989, the satellite Hipparcos was launched primarily for obtaining parallaxes and proper motions of nearby stars, increasing the reach of the method tenfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The European Space Agency's Gaia mission, due to launch in 2013, will be able to measure parallax angles to an accuracy of 10 microarcseconds, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. Other baselines The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 A.U. per year, while for halo stars the baseline is 40 A.U. per year. After several decades, the baseline can be orders of magnitude greater than the Earth-Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the precision is inversely proportional to the square root of the sample size.
http://en.wikipedia.org/wiki/Stellar_parallax
13
12
Type: unsorted associative array. Time complexity in big O notation: Search is O(1 + n/k) on average and O(n) in the worst case; Delete is O(1 + n/k) on average and O(n) in the worst case. A hash table is one type of tool for sorting and storing information. In computer science, these tools for keeping track of information, or data, are called data structures. A hash table is a data structure that uses a hash function to keep track of where data is put. Each piece of information to be stored has a name, which is called a key. For example, a key might be a person's name. Each name is matched up to one piece of data called a value, like the person's telephone number. The idea behind a hash table is to figure out which box to put data in by using only its name. This means that, no matter how many boxes are filled up, you can always find information quickly if you have its name. The hash table uses a hash function to figure out which numbered box to put data in from its name. The hash function reads a name and gives back a number. A good hash table will always find information at the same speed, no matter how much data is put in. A lot of hash tables also let the user put key/value pairs (a name and its data) in and take them out at the same speed. Because of this, hash tables can often find information faster than other tools, such as search trees or other table lookup structures. As a result, they are used in many kinds of computer software. They are used most for associative arrays, databases, caches, and sets.
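To make the "boxes" idea concrete, here is a minimal hash-table sketch in Python. It is illustrative only: real implementations such as Python's built-in dict handle resizing and collisions far more carefully, and the fixed bucket count and separate-chaining approach here are choices made just for this example.

```python
class SimpleHashTable:
    """A toy hash table using separate chaining: each bucket holds (key, value) pairs."""

    def __init__(self, num_buckets=8):
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # The hash function turns the key (e.g. a name) into a bucket number.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite its value
                bucket[i] = (key, value)
                return
        bucket.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        raise KeyError(key)

phones = SimpleHashTable()
phones.put("Ada", "555-0101")
phones.put("Grace", "555-0199")
print(phones.get("Ada"))   # -> 555-0101
```

With k buckets and n stored items, a lookup scans only one bucket of roughly n/k entries, which is where the O(1 + n/k) average cost quoted above comes from.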
http://simple.wikipedia.org/wiki/Hash_table
13
13
Locked inside the littlest objects of the solar system—asteroids, comets, and meteorites—is a secret history. These small fry retain many characteristics they had when they formed 4.5 billion years ago. A series of studies in 2012 pried from them telling new clues about how Earth and the other major planets came to be. One insight came from NASA's Dawn spacecraft, which circled the asteroid Vesta in 2011 and 2012. Over the past year, using Dawn's measurements, astronomers determined that Vesta has an iron core. That bolstered the theory that Earth and the other inner planets swallowed an earlier set of small, iron-cored planets. Vesta is a survivor of this original solar system. Astrophysicist Alan Boss at the Carnegie Institution and his colleagues pushed further into the past by analyzing small particles inside primitive meteorites, which preceded the first mini-planets. They determined that the particles contained products of the decay of radioactive iron-60. The iron-60 could have originated in a nearby supernova explosion—but how, Boss wondered, did it end up in the meteorites? To investigate, he ran a 3-D computer model, and in September he published his results: A local supernova could have sprayed radioactive elements into the dusty gas cloud from which our solar system formed, and then triggered a shock wave that caused the cloud to begin collapsing. In Boss's simulation, turbulent flows within the shock wave injected clumps of radioactive iron into the cloud, matching the observed properties of the meteorites. Boss's computer simulations also helped solve the mystery of the comet Wild-2. The comet is icy, but it contains particles that appear to have formed in a hot zone. Boss built a 3-D model that could account for the seeming paradox. It showed that the young sun was surrounded by a disk of gas and dust so agitated that particles in the hot, inner regions might be flung out to the cold, distant zone where comets formed. This turmoil also helped keep gas giants like Jupiter far from the sun, leaving space for rocky planets to form. Without such violent mixing, Earth might not have come to exist.
http://discovermagazine.com/2013/jan-feb/25-earths-explosive-origins-revealed
13
13
Designed to observe electric fields, a spacecraft has inadvertently picked up signs of Earth's cold plasma layer. The new findings suggest that about two lbs. (1 kilogram) of cold plasma escape from Earth's atmosphere every second. As physicists further map cold plasma around Earth, they could discover more about how it reacts during solar storms and other events, deepening our understanding of space weather. Cold, electrically charged particles have long been suspected to exist tens of thousands of miles above the Earth's surface, and now scientists have detected such ions there for the first time. And they are significantly more abundant at those heights than previously imagined. Cold is, of course, a relative term. Although these low-energy ions are 1,000 times cooler than what researchers might consider hot plasma, these particles still have an energy that would correspond to about 1 million degrees Fahrenheit (500,000 degrees Celsius). But because the density of the "cold" ions in space is so low, satellites and spacecraft can orbit through them without getting destroyed. Scientists had detected the ions at altitudes of about 60 miles (100 kilometers), but for decades, researchers wanted to look for them much higher, between 12,400 and 60,000 miles (20,000 and 100,000 km). Knowing how many cold ions dwell up there could help better understand how our planet interacts with storms of charged particles from the sun — like the one that slammed into the planet on Jan. 24 — that create auroras, damage satellites and sometimes wreak havoc with power grids on Earth. However, detecting cold plasma at those high altitudes has proven difficult. Spacecraft that far up accumulate an electrical charge, due to sunlight that makes them repel the cold ions. The breakthrough came with one of the European Space Agency's four CLUSTER spacecraft. These are equipped with a detector composed of thin wire arms that measure the electric field between them as the satellite rotates. "It is surprising we found the cold ions at all with our instrument," space scientist Mats André, at the Swedish Institute of Space Physics in Uppsala, told OurAmazingPlanet. "It was not at all designed to do this. It was designed to observe electric fields." 'Ugly' electrical fields Two mysterious trends appeared when the scientists analyzed data from these detectors — strong electric fields turned up in unexpected regions of space, and as the spacecraft rotated, the measurements of the electrical fields did not fluctuate in the smoothly changing manner that investigators expected. "To a scientist, it looked pretty ugly," André said. "We tried to figure out what was wrong with the instrument. Then we realized there's nothing wrong with the instrument." Their findings suggest that cold plasma was influencing electrical fields around the satellite. Once the scientists understood that, they could measure how much of the once-hidden ions there were. "The more you look for low-energy ions, the more you find," André said. "We didn't know how much was out there. It's more than even I thought." Although the concentration of the previously hidden cold ions varies, about 50 to 70 percent of the time the researchers find they make up most of the mass of high-altitude zones. These previously elusive low-energy ions were detected even at altitudes of about 60,000 miles (100,000 km), about a third of the distance to the moon. 
Finding so many relatively cool ions in those regions is surprising, because the solar wind blasts Earth's high altitudes. "It is surprising that there were so many cold ions," André said. "There have been hints for a long time, and with previous spacecraft, but I do not think anyone, not me, thought this cold, hidden population could dominate so-large volumes, so-large fractions of the time." Physicists have struggled to accurately determine how many low-energy ions are leaving the planet. The new findings suggest that about two lbs. (1 kilogram) of cold plasma escape from Earth's atmosphere every second. Knowing that rate of loss for Earth might help scientists better figure out what became of the atmosphere of Mars, which is thought to once have been denser, and more similar to Earth's. The new cold plasma results might also help researchers explain atmospheric traits of other planets and moons, including alien worlds or exoplanets, André said. "If someone is living on an exoplanet, they probably want an atmosphere that is not blowing away," André said. Moreover, as physicists further map cold plasma around Earth, they could discover more about how it reacts during solar storms and other events, deepening our understanding of space weather. André compared the swaths of low-energy ions to a low-pressure area in our familiar, down-to-Earth weather. "You may want to know where the low-pressure area is, to predict a storm," he said. André and his colleague Christopher Cully detailed their findings Dec. 23 in the journal Geophysical Research Letters (http://www.agu.org/pubs/crossref/pip/2011GL050242.shtml).
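To put the quoted loss rate in perspective, a back-of-the-envelope calculation converts it to a yearly total. This is only a sketch; the 1 kilogram per second figure is the single input taken from the article.

```python
# Cold-plasma escape rate quoted in the article: about 1 kilogram per second.
rate_kg_per_s = 1.0

seconds_per_year = 365.25 * 24 * 3600            # about 3.16e7 seconds
loss_per_year_kg = rate_kg_per_s * seconds_per_year

print(f"{loss_per_year_kg:.2e} kg per year")     # roughly 3.2e7 kg, i.e. ~32,000 metric tons
```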
http://news.discovery.com/earth/earth-cold-plasma-layer-found-120127.htm
13
16
What is a Codec? and How Do they Work? A codec refers to an algorithm or a program that is quite often embedded in a piece of hardware like an IP phone, an ATA, etc. In the case of VOIP, a codec is used for converting voice signals into digital data that is transmitted over any network or through the internet when a VOIP call is made. The term codec is actually a combination of two words: compressor/decompressor, or coder/decoder. Codecs are essentially used for encoding/decoding, compression/decompression, or encryption/decryption. Let us now understand how this works. Knowing about Encoding/Decoding Normally, when we speak over a regular PSTN phone, our voice is transmitted in analogue form over the phone network. However, when you are talking over VOIP, the voice gets converted into digital signals. It is this conversion that is known as encoding, and only a codec can achieve it. Once the encoding is done, the data gets transmitted to its destination, where it is decoded back into analogue form so that the person receiving the call is able to hear and understand it clearly. The Compression Stage With bandwidth being scarce, most companies are looking at options to send more data at a time and enhance overall performance. One way of doing this is by making sure that the data you send is lighter. This is where a codec is used to compress the digital data and make it less bulky. The compression process is quite complex: large amounts of data are packed into a smaller number of digital bits. Normally, during the compression stage, the digital data is restricted to a packet or structure that is suited to the compression algorithm. This compressed data is transmitted through the network, and upon reaching its intended destination the data is decompressed to its original form so that it can be decoded. However, in most instances you may not need to decompress the data back to its original state since it is already available in a usable state. Now, let us see how the codec works when it is used for encryption. If you want to achieve the highest possible security for your 0300 numbers or 0800 numbers or 0845 numbers or any other freephone numbers, encryption is probably the best tool available today. In the encryption process the data is converted into an indecipherable state so that even if it is intercepted by some unauthorized elements, what you send remains confidential. Only upon reaching the intended destination can this data be decrypted into its original form. Finally, a codec is an excellent option to transmit your data in a safe and secure manner.
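The encode/compress/encrypt stages described above can be sketched in a few lines of Python. This is a toy illustration rather than a real voice codec: the "encoding" is a crude 8-bit quantisation of analogue samples, the compression uses the standard-library zlib module, and the "encryption" is a simple XOR with a key byte, shown only to illustrate the idea (it is not secure).

```python
import zlib

def encode(samples, levels=256):
    """Quantise analogue samples in the range [-1.0, 1.0] into 8-bit digital values."""
    return bytes(int((s + 1.0) / 2.0 * (levels - 1)) for s in samples)

def decode(data, levels=256):
    """Convert 8-bit values back into approximate analogue samples."""
    return [b / (levels - 1) * 2.0 - 1.0 for b in data]

def xor_crypt(data, key=0x5A):
    """Toy 'encryption': XOR every byte with a key. Applying it twice restores the data."""
    return bytes(b ^ key for b in data)

analogue = [0.0, 0.25, 0.5, -0.5, -1.0, 1.0]            # pretend microphone samples
digital = encode(analogue)                              # encoding stage
packet = xor_crypt(zlib.compress(digital))              # compression, then encryption
restored = decode(zlib.decompress(xor_crypt(packet)))   # reverse the pipeline at the far end
print([round(x, 2) for x in restored])
```

The printed values come back very close to the original samples, which mirrors the point made above: the receiving end reverses each stage to recover a usable signal.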
http://www.numberstore.com/What-is-a-Codec-and-How-Do-they-Work.html
13
13
Our solar system may have been created in a huge mixing process much bigger than previously imagined, according to research published today. New analysis suggests a gigantic swirling disk of dust and gas that formed four and a half billion years ago later became our planets, asteroids and comets. The huge dust cloud was billions of kilometres across. The findings, published in the journal Science, come from material collected by NASA's Stardust spacecraft from a comet called Wild 2. Because comets are probably among the oldest large objects in the solar system, researchers believe the dust can provide insights into how Earth and other planets were formed. Scientists from Imperial College London and the Natural History Museum have been given a rare opportunity to analyse the comet dust. 'This is the first time scientists can work in the laboratory to study material undoubtedly from a comet, as it was taken directly from source,' said Anton Kearsley, X-ray Microanalyst at the Museum. 'The mission to secure material from the comet was remarkable. It flew further than any other return mission, 4.6 billion kilometres, meeting the comet between the orbit of Mars and the main asteroid belt, and then safely delivered samples to Earth.' Gigantic mixing before the planets formed The team found the comet dust is made up of many different mineral compositions rather than a single dominant one. This implies the dust was formed in many different environments before coming together to make the comet, indicating a great deal of mixing in the early solar system prior to the formation of planets. Particularly significant was the discovery of calcium- and aluminium-rich inclusions, which are among the oldest solids in the solar system. They are thought to have formed close to the young sun. This suggests components of the comet came from all over the early solar system. Some dust was formed close to the sun, while other material came from the asteroid belt between Mars and Jupiter. Since Wild 2 formed in the outer solar system, this means some of its composite material has travelled huge distances. Although scientists now know much more about Wild 2 and its composition, the research raises new questions. Is this a typical comet and have other comets had very different histories? Research on comet dust has only just begun and the examination of these samples is truly the new frontier of planetary science. Further research at the Museum The Museum and partner organisations have been awarded a €2.6 million grant by the European Commission to bring together a multi-disciplinary team of European scientists for the first time. The project, called Origins, seeks to better understand the origins of our planetary system and those beyond it. Sara Russell, Head of Meteoritics and Cosmic Mineralogy at the Museum, is spearheading the effort, which will start next month. 'The quest for knowledge about the origins of planets and stars has always been a central preoccupation of humankind,' said Sara. 'This project goes someway to satisfying this quest and will enrich the cosmochemical research community within Europe.'
http://www.nhm.ac.uk/about-us/news/2006/december/news_10156.html
13
23
Graph a Sine Function Using Amplitude The sine function and any of its variations have two important characteristics: the amplitude and period of the curve. You can determine these characteristics by looking at either the graph of the function or its equation. The amplitude of the sine function is the distance from the middle value or line running through the graph up to the highest point. In other words, the amplitude is half the distance from the lowest value to the highest value. In the sine and cosine equations, the amplitude is the coefficient (multiplier) of the sine or cosine. For example, the amplitude of y = sin x is 1. To change the amplitude, multiply the sine function by a number. Take a look at the preceding figure, which shows the graphs of y = sin x and y = 3 sin x. As you can see, multiplying by a number greater than 1 makes the graph extend higher and lower. The amplitude of y = 3sin x is 3. Conversely, multiplying by a number smaller than 1 (but bigger than 0) makes the graph shrink in value — it doesn't go up or down as far.
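A short numerical check of the amplitude idea, assuming NumPy is available: sample y = sin x and its multiples over one period and measure half the distance from the lowest value to the highest value, exactly as defined above.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)

for coeff in (1, 3, 0.5):
    y = coeff * np.sin(x)
    amplitude = (y.max() - y.min()) / 2   # half the distance from lowest to highest value
    print(f"y = {coeff} sin x  ->  amplitude is about {amplitude:.2f}")
```

The measured amplitudes come out as roughly 1, 3, and 0.5, matching the coefficient of the sine in each equation.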
http://www.dummies.com/how-to/content/graph-a-sine-function-using-amplitude.navId-420747.html
13
12
All stars follow the same basic series of steps in their lives: Gas Cloud Main Sequence Red Giant (Planetary Nebula or Supernova) Remnant. How long a star lasts in each stage, whether a planetary nebula forms or a spectacular supernova occurs, and what type of remnant will form depends on the initial mass of the star. A giant molecular cloud is a large, dense gas cloud (with dust) that is cold enough for molecules to form. Thousands of giant molecular clouds exist in the disk part of our galaxy. Each giant molecular cloud has 100,000's to a few million solar masses of material. One nearby example is the Orion Molecular Cloud Complex that stretches from the belt of the Orion constellation to his sword of which the Orion Nebula is a part. The Orion Complex is about 1340 light years away, several hundred light years across, and has enough material to form many tens of thousands of suns. The giant molecular clouds have dust in them to shield the densest parts of them from the harsh radiation of nearby stars so that molecules can form in them. Therefore, they are very dark and very cold with a temperature of only about 10 K. In addition to the most common molecule, molecular hydrogen, over 80 other molecules have been discovered in the clouds from simple ones like carbon monoxide to complex organic molecules such as methanol and acetone. Radio telescopes are used to observe these very dark, cold clouds. The clouds are dense relative to the rest of the gas between the stars but are still much less dense than the atmosphere of a planet. Typical cloud densities are 100 to 1000 molecules per cubic centimeter while each cubic centimeter of the air you breath has about 2.5 × 1019 molecules---a molecular cloud is tens to hundreds of times "emptier" than the best vacuum chambers we have on Earth! In the parts of a giant molecular cloud where very hot stars (O and B-type) have formed, the hydrogen gas surrounding them can be made to glow in the visible band to make what is called a H II region. The Orion Nebula is an example of this. It is the fuzzy patch you can see in the sword part of the Orion constellation. It is a bubble about 26 light years across that has burst out of one side of the Orion Complex. The nebula is lit up by the fluorescence of the hydrogen gas around a O-type star in the Trapezium cluster of four stars at the heart of the nebula. The O-type star is so hot that it produces a large amount of ultraviolet light. The ultraviolet light ionizes the surrounding hydrogen gas. When the electrons recombine with the hydrogen nuclei, they produce visible light. Several still-forming stars are seen close to the Trapezium stars. They appear as oblong blobs in the figure below with their long axis pointed toward the hot Trapezium stars. If you select the image, an expanded view of the Trapezium cluster will appear in another window. Both images are from the Hubble Space Telescope (courtesy of Space Telescope Science Institute). H II regions mark sites of star formation because they are formed by hot, young stars. Recall from the table at the beginning of the chapter that O-type stars live just a few million years, a very short time for a star! They do not live long enough to move out from where they were formed. Behind the visible part of the Orion Nebula is a much denser region of gas and dust that is cool enough for molecules to form. Several hundred stars are now forming inside the Orion Nebula. 
Fragments of giant molecular clouds with tens to hundreds of solar masses of material a piece will start collapsing for some reason all at about the same time. Possible trigger mechanisms could be a shock wave from the explosion of a nearby massive star at its death or from the passage of the cloud through regions of more intense gravity as found in the spiral arms of spiral galaxies. These shock waves compress the gas clouds enough for them to gravitationally collapse. Gas clouds may start to collapse without any outside force if they are cool enough and massive enough to spontaneously collapse. Whatever the reason, the result is the same: gas clumps compress to become protostars. A protostar will reach a temperature of 2000 to 3000 K, hot enough to glow a dull red with most of its energy in the infrared. The cocoon of gas and dust surrounding them blocks the visible light. The surrounding dust warms up enough to produce copious amounts of infrared and the cooler dust further out glows with microwave energy. This longer wavelength electromagnetic radiation can pass through the dust. The infrared telescopes are able to observe the protostars themselves and their cocoons in dust clouds in our galaxy while the microwave telescopes probe the surrounding regions. The power of infrared detectors is illustrated in the images below. The part of the nebula above and to the right of the Trapezium stars is actually forming many stars. They can only be seen in the infrared image on the right side of the figure. If you select the figure, an expanded view will appear in another window. Both images are from the Hubble Space Telescope (courtesy of Space Telescope Science Institute). The low-mass protostars (those up to about 5 solar masses) are initially much more luminous than the main sequence star they will become because of their large surface area. As these low-mass protostars collapse, they decrease in luminosity while staying at roughly a constant surface temperature. A star remains in the protostar stage for only a short time, so it is hard to catch many stars in that stage of their life. More massive protostars collapse quicker than less massive ones. Fusion starts in the core and the outward pressure from those reactions stops the core from collapsing any further. But material from the surrounding cloud continues to fall onto the protostar. Most of the energy produced by the protostar is from the gravitational collapse of the cloud material. Young stars are social---fragmentation of the giant molecular cloud produces protostars that form at about the same time. Stars are observed to be born in clusters. Other corroborating evidence for this is that there are no isolated young stars. This observation is important because a valuable test of the stellar evolution models is the comparison of the models with star clusters. That analysis is based on the assumption that the stars in the clusters used to validate the models all formed at about the same time. The Hubble Space Telescope has directly observed protostars in the Orion Nebula and the Eagle Nebula (in the Serpens constellation). The protostars it has observed have been prematurely exposed. The intense radiation from nearby hot O or B-type stars has evaporated the dust and driven away the gas around the smaller still-forming stars. In more than one case in the Orion Nebula, all of the gas has been blown away to leave just the dark dust disk with the protostar in the center. 
One example of a totally exposed dust disk seen almost face-on is shown in the figure above. It is the black spot to the right of the prominent cocoon nebula around the protostar at the center. The teardrop-shaped cocoon nebula around the center protostar is oriented toward the Trapezium stars to the right of the figure above. The evaporation of the dark, dense fingers of dust and gas in the Eagle Nebula was captured in the famous ``gas pillars'' picture on the right side of the figure below. Selecting the figure will bring up an expanded view of the Hubble Space Telescope image in another window (courtesy of Space Telescope Science Institute). Note that the tiniest fingers you see sticking out of the sides of the pillars are larger than our entire solar system! AAO image --- HST image A nice interactive showing how the iconic "Pillars of Creation" image from HST was created from putting together various filter images is the The Pillars of Creation interactive from NOVA's Origins series that was broadcast on PBS (selecting the link will bring it up in another window). Another beautiful example from the Hubble Space Telescope is the huge panoramic image of the Carina Nebula released by the Space Telescope Science Institute in mid-2007. This nebula has at least a dozen stars that are 50 to 100 times the mass of the Sun and plenty of Bok globules, pillars, jets from forming stars. For this one you need to sample various parts of the image available from the link. Go back to previous section -- Go to next section last updated: June 8, 2010
http://www.astronomynotes.com/evolutn/s3.htm
13
16
The Chemistry of Biology Proteins are organic compounds that contain the element nitrogen as well as carbon, hydrogen, and oxygen. Proteins are the most diverse group of biologically important substances and are often considered to be the central compound necessary for life. In fact, the translation from the Greek root word means “first place.” Skin and muscles are composed of proteins; antibodies and enzymes are proteins; some hormones are proteins; and some proteins are involved with digestion, respiration, reproduction, and even normal vision, just to mention a few. There are obviously many types of proteins, but they are all made from amino acids bonded together by the dehydration synthesis. By continually adding amino acids, called peptides, two amino acids join together to form dipeptides; as more peptides join together, they form polypeptides. Proteins vary in length and complexity based on the number and type of amino acids that compose the chain. There are about 20 different amino acids, each with a different chemical structure and characteristics; for instance, some are polar, others are nonpolar. The final protein structure is dependent upon the amino acids that compose it. Protein function is directly related to the structure of that protein. A protein's specific shape determines its function. If the three-dimensional structure of the protein is altered because of a change in the structure of the amino acids, the protein becomes denatured and does not perform its function as expected. Humans must obtain nine essential amino acids through their food because our bodies are not capable of manufacturing them. A missing amino acid restricts the protein synthesis and may lead to a protein deficiency, which is a serious type of malnutrition. Remedy: Eat lots of corn, grains, beans, and legumes as part of your normal, balanced diet. The three-dimensional geometry of a protein molecule is so important to its function that four levels of structure are used to describe a protein. The first level, or primary structure, is the linear sequence of amino acids that creates the peptide chain. In the secondary structure, hydrogen bonding between different amino acids creates a three-dimensional geometry like an alpha helix or pleated sheet. An alpha helix is simply a spiral or coiled molecule, whereas a pleated sheet looks like a ribbon with regular peaks and valleys as part of the fabric. The tertiary structure describes the overall shape of the protein. Most tertiary structures are either globular or fibrous. Generally, nonstructural proteins such as enzymes are globular, which means they look spherical. The enzyme amylase is a good example of a globular protein. Structural proteins are typically long and thin, and hence the name, fibrous. Quaternary structures describe the protein's appearance when a protein is composed of two or more polypeptide chains. Often the polypeptide chains will hydrogen bond with each other in unique patterns to create the desired protein configuration. Most enzymes are proteins and therefore their function is specific to their structure. Enzymes function as a catalyst to increase the rate of virtually all the chemical reactions that take place in a living system. The enzymes, like all catalysts, are not consumed but are constantly reused to catalyze the same specific reaction. Enzymes depend on the correct structural alignment and orientation at the active site of the protein and the appropriate site of the reactants, or substrate, before the reaction can proceed. 
This geometric interaction between the enzyme and the substrate is referred to as the “lock-and-key model” because the enzyme's action parallels the action of a lock into which is fitted the key (substrate). If the key and lock do not match, the action does not work. It is the same with enzymes and substrates. The active site for the enzyme and the appropriately matched site of the substrate must physically join before the reaction can occur. That is why the structure of the enzyme is so important. The enzyme binds with the appropriate substrate only in the correct alignment and orientation to connect the molecules. The resulting enzyme-substrate complex enables the reaction to occur. Finally, the products are formed and the enzyme is released to catalyze the same reaction for another substrate of the same type of molecule. Enzymes may fail to function if they are denatured. Remember the model simplifies your understanding of the process; in reality they are three-dimensional molecules. Hormones are chemical messengers produced in one part of the body to function in a different part of the body. Although fat-soluble hormones are made from steroids, water-soluble hormones such as the growth hormone are made from amino acids. Hormones function similarly to enzymes in that both require a specific receptor and perform a specific function. After a hormone is created and secreted by a cell, it travels—usually via the bloodstream—to its target cell. The target cell is the point of action that the hormone recognizes, binds to, and thereby delivers the chemical message. The hormone identifies the target cell by its receptor protein and employs the same lock-and-key process. Excerpted from The Complete Idiot's Guide to Biology © 2004 by Glen E. Moulton, Ed.D.. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
http://life.familyeducation.com/cig/biology/proteins.html
13
15
elementary algebraArticle Free Pass elementary algebra, branch of mathematics that deals with the general properties of numbers and the relations between them. Algebra is fundamental not only to all further mathematics and statistics but to the natural sciences, computer science, economics, and business. Along with writing, it is a cornerstone of modern scientific and technological civilization. Earlier civilizations—Babylonian, Greek, Indian, Chinese, and Islamic—all contributed in important ways to the development of elementary algebra. It was left for Renaissance Europe, though, to develop an efficient system for representing all real numbers and a symbolism for representing unknowns, relations between them, and operations. Elementary algebra is concerned with the following topics: - Real and complex numbers, constants, and variables—collectively known as algebraic quantities. - Rules of operation for such quantities. - Geometric representations of such quantities. - Formation of expressions involving algebraic quantities. - Rules for manipulating such expressions. - Formation of sentences, also called equations, involving algebraic expressions. - Solution of equations and systems of equations. The principal distinguishing characteristic of algebra is the use of simple symbols to represent numerical quantities and mathematical operations. Following a system that originated with the 17th-century French thinker René Descartes, letters near the beginning of the alphabet (a, b, c,…) typically represent known, but arbitrary, numbers in a problem, while letters near the end of the alphabet, especially x, y, and z, represent unknown quantities, or variables. The + and − signs indicate addition and subtraction of these quantities, but multiplication is simply indicated by adjacent letters. Thus, ax represents the product of a by x. This simple expression can be interpreted, for example, as the interest earned in one year by a sum of a dollars invested at an annual rate of x. It can also be interpreted as the distance traveled in a hours by a car moving at x miles per hour. Such flexibility of representation is what gives algebra its great utility. Another feature that has greatly increased the range of algebraic applications is the geometric representation of algebraic quantities. For instance, to represent the real numbers, a straight line is imagined that is infinite in both directions. An arbitrary point O can be chosen as the origin, representing the number 0, and another arbitrary point U chosen to the right of O. The segment OU (or the point U) then represents the unit length, or the number 1. The rest of the positive numbers correspond to multiples of this unit length—so that 2, for example, is represented by a segment OV, twice as long as OU and extended in the same direction. Similarly, the negative real numbers extend to the left of O. A straight line whose points are thus identified with the real numbers is called a number line. Many earlier mathematicians realized there was a relationship between all points on a straight line and all real numbers, but it was the German mathematician Richard Dedekind who made this explicit as a postulate in his Continuity and Irrational Numbers (1872). In the Cartesian coordinate system (named for Descartes) of analytic geometry, one horizontal number line (usually called the x-axis) and one vertical number line (the y-axis) intersect at right angles at their common origin to provide coordinates for each point in the plane. 
For example, the point on a vertical line through some particular x on the x-axis and on the horizontal line through some y on the y-axis is represented by the pair of real numbers (x, y). A similar geometric representation (see the figure) exists for the complex numbers, where the horizontal axis corresponds to the real numbers and the vertical axis corresponds to the imaginary numbers (where the imaginary unit i is equal to the square root of −1). The algebraic form of complex numbers is x + iy, where x represents the real part and iy the imaginary part. This pairing of space and number gives a means of pairing algebraic expressions, or functions, in a single variable with geometric objects in the plane, such as straight lines and circles. The result of this pairing may be thought of as the graph (see the figure) of the expression for different values of the variable.
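The pairing of algebraic quantities with points in the plane is easy to demonstrate with Python's built-in complex type; the particular values below are arbitrary and chosen only for illustration.

```python
# A complex number x + iy corresponds to the point (x, y) in the plane.
z = complex(3, 4)          # the algebraic form 3 + 4i

print(z.real, z.imag)      # coordinates of the point: 3.0 4.0
print(abs(z))              # distance from the origin: 5.0

# The imaginary unit i is the square root of -1, so i * i gives -1.
i = complex(0, 1)
print(i * i)               # (-1+0j)
```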
http://www.britannica.com/EBchecked/topic/184192/elementary-algebra
13
38
Fourth Grade Math Vocabulary VocabularySpellingCity has created these fourth grade math word lists as tools teachers and parents can use to supplement the fourth grade math curriculum with interactive, educational math vocabulary games. Simply choose a list from a particular math area, and then select one of the 25 learning activities available. The material for these lists was specifically designed to be used in a fourth grade math class. The math vocabulary lists are based on the Common Core Fourth Grade Math Standards. VocabularySpellingCity ensures that these academic vocabulary lists are level-appropriate for fourth graders. Teachers can import these lists into their accounts, and edit or add to them to suit their purposes. Common Core State Standards Overview for Fourth Grade Math Click for more information on Math Vocabulary and the Common Core Standards in general. For information pertaining to 4th grade in particular, please refer to the chart above. To preview the definitions, select a list and use the Flash Cards. For help on using the site, watch one of our short videos on how to use the site. Elementary students can not only achieve enrichment in fourth grade math terms through interactive exercises, but they can also acquire necessary understanding of pivotal math concepts while playing educational online math vocabulary games. The themed lists are organized so that students are given challenging 4th grade math vocabulary in such a way that fourth graders can quickly excel in the comprehension of important math concepts. Animated interactive games greatly enhance students' learning of 4th grade math words. Students not only learn elementary math words while having a great time, but also gain confidence in a subject that many consider daunting. Teachers and parents can count on the effective and accurate grouping of these math vocabulary lists and have come to rely on the use of 4th grade math definitions in interactive games to activate students' math comprehension. More than a traditional 4th grade math dictionary, this assortment of targeted lists, combined with exciting and challenging elementary math vocabulary drill and practice games, makes learning math words fun for fourth graders everywhere! 
Fourth Grade Math Vocabulary Words at a Glance: Operations & Algebraic Thinking: variable, inequality, equivalent, differences, factor, equation, product, comparison, expression, similarity, inequality, relationship, similarity, comparison, differences, factor, equation, variable, extraneous, equivalent Base Ten Operations Number & Operations in Base Ten: comparison, equation, relationship, equivalent, inequality, factor, rounding, regroup, variable, similarity, size, inverse operation, gram, calculate, compare, composite number, million, decimal number, simplify, relative, addend, product, symmetry, centimeter, fahrenheit, celsius, differences, polyhedron, extraneous, estimation Number & Operations - Fractions: proper fraction, percent, consecutive, common fraction, ordinal number, factor, multiples, improper fraction, mixed number, fraction, compare, dividend, denominator, remainder, divisor, quotient, more than, numerator, less than, equivalent Measurement & Data Units & Coordinates: y-axis, line graph, customary units, non-standard units, x-axis, coordinates, coordinate, system, data, unit conversion, unit Length: meter, length, width, kilometer, measurement, inch, yard, centimeter, metric, foot Problem Solving: probability, predict, array, survey, chance, likely, unlikely, certainty, data collection, tendency Quantity/Size: volume, liter, ounce, pint, kilogram, weight, mass, quart, gallon, balance Time/Temperature: Celsius, Fahrenheit, measurement, minute, second, event, degree, time , temperature, hour Interpretation: mean, median, mode, range, likelihood, ordered pairs, statistics, interpret, graph, data Presentation: tree diagram, pie chart, diagram, data, circle graph, Venn diagram, tally, bar graph, frequency table, measure Angles: congruent, acute angle, obtuse angle, rotate, straight angle, degrees, angle, right angle, triangle, perpendicular Classification: similarity, translation, congruent, reflection, rectangular, symmetry, closed figure, open figure, rotation, transformation Lines: intersection, perpendicular, length, line segment, circumference, point, distance, grid, side, line of symmetry Measurement: square unit, area, capacity, degrees, distance, grid, radii, height, diameter, length Polygons: polygon, pentagon, quadrilateral, hexagon, rhombus, pentagon, parallelogram, plane figure, octagon, polyhedron Prisms: prism, base, face, solid, sphere, horizontal, parallel lines, cube, cylinder, cone For a complete online Math curriculum in Kindergarten Math, First Grade Math, Second Grade Math, Third Grade Math, Fourth Grade Math, Fifth Grade Math, Sixth Grade Math, Seventh Grade Math, or Eighth Grade Math visit Time4Learning.com. Here are some fun Math Games from LearningGamesForKids by grade level: Kindergarten Math Games, First Grade Math Games, Second Grade Math Games, Third Grade Math Games, Fourth Grade Math Games, Fifth Grade Math Games, Addition Math Games, Subtraction Math Games, Multiplication Math Games, or Division Math Games.
http://www.spellingcity.com/fourth-grade-math-vocabulary.html
13
17
Explaining Variables and Terms Study Guide Introduction to Explaining Variables and Terms The human mind has never invented a labor-saving machine equal to algebra. In this section, you'll learn the language of algebra, how to define variables and terms, and get a short review of integers. Math topics always seem to have scary sounding names: trigonometry, combinatorics, calculus, Euclidean plane geometry—and algebra. What is algebra? Algebra is the representation of quantities and relationships using symbols. More simply, algebra uses letters to hold the place of numbers. That does not sound so bad. Why do we use these letters? Why not just use numbers? Because in some situations, we do not always have all the numbers we need. Let's say you have 2 apples and you buy 3 more. You now have 5 apples, and we can show that addition by writing the sentence 2 + 3 = 5. All of the values in the sentence are numbers, so it is easy to see how you went from 2 apples to 5 apples. Now, let's say you have a beaker filled with 134 milliliters of water. After pouring more water into the beaker, you look closely and see that you now have 212 milliliters of water. How much water was added to the beaker? Before you perform any mathematical operation, that quantity of water is unknown. If we do not know the value of a quantity in a problem, that value is an unknown. We can write a sentence to show what happened to the volume of water in the beaker even though we don't know how much water was added. A symbol can hold the place of the quantity of water that was added. Although we could use any symbol to represent this quantity, we usually use letters, and the most commonly used letter in algebra is x. There is no clear reason why x came to be used most often to represent unknowns. René Descartes, a French mathematician, was one of the first to use x, y, and z to represent unknown quantities—back in 1637! While many have tried to determine why he used these letters, no one knows for certain. The beaker had 134 milliliters of water in it when x milliliters were added to it. Read that sentence again. We describe the unknown quantity in the same way we would a real number. When a symbol, such as x, takes the place of a number, it is called a variable. We can perform the same operations on variables that we perform on real numbers. After x milliliters are added to the beaker, the beaker contains 212 milliliters. We can write this addition sentence as 134 + x = 212. Later in this book, we will learn how to solve for the value of x and other variables. In the sentence 134 + x = 212, 134 and 212 are numbers and x is a variable. Because the variable x holds the place of a number, we can perform the same operations on it that we would perform on a number. We can add 4 to the variable x by writing x + 4. We can subtract 4 from x by writing x – 4. Multiplication we show a little bit differently. Because the letter x looks like the multiplication symbol (×), we show multiplication by placing the number that multiplies the variable right next to the variable, with no space. To show 4 multiplied by x, we write 4x. There is no operation symbol between 4 and x, and that tells us to multiply 4 and x. Multiplication is sometimes shown by two adjacent sets of parentheses. Another way to show 4 multiplied by x is (4)(x). Division is most often written as a fraction: x divided by 4 is written as the fraction x over 4. This could also be written as x/4 or x ÷ 4, but these notations are less common.
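Although the text defers solving for x to a later lesson, the beaker equation can be checked with a couple of lines of code. A minimal sketch in plain Python; the helper name is made up for this example.

```python
def solve_added_amount(start, end):
    """For an equation of the form start + x = end, return x."""
    return end - start

x = solve_added_amount(134, 212)
print(x)               # 78 milliliters of water were added
print(134 + x == 212)  # True: the solution satisfies the original sentence
```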
http://www.education.com/study-help/article/explaining-variables-terms/
13
19
The extension of a predicate (a truth-valued function) is the set of tuples of values that, used as arguments, satisfy the predicate. Such a set of tuples is a relation: a property that assigns truth values to k-tuples of individuals, typically describing a possible connection between the components of a k-tuple.

For example, the statement "d2 is the weekday following d1" can be seen as a truth function associating to each tuple (d2, d1) the value true or false. The extension of this truth function is, by convention, the set of all such tuples associated with the value true. By examining this extension we can conclude that "Tuesday is the weekday following Saturday" (for example) is false. Using set-builder notation, the extension of the predicate can be written as {(d2, d1) : d2 is the weekday following d1}.

Relationship with characteristic function

If the values 0 and 1 in the range of a characteristic function are identified with the values false and true, respectively (making the characteristic function a predicate), then for all relations R the following two statements are equivalent:
- f is the characteristic function of R;
- R is the extension of f.
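As a concrete illustration of these definitions, the short Python sketch below builds the extension of the "weekday following" predicate as a set of tuples and derives a characteristic function from it. The names (DAYS, follows, chi) are illustrative and not from the original article.

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday",
        "Friday", "Saturday", "Sunday"]

def follows(d2: str, d1: str) -> bool:
    """Predicate: d2 is the weekday following d1."""
    return DAYS.index(d2) == (DAYS.index(d1) + 1) % 7

# The extension of the predicate: the set of argument tuples that satisfy it.
extension = {(d2, d1) for d2 in DAYS for d1 in DAYS if follows(d2, d1)}

# The characteristic function of the relation R = extension (values 0/1 for false/true).
def chi(d2: str, d1: str) -> int:
    return 1 if (d2, d1) in extension else 0

print(("Tuesday", "Monday") in extension)    # True
print(("Tuesday", "Saturday") in extension)  # False
print(chi("Sunday", "Saturday"))             # 1
```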
http://www.absoluteastronomy.com/topics/Extension_(predicate_logic)
13
20
In 1957, the Soviet Union launched the world's first satellite, known as Sputnik. This changed the course of world history and led the United States, their chief rival in the Space Race, to mount a massive effort of its own to put manned craft in orbit and land a man on the moon. Since then, the presence of satellites in our atmosphere has become commonplace, which has muted the sense of awe and wonder involved. However, for many, especially students studying in engineering and aerospace programs, the question of How Satellites Work is still one of vital importance.

Satellites perform a wide array of functions. Some are observational, such as the Hubble Space Telescope, providing scientists with images of distant stars, nebulas, galaxies, and other deep space phenomena. Others are dedicated to scientific research, particularly the behavior of organisms in low-gravity environments. Then there are communications satellites, which relay telecommunications signals back and forth across the globe. GPS satellites offer navigational and tracking aids to people looking to transport goods or navigate their way across land and oceans. And military satellites are used to observe and monitor enemy installations and formations on the ground while also helping the air force and navy guide their ordnance to enemy targets.

Satellites are deployed by attaching them to rockets which then ferry them into orbit around the planet. Once deployed, they are typically powered by rechargeable batteries which are recharged through solar panels. Other satellites have internal fuel cells that convert chemical energy to electrical energy, while a few rely on nuclear power. Small thrusters provide attitude, altitude, and propulsion control to modify and stabilize the satellite's position in space.

When it comes to classifying the orbit of a satellite, scientists use a varying list to describe the particular nature of their orbits. For example, centric classifications refer to the object which the satellite orbits (i.e. planet Earth, the Moon, etc.). Altitude classifications determine how far the satellite is from Earth, whether it is in low, medium or high orbit. Inclination refers to whether the satellite orbits in the equatorial plane, over the polar regions, or in a Sun-synchronous polar orbit that passes the equator at the same local time on every pass so as to stay in the light. Eccentricity classifications describe whether the orbit is circular or elliptical, while synchronous classifications describe whether or not the satellite's orbital period matches the rotational period of the object it orbits (i.e. a standard day).

Depending on the nature of their purpose, satellites also carry a wide range of components inside their housing. This can include radio equipment, storage containers, camera equipment, and even weaponry. In addition, satellites typically have an on-board computer to send and receive information from their controllers on the ground, as well as compute their positions and calculate course corrections.

If you'd like more info on satellites, check out these articles: National Geographic's article about Orbital Objects, and Satellites and Space Weather. We've also recorded an episode of Astronomy Cast about the space shuttle. Listen here, Episode 127: The US Space Shuttle.
http://www.universetoday.com/93078/how-satellites-work/
13
19
Schools & Districts All of Shmoop Cite This Page Best of the Web Table of Contents Purchase the Sequences Pass and get full access to this Calculus chapter. No limits found here. Defining Sequences and Evaluating Terms Let's imagine, instead of numbers, we have a sequence of animals. The sequence begins with three animals: green, tap dancing elephant, purple, singing cockatoo, and orange, quarterback tiger. We co... Evaluating terms of a sequence, given the formula, isn't so bad. Going the other way around is a little trickier. It can be a bit like juggling buzzing chainsaws while riding a unicycle and chewing... Sequences Can Start at Remember the Count from Sesame Street? Now a little bit older and with a few gray hairs, he needs a good pick-me-up in the morning, "One. One large coffee. Ha.Ha.Ha." Why would we begin counting se... An arithmetic sequence is a sequence where the step from one term to the next is constant. That is, you always add the same thing to get from one term to the next. An arithmetic sequence is like go... We already know that an arithmetic sequence is one where the difference between successive terms is constant. The distance each term is the same. A geometric sequence is a lot like an arithmetic se... For those who like pictures better than formulas, we can visualize sequences on number lines and on graphs. For those who like Kit Kats, we can visualize a giant Kit Kat bar. Either way, creating a... Other Useful Sequence Words Imagine a Kung Fu black belt took a function and chopped through it, leaving only discrete values. Those discrete values would form a sequence. Because the sequence is just a coarse-chopped list of... Sequences, especially arithmetic and geometric ones, are good for word problems. Sequence story problems come in two main flavors. If these flavors were ice cream, they'd be vanilla and rocky road.... © 2013 Shmoop University, Inc. All rights reserved. We love your brain and respect your privacy. | © 2013 Shmoop University, Inc. All rights reserved. We love your brain and respect your privacy.
http://www.shmoop.com/sequences/examples.html
13
11
When black holes collide

Scientists have created a computer simulation of two black holes colliding which will, ultimately, aid the search for elusive gravitational waves. For the first time, scientists from the Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Germany, have simulated two black holes merging into each other in a grazing collision.

Fully three-dimensional simulations are essential for the detection of gravitational waves. It's expected that, with new instruments, these waves, which are literally ripples in the fabric of space, will be detectable at the beginning of the next century. Numerical simulations can provide observers with reliable ways of recognizing the waves produced by black holes. "Colliding black holes are one of the hottest candidates for gravitational waves," says Prof. Seidel.

Construction of several gravitational wave detectors has started around the world. The scientists hope to measure the short transit of gravitational perturbations from in-spiral black hole collisions, but they expect only one event per year at distances around 600 million light years from Earth. "Astronomers now tell us that they know the locations of many thousands of black holes, but we can't do any experiments with them on Earth. The only way we will learn the details is to build numerical substitutes for them inside our computers and watch what they do," explains Prof. Bernard Schutz, a Director of the Albert Einstein Institute.

This June, an international team used a supercomputer at the University of Illinois to provide the first detailed simulations of grazing collisions of black holes of unequal mass and spin. Werner Benger, at the Konrad-Zuse-Zentrum, was able to create stunning visualizations of the collision process. During the last moments, the black holes spiral inward, emitting weak gravitational signals periodically. The horizon of each object, from which light itself can't escape, is stretched. In a very short time of some milliseconds or less, the two horizons coalesce like water drops. The amplitude and frequency of the gravitational waves increase strongly. The two united black holes form a new common horizon, which oscillates as it 'rings down' and settles to a quiet final black hole.
http://www.abc.net.au/science/articles/1999/09/07/49967.htm?site=science&topic=latest
13
13
Earth From Space: Landsat Satellites' 40-Year Legacy Explained (Infographic)

Since 1972, the Landsat series of satellites have provided high-resolution images of the Earth's surface that are used by businesses, scientists, governments and the military. The Landsat image database provides the longest continuous record of the Earth's continents as seen from space. The latest in the series is the Landsat Data Continuity Mission spacecraft, to be renamed Landsat 8 once it becomes operational.

At launch, the Landsat 8 satellite weighs 6,133 pounds (2,782 kilograms). Sensors on the spacecraft capture light in various wavelengths including visible, near-infrared, shortwave-infrared and thermal infrared. Images taken by Landsat 8 have a resolution of 49 to 328 feet (15 to 100 meters). The spacecraft's computer can store 3.14 terabits on a solid state recorder. Data beams down from the satellite at a rate of up to 384 megabits per second. [Photos: The Landsat 8 Earth-Observing Spacecraft]

Landsat 8 carries 870 pounds (395 kg) of hydrazine fuel, enough to last 10 years. The spacecraft orbits at an altitude of 438 miles (705 kilometers).

The Landsat program became operational on July 23, 1972, with the launch of the Earth Resources Technology Satellite (later renamed Landsat 1). ERTS was a modified weather satellite, and was the first satellite launched specifically to study the Earth's landmasses. On Oct. 6, 1993, Landsat 6 was launched and soon tumbled out of control, failing to reach the proper orbit.

The Landsat spacecraft is placed into a near-polar orbit, traveling north to south as it crosses the equator. At a speed of 4.7 miles per second (7.5 km/sec), each orbit takes nearly 99 minutes. Landsat completes just over 14 orbits per day. Landsat's instruments cover the Earth's surface from 81 degrees north to 81 degrees south every 16 days.
http://www.space.com/19716-nasa-landsat-satellites-earth-space-infographic.html
13
20
Surface area is a two-dimensional property of a three-dimensional figure. Cylinders are similar to prisms in that they have congruent, parallel bases, except cylinders have circles as their bases. To conceptualize the surface area of cylinders, we can imagine that the lateral area of a cylinder can be "unrolled" into a rectangle with one side equaling the circumference of the circle and the other side equal to the height of the cylinder (unless it is oblique).

Surface area is the amount of area on the outside of a solid, so you can almost think of it this way: if you had a bucket of paint, how much paint would you need to cover the outside of the cylinder? Well, to do that, let's separate the different pieces of the cylinder. We can take the top, which is a circle, and I'm going to draw that circle right there; it has some radius r. So we have the top, and the bottom is also going to be a circle, so I can draw the bottom right there. These two are going to be congruent. Now, this middle piece in between the two congruent circles, what is that going to look like? Well, if you took a pair of scissors and made a cut right here and unraveled it, it would be a rectangle.

So the net for our cylinder is going to look something like this, where we're going to have a circle on top which you could fold over, you're going to wrap this part around, and then you're going to fold the other base. So in order to calculate the surface area you're going to need to add up the area of this circle, the area of this rectangle and the area of that other circle.

Well, let's start with the easy part, the area of the two circles. So I'm going to say the surface area is equal to 2 times pi r squared. That piece right there will calculate the area of the top circle and the area of the bottom circle. Now, you want to be careful on those homework problems where they leave out the top. What you're going to do then is omit the 2 and just say pi r squared.

And now what about this middle piece? Well, the middle piece has a height of h, so I'm going to say that this right here is h. Now, what is this other distance? Well, that other distance is the distance that you would walk around that circle, which is also known as the circumference. So this dimension right here is your circumference, which is equal to 2 times pi times r. So to find the area of this middle piece you're going to need to multiply 2 times pi times r times h, so the surface area of the whole cylinder is going to be the area of the two circles plus the area of that middle piece, the lateral area. So when you're calculating surface area you're going to need to know a couple of things: the radius of your circle and the height of your cylinder.
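The formula built up in this lesson, total surface area = 2πr² + 2πrh, is easy to check numerically. Below is a minimal Python sketch; the function name and the sample values are mine, not part of the lesson.

```python
import math

def cylinder_surface_area(radius: float, height: float, closed: bool = True) -> float:
    """Surface area of a right circular cylinder.

    Lateral area = circumference * height = 2*pi*r*h.
    Each circular base adds pi*r**2 (two bases if closed, one if the top is left off).
    """
    lateral = 2 * math.pi * radius * height
    bases = (2 if closed else 1) * math.pi * radius ** 2
    return lateral + bases

print(cylinder_surface_area(3, 5))                # 48*pi  ≈ 150.80
print(cylinder_surface_area(3, 5, closed=False))  # open-topped can: 39*pi ≈ 122.52
```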
http://www.brightstorm.com/math/geometry/area/surface-area-of-cylinders-problem-2/
13
11
Lesson Plan 2: Hot Dog Sales - Solving Linear Equations and Inequalities

Teachers will need the following:
Students will need the following:
- Graphing calculator
- Graph paper
- Large sheets of poster paper

1. Explain that the class has been asked to help the football coach determine the practicality of selling hot dogs to raise funds to buy new jerseys.
2. Explain that the coach needs $450 for start-up costs and plans to sell the hot dogs for $0.50 each. The start-up costs include the purchase of 2,500 hot dogs, buns and condiments, and wages for the vendors.
3. Ask the class to generate a list of questions that the coach might ask himself to determine how well his plan will work. Record the students' questions. Possible questions include: How many hot dogs will he have to sell before he makes a profit? How many hot dogs can reasonably be sold at the football game? How much does the coach need to make to pay for the jerseys? Is $0.50 a reasonable price to charge per hot dog?
4. Discuss the student questions, and save them for use at the end of the lesson.

1. Tell students that for now the class will focus on answering the question of how many hot dogs the coach needs to sell in order to make a profit.
2. Solicit student thoughts about the number of hot dogs he needs to sell to break even.
3. Ask the class to devise a table that would produce information about the problem. Students should state that the table should show the number of hot dogs sold, the revenue made selling them, and the profit earned by selling them.
4. Have students create the table, starting with 100 hot dogs sold and increasing by increments of 100. The table should continue until students find the break-even point. The table should look like the one shown below:

| Hot dogs sold | Revenue | Profit |
| 100 | $50 | -$400 |
| 200 | $100 | -$350 |
| 300 | $150 | -$300 |
| 400 | $200 | -$250 |
| 500 | $250 | -$200 |
| 600 | $300 | -$150 |
| 700 | $350 | -$100 |
| 800 | $400 | -$50 |
| 900 | $450 | $0 |

5. Ask students to write formulas that can be used to find the values for both the revenue and the profit.
6. Discuss students' formulas for each of the columns. Examples may include:
- Revenue = 0.50 x number of hot dogs sold
- Profit = revenue - start-up costs
Elicit that the formulas can also be written as: R = 0.50H and P = 0.50H - 450.
7. Inform students that to use a graphing calculator, the formulas must be written using y and x for variables. This means the profit formula would be written in the form y = 0.50x - 450. Discuss the meaning of the 0.50 and the 450 in the equation.
8. Have the whole class create an electronic table using the calculator. To do this, type the equation into the "y =" area of the calculator. Then, set the table on the calculator to match the table the students made on their papers.
9. Students should note that the table they made using the calculator matches the table they made on paper. They should also notice that the calculator's table continues indefinitely. Tell them they just solved this problem using a numeric method that looks at tables of values. Note that the break-even point occurs when 900 hot dogs are sold.
10. Students will now find the break-even point using a graph. Discuss possible values that they should select for the graph's scale. Possibilities include: xmin = 0, xmax = 2500, xscl = 100, ymin = -500, ymax = 800, and yscl = 100.
11. Ask students to determine which point on the graph represents the break-even point. They should recognize that this is the point where the line crosses the x-axis; it is located at (900, 0), and it represents the fact that 900 hot dogs sold will bring a profit of $0.00. Tell them they just solved the problem using a graphical method.
12.
Students will now find the break-even point using algebra. Elicit from the class that if they want to find the break-even point, then they should let P = 0, and then solve the equation for H. For example:
- P = 0.50H - 450
- 0 = 0.50H - 450
- 450 = 0.50H
- 900 = H
13. Point out that the three different methods of solving this problem all produced the same solution. Ask students which method they preferred and why.
14. Now, hand out the activity sheet that lists the coach's questions for the class. The students should work together in groups, answer the questions and prepare a presentation for one of the questions. The students will need to answer all of the questions by solving the problems using tables, graphs, and algebra. Two of the questions involve solving an inequality, so students can see that these three methods of solving equations also work for solving inequalities.
15. Have students present their findings to the class.

For homework, ask students to answer the questions the class generated at the beginning of the lesson. They should also find the break-even point for the coach if he lowered his start-up costs by $100 and charged $1.00 for each hot dog.
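The numeric and algebraic methods in this lesson can also be mirrored in a few lines of code, which may be useful when a graphing calculator is not available. This is a small illustrative Python sketch using the lesson's numbers ($450 start-up cost, $0.50 per hot dog); the variable names are mine.

```python
START_UP_COST = 450.00   # dollars
PRICE_PER_DOG = 0.50     # dollars per hot dog sold

def profit(hot_dogs_sold: int) -> float:
    """P = 0.50*H - 450, as in the lesson."""
    return PRICE_PER_DOG * hot_dogs_sold - START_UP_COST

# Numeric method: build the same table the students make on paper.
for h in range(100, 1001, 100):
    print(f"{h:5d} sold   revenue ${PRICE_PER_DOG * h:7.2f}   profit ${profit(h):8.2f}")

# Algebraic method: set P = 0 and solve 0 = 0.50*H - 450 for H.
break_even = START_UP_COST / PRICE_PER_DOG
print("Break-even point:", break_even, "hot dogs")   # 900.0
```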
http://learner.org/workshops/algebra/workshop2/lessonplan2b.html
13
16
Measures of Central Tendency

Statistical measures of central tendency or central location are numerical values that are indicative of the central point or the greatest frequency concerning a set of data. The most common measures of central location are the mean, median and mode.

The statistical mean of a set of observations is the average of the measurements in a set of data. The population mean and sample mean are defined as follows:

Given the set of data values x1, x2, ..., xN from a finite population of size N, the population mean μ is calculated as
μ = (x1 + x2 + ... + xN) / N.

Given the set of data values x1, x2, ..., xn from a sample of size n, the sample mean is calculated as:
x̄ = (x1 + x2 + ... + xn) / n.

The sample mean is often used as an estimator of the mean of the population from which the sample was taken. In fact, the sample mean is statistically proven to be a most effective estimator for the population mean.

The median of a set of observations is the value that, when the observations are arranged in ascending or descending order, satisfies the following condition:
- If the number of observations is odd, the median is the middle value.
- If the number of observations is even, the median is the average of the two middle values.

The median is the same as the 50th percentile of a set of data. It is denoted by x̃.

The mode of a set of observations is the specific value that occurs with the greatest frequency. There may be more than one mode in a set of observations, if there are several values that all occur with the greatest frequency. A mode may also not exist; this is true if all the observations occur with the same frequency.

Another measure of central location that is occasionally used is the midrange. It is computed as the average of the smallest and largest values in a set of data.

Example of Central Tendency

EX. Given the following set of data:
1.2, 1.5, 2.6, 3.8, 2.4, 1.9, 3.5, 2.5, 2.4, 3.0
It can be sorted in ascending order:
1.2, 1.5, 1.9, 2.4, 2.4, 2.5, 2.6, 3.0, 3.5, 3.8
The mean, median and mode are computed as follows:
mean = (1 / 10) · (1.2 + 1.5 + 2.6 + 3.8 + 2.4 + 1.9 + 3.5 + 2.5 + 2.4 + 3.0) = 2.48
median = (2.4 + 2.5) / 2 = 2.45
The mode is 2.4, since it is the only value that occurs twice.
The midrange is (1.2 + 3.8) / 2 = 2.5.
Note that the mean, median and mode of this set of data are very close to each other. This suggests that the data is very symmetrically distributed.
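The worked example can be reproduced directly with Python's standard statistics module; this short sketch simply checks the numbers above and is not part of the original notes.

```python
import statistics

data = [1.2, 1.5, 2.6, 3.8, 2.4, 1.9, 3.5, 2.5, 2.4, 3.0]

print(statistics.mean(data))      # ≈ 2.48
print(statistics.median(data))    # ≈ 2.45  (average of the two middle values)
print(statistics.mode(data))      # 2.4     (the only value occurring twice)

midrange = (min(data) + max(data)) / 2
print(midrange)                   # 2.5
```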
http://www.course-notes.org/Statistics/Statistical_Measure_of_Data/Parameters_and_Statistics_Measures_of_Central_Tendency
13
11
As well as worksheets some ideas on how to get across the idea of thousands, hundreds, tens and units. Place value and partitioning as well as writing larger numbers in words and figures. Many children find working with larger numbers very difficult, even with the help of a calculator. When counting up or down in tens or hundreds it can get tricky either side of the thousand boundary. These pages will help children gain confidence counting on and back. Finding one more than a number sounds easy, but finding one more than numbers such as 2999 can prove tricky. The more than and less than signs should be familiar, now they can be put to use with negative numbers. There's more on ordering larger numbers as well. Finding the number half way between two other numbers is not always easy. There are number lines to help with this as well as counting on in ones, tens and hundreds. The cards at the end of this module are needed to complete these pages. Print onto card and cut them out. Reading temperature is a great way to practise using negative numbers. Try ordering negative numbers as well. More ordering negative numbers on a number line, including 'tables' in negative numbers! Find the missing numbers in the sequences and then write the rule that the sequence follows. A number square is a great resource for exploring number patterns. Describe patterns and predict the next number in a sequence. By entering 4 + + = you can make your calculator into an 'add 4' machine. Follow the instructions on these pages. Useful when adding the same number lots of times or looking at patterns and sequences. Revision of odd and even numbers, including sequences and rules about adding an odd number and an even number etc. A look at decimal fractions, how to say them, how to order them and how to work out the value of the digits. Each rectangle is one whole one and each part is one tenth. How much is shown by the shading? These pages show how money is one of the best ways to explain decimal fractions. There are also worksheets on conversions of metric measures of length, using decimals. It's time to take a closer look at hundredths. Work includes completing number lines and ordering numbers with two decimal places. More on decimal fractions, including some tricky estimating on number lines and calculator work which requires sharp mental skills. Subjects you do not see a lot of: ratio and proportion. It's a bit like buying coffee: for every nine cups you get one free. More on ratio and proportion; this time we look at shape patterns. Remember to clear the display before beginning to use the calculator, then try this selection of problems. Use your calculator as a check to see if these negative numbers questions are correct. A guide to the expectations in Year 4 on understanding and using numbers.
http://urbrainy.com/maths/year-4-age-8-9/counting-and-number-year-4
13
14
Applications of Integration by M. Bourne 1. Applications of the Indefinite Integral shows how to find displacement (from velocity) and velocity (from acceleration) using the indefinite integral. There are also some electronics applications in this section. In primary school, we learned how to find areas of shapes with straight sides (e.g. area of a triangle or rectangle). But how do you find areas when the sides are curved? We'll find out how in: 4. Volume of Solid of Revolution explains how to use integration to find the volume of an object with curved sides, e.g. wine barrels. 5. Centroid of an Area means the centre of mass. We see how to use integration to find the centroid of an area with curved sides. 6. Moments of Inertia explains how to find the resistance of a rotating body. We use integration when the shape has curved sides. 7. Work by a Variable Force shows how to find the work done on an object when the force is not constant. This section includes Hooke's Law for springs. Before you start this section, it's a good idea to revise: - Graph of the Quadratic Function - Graphs of Exponential and Log Functions - Plane Analytic Geometry - Curve Sketching (This chapter is easier if you can draw curves confidently.) You may also wish to see the Introduction to Calculus. 8. Electric Charges have a force between them that varies depending on the amount of charge and the distance between the charges. We use integration to calculate the work done when charges are separated. 9. Average Value of a curve can be calculated using integration. Head Injury Criterion is an application of average value and used in road safety research. 10. Force by Liquid Pressure varies depending on the shape of the object and its depth. We use integration to find the force. In each case, we solve the problem by considering the simple case first. Usually this means the area or volume has straight sides. Then we extend the straight-sided case to consider curved sides. We need to use integration because we have curved sides and cannot use the simple formulas any more. The chapter begins with 1. Applications of the Indefinite Integral
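As a quick illustration of why integration is needed once the sides are curved, the sketch below compares a straight-sided area with a curved one using numerical integration. This example (a parabola, evaluated with SciPy's quad) is mine and assumes SciPy is installed; it is not part of the chapter.

```python
from scipy.integrate import quad

# Straight sides: area of a right triangle with base 4 and height 8 -> (1/2)*4*8 = 16.
triangle_area = 0.5 * 4 * 8

# Curved side: area under y = x**2 from x = 0 to x = 4.
# No simple base-times-height formula applies, so we integrate: 4**3 / 3 ≈ 21.33.
curved_area, error_estimate = quad(lambda x: x**2, 0, 4)

print(triangle_area)   # 16.0
print(curved_area)     # 21.333333...
```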
http://www.intmath.com/applications-integration/applications-integrals-intro.php
13
11
As convection is dependent on the bulk movement of a fluid, it can only occur in liquids, gases and multiphase mixtures.

Natural convection has attracted a great deal of attention from researchers because of its presence both in nature and engineering applications. In nature, convection cells formed from air rising above sunlight-warmed land or water are a major feature of all weather systems. Convection is also seen in the rising plume of hot air from fire, oceanic currents, and sea-wind formation (where upward convection is also modified by Coriolis forces). In engineering applications, convection is commonly visualized in the formation of microstructures during the cooling of molten metals, in fluid flows around shrouded heat-dissipation fins, and in solar ponds. A very common industrial application of natural convection is free air cooling without the aid of fans: this can happen from small scales (computer chips) to large-scale process equipment.

The onset of natural convection is governed by the Grashof number, Gr = g β ΔT L³ / ν². The parameter β is the volume expansivity (K⁻¹), g is acceleration due to gravity, ΔT is the temperature difference between the hot surface and the bulk fluid (K), L is the characteristic length (this depends on the object) and ν is the kinematic viscosity. For liquids, values of β are tabulated. Additionally, β can be calculated from β = −(1/ρ)(∂ρ/∂T) at constant pressure. For an ideal gas, this number may be simply found: since ρ = P/(RT), we have (∂ρ/∂T) at constant pressure = −P/(RT²). Therefore, for an ideal gas, β is simply β = 1/T.

Thus, the Grashof number can be thought of as the ratio of the upwards buoyancy of the heated fluid to the internal friction slowing it down. In very sticky, viscous fluids, fluid movement is restricted, along with natural convection. In the extreme case of infinite viscosity, the fluid could not move and all heat transfer would be through conductive heat transfer.

A similar equation can be written for natural convection occurring due to a concentration gradient, sometimes termed thermo-solutal convection. In this case, a concentration of hot fluid diffuses into a cold fluid, in much the same way that ink poured into a container of water diffuses to dye the entire space.

The relative magnitudes of the Grashof and Reynolds numbers determine which form of convection dominates: if Gr/Re² ≫ 1, forced convection may be neglected, whereas if Gr/Re² ≪ 1, natural convection may be neglected. If the ratio is approximately one, both forced and natural convection need to be taken into account.

Natural convection is highly dependent on the geometry of the hot surface; various correlations exist in order to determine the heat transfer coefficient. The Rayleigh number (Ra = Gr · Pr, where Pr is the Prandtl number) is frequently used. A general correlation, applicable to a variety of geometries, expresses the Nusselt number Nu in terms of Ra, a Prandtl-number function f4(Pr), and a geometry-dependent constant Nu0. The values of Nu0 and the characteristic length used to calculate Ra are listed below:

| Geometry | Characteristic length | Nu0 |
| Inclined plane | x (distance along plane) | 0.68 |
| Inclined disk | 9D/11 (D = diameter) | 0.56 |
| Vertical cylinder | x (height of cylinder) | 0.68 |
| Cone | 4x/5 (x = distance along sloping surface) | 0.54 |
| Horizontal cylinder | D (diameter of cylinder) | 0.36 |

Forced convection is a mechanism, or type of heat transport, in which fluid motion is generated by an external source (like a pump, fan, suction device, etc.). Forced convection is often encountered by engineers designing or analyzing heat exchangers, pipe flow, and flow over a plate at a different temperature than the stream (the case of a shuttle wing during re-entry, for example). However, in any forced convection situation, some amount of natural convection is always present.
When the natural convection is not negligible, such flows are typically referred to as mixed convection. When analysing potentially mixed convection, a parameter called the Archimedes number (Ar) parameterizes the relative strength of free and forced convection. The Archimedes number is the ratio of the Grashof number and the square of the Reynolds number, Ar = Gr / Re², which represents the ratio of buoyancy force to inertia force and stands in for the contribution of natural convection. When Ar >> 1, natural convection dominates and when Ar << 1, forced convection dominates. When natural convection isn't a significant factor, mathematical analysis with forced convection theories typically yields accurate results.

The parameter of importance in forced convection is the Peclet number, which is the ratio of advection (movement by currents) and diffusion (movement from high to low concentrations) of heat. When the Peclet number is much greater than unity (1), advection dominates diffusion. Similarly, much smaller ratios indicate a higher rate of diffusion relative to advection.
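To tie the dimensionless groups above together, here is a small, self-contained Python sketch that computes Gr, Ra, and Ar for a hot surface in air and classifies the convection regime. The numerical property values and the assumed Reynolds number are illustrative assumptions, not data from the article.

```python
def grashof(g, beta, delta_T, L, nu):
    """Gr = g * beta * dT * L**3 / nu**2: buoyancy versus viscous friction."""
    return g * beta * delta_T * L**3 / nu**2

def archimedes(Gr, Re):
    """Ar = Gr / Re**2: relative strength of natural versus forced convection."""
    return Gr / Re**2

# Illustrative values for air near 300 K around a 0.1 m tall hot plate (assumed, not from the text):
g = 9.81            # m/s^2
beta = 1 / 300.0    # 1/K, ideal-gas approximation beta = 1/T
delta_T = 40.0      # K, surface temperature minus bulk fluid temperature
L = 0.1             # m, characteristic length
nu = 1.6e-5         # m^2/s, kinematic viscosity of air
Pr = 0.71           # Prandtl number of air

Gr = grashof(g, beta, delta_T, L, nu)
Ra = Gr * Pr
Re = 500.0          # assumed Reynolds number from a weak external flow
Ar = archimedes(Gr, Re)

print(f"Gr = {Gr:.3e}, Ra = {Ra:.3e}, Ar = {Ar:.3e}")
print("natural convection dominates" if Ar > 1 else
      "forced convection dominates" if Ar < 1 else "mixed convection")
```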
http://www.reference.com/browse/wiki/Convective_heat_transfer
13
17
Imagine we have four bags containing numbers from a sequence. What numbers can we make now? A collection of games on the NIM theme Caroline and James pick sets of five numbers. Charlie chooses three of them that add together to make a multiple of three. Can they stop him? Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter. Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The loser is the player who takes the last counter. Choose four consecutive whole numbers. Multiply the first and last numbers together. Multiply the middle pair together. What do you notice? How many pairs of numbers can you find that add up to a multiple of 11? Do you notice anything interesting about your results? Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number? Can you explain the surprising results Jo found when she calculated the difference between square numbers? Janine noticed, while studying some cube numbers, that if you take three consecutive whole numbers and multiply them together and then add the middle number of the three, you get the middle number. . . . Charlie and Lynne put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you The Tower of Hanoi is an ancient mathematical challenge. Working on the building blocks may help you to explain the patterns you Choose any two numbers. Call them a and b. Work out the arithmetic mean and the geometric mean. Which is bigger? Repeat for other pairs of numbers. What do you notice? List any 3 numbers. It is always possible to find a subset of adjacent numbers that add up to a multiple of 3. Can you explain why and prove it? Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13. Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers. . . . Jo has three numbers which she adds together in pairs. When she does this she has three different totals: 11, 17 and 22 What are the three numbers Jo had to start with?” Sets of integers like 3, 4, 5 are called Pythagorean Triples, because they could be the lengths of the sides of a right-angled triangle. Can you find any more? Charlie has made a Magic V. Can you use his example to make some more? And how about Magic Ls, Ns and Ws? Imagine we have four bags containing a large number of 1s, 4s, 7s and 10s. What numbers can we make? A country has decided to have just two different coins, 3z and 5z coins. Which totals can be made? Is there a largest total that cannot be made? How do you know? When number pyramids have a sequence on the bottom layer, some interesting patterns emerge... An article which gives an account of some properties of magic squares. A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why? A little bit of algebra explains this 'magic'. Ask a friend to pick 3 consecutive numbers and to tell you a multiple of 3. 
Then ask them to add the four numbers and multiply by 67, and to tell you. . . . Pick the number of times a week that you eat chocolate. This number must be more than one but less than ten. Multiply this number by 2. Add 5 (for Sunday). Multiply by 50... Can you explain why it. . . . An account of some magic squares and their properties and and how to construct them for yourself. A game for 2 players Imagine starting with one yellow cube and covering it all over with a single layer of red cubes, and then covering that cube with a layer of blue cubes. How many red and blue cubes would you need? The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of What is the ratio of the area of a square inscribed in a semicircle to the area of the square inscribed in the entire circle? A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target. Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs? Is there a relationship between the coordinates of the endpoints of a line and the number of grid squares it crosses? A game for 2 players. Set out 16 counters in rows of 1,3,5 and 7. Players take turns to remove any number of counters from a row. The player left with the last counter looses. What is the volume of the solid formed by rotating this right angled triangle about the hypotenuse? These gnomons appear to have more than a passing connection with the Fibonacci sequence. This problem ask you to investigate some of Can you tangle yourself up and reach any fraction? Can you find sets of sloping lines that enclose a square? Spotting patterns can be an important first step - explaining why it is appropriate to generalise is the next step, and often the most interesting and important. This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning. We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4 With one cut a piece of card 16 cm by 9 cm can be made into two pieces which can be rearranged to form a square 12 cm by 12 cm. Explain how this can be done. A package contains a set of resources designed to develop pupils’ mathematical thinking. This package places a particular emphasis on “generalising” and is designed to meet the. . . . Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48. The sum of the numbers 4 and 1 [1/3] is the same as the product of 4 and 1 [1/3]; that is to say 4 + 1 [1/3] = 4 × 1 [1/3]. What other numbers have the sum equal to the product and can this be so for. . . . Try entering different sets of numbers in the number pyramids. How does the total at the top change? An article for teachers and pupils that encourages you to look at the mathematical properties of similar games. Pick a square within a multiplication square and add the numbers on each diagonal. What do you notice? Build gnomons that are related to the Fibonacci sequence and try to explain why this is possible.
http://nrich.maths.org/public/leg.php?code=72&cl=3&cldcmpid=867
13
22
When we look at the solar system now, we see it after it's had billions of years of evolution under its belt. Things have changed a lot since it first formed out of a swirling disk of material, 4.5 billion years ago.

We can make some pretty good guesses about the way things looked back then, though. We can see other systems forming around other stars, for example, to get an idea of what things look like when they're young. But we can also look at our own solar system, look at the planets, the comets, the asteroids, and, like astronomical archaeologists, get a glimpse into our own cosmic past.

We know that asteroids formed along with the rest of the system back then. We also know that there are many kinds of asteroids: rocky, metallic, chondritic, some even have ice on or near their surface. Some formed far out in the solar system, and some formed near in. The thing is, we think the vast majority of the asteroids that formed close to the Sun were absorbed by — and by that, I mean smacked into and became part of — the inner planets, including Earth. Only a handful of those asteroids still remain intact after all this time. But now we think we've found one: Lutetia, a 130 km-long asteroid in the main belt.

Using a fleet of telescopes, astronomers carefully measured the spectrum of Lutetia — including spectra taken by the European Rosetta space probe, which visited Lutetia in July 2010 and returned incredible close-up images (see the gallery below). The spectra were then compared to spectra of meteorites found on Earth — meteorites come from asteroids after a collision blasts material from them, so they represent a collection of different kinds of asteroids that we can test in the lab here on Earth.

They found that the spectrum of Lutetia matches a very specific type of meteorite found on Earth, called enstatite chondrites. These rare rocks have a very unusual composition that indicates they were formed very near the Sun, where the heat from our star strongly affected their formation. They have a clearly different composition than meteorites which formed in asteroids farther out in the solar system, and are an excellent indication that Lutetia formed in the inner solar system, in the same region where the Earth did.

So Lutetia is a local! There aren't many like it in the asteroid main belt between Mars and Jupiter, and in fact it's a bit of a mystery how it got there; perhaps a near encounter with Earth or Venus flung it out that way, and then the influence of Jupiter made its orbit circular. And there it sits, a relatively pristine example of what the solar system was like when it was young.

Currently, the Dawn space mission is orbiting the large asteroid Vesta, and will make its way to Ceres, the largest asteroid, after that. I have to wonder if NASA is eyeing Lutetia as another possible target. It's an amazing chance to visit an object that may yield a lot of insight into our own planet when it was but a youth.

After all, you can take the asteroid out of the inner solar system, but you can't take the inner solar system out of the asteroid.

Image credit: ESA 2010 MPS for OSIRIS Team. MPS/UPD/LAM/IAA/RSSD/INTA/UPM/DASP/IDA
http://blogs.discovermagazine.com/badastronomy/2011/11/15/lutetia-may-have-witnessed-the-birth-of-the-earth
13
18
Unit A: The Life Processes

1. Get Set to Explore

- diffusion: The movement of particles from an area of higher concentration to an area of lower concentration.
- organelle: Small structures inside a cell that perform specific functions.
- ratio: The relationship between two quantities, which can be expressed as a fraction.
- surface area: The area of the surface of an object; the surface area of a cell can be expressed as 4πr², where r is the radius of the cell.
- volume: The amount of space something occupies; the volume of a cell can be expressed by the formula (4/3)πr³, where r is the radius of the cell.

- Review what a cell is and draw a circle on the board to represent a cell. Referring students to Student Edition pages A8–A10, call on volunteers to come up and draw different organelles inside the cell. Have them label each with its name and function.
- Go over the idea that some organisms are made of one cell and some are made of many cells. Ask students to name some single-celled organisms. Discuss the size of these organisms. Elicit the idea that single-celled organisms tend to be small, in comparison to multi-cellular organisms. Write the Discover! question on the board and give students a chance to discuss answers. Record their ideas on the board.
- Present the vocabulary words, reinforcing the math concepts related to ratio, surface area, and volume. Define π for students. Explain that the vocabulary words give hints to help them answer the Discover! question. Encourage students to try to formulate an answer to the question. Let them revise the answers already written on the board.

2. Guide the Exploration

- Have students launch the Discover! Simulation. Tell them to listen closely to the directions. Point out the scroll bar for changing the size of the cell. Explain that moving the cursor over the controls gives definitions of key terms.
- Students should try changing the cell to several different sizes, taking notes on what they observe. They should pay attention to how the bar graphs comparing surface area and volume change as the size of the cell changes. They should watch the animation and listen to the narration at the conclusion of each animation.
- Before students do Step 3 of the simulation, return to the Discover! question and let students propose answers. Encourage them to revise or add to the ideas already up on the board. Then, direct them to complete the simulation.
- Ask students to summarize Step 3's Wrap-Up text in their own words. Encourage them to explain what happened in the simulation when the cell got too large. Make sure students understand the relationship between a cell's size and its need and ability to take in nutrients. Allow students to revise their previous answers to the Discover! question, which are recorded on the board. To tie concepts together, have a volunteer read the final paragraph of the Wrap-Up text aloud to the class.
- Refer students to the drawing of the cell on the board the class made before doing the simulation. Students can return to the simulation to do the Extension activity. Challenge them to look for all of the organelles shown in the drawing on the board. Students should be able to identify the following organelles in the computer cell: cell membrane, nucleus, mitochondria, endoplasmic reticulum, ribosomes, Golgi apparatus.

If time permits, present students with the following question and activity:
- Critical Thinking: Classify. Study the structure of, and organelles in, the cell pictured in the simulation.
What kind of cell is shown? Answer: An animal cell is shown. Students can infer this is an animal cell because of the lack of cell wall and chloroplasts. - Inquiry Skill Use Models Use a rubber band or another type of thin, stretchy material, to make a model of a cell membrane. Use beads of different colors to represent nutrients and wastes. Show how the cell membrane can change to aid a single-celled organism in taking food into the cell. Answer: Students can use the rubber band to make arms like an amoeba for surrounding and ingesting food. They may be able to modify a rubber band to represent cilia and model the process a paramecium uses to take in food. If necessary, refer students to Student Edition page A16. 4. Reaching All Learners English Language Learners Group English Language Learners with native English speakers who are good at explaining things and are strong in science and math. The groups should go over the vocabulary terms. Let students remain in their groups to do the simulation. Check in with the native English speakers working with the groups from time to time to help clarify concepts for groups.
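The relationship driving the simulation, that volume (4/3)πr³ grows faster than surface area 4πr², is easy to demonstrate with a short script. The Python sketch below is an illustration teachers could adapt; it is not part of the published lesson materials.

```python
import math

def surface_area(r):
    return 4 * math.pi * r**2        # 4*pi*r^2

def volume(r):
    return (4 / 3) * math.pi * r**3  # (4/3)*pi*r^3

print("radius   SA/V ratio")
for r in [1, 2, 4, 8, 16]:
    ratio = surface_area(r) / volume(r)   # algebraically this simplifies to 3/r
    print(f"{r:6}   {ratio:.3f}")

# Each time the radius doubles, the surface-area-to-volume ratio is cut in half,
# so a large cell has relatively less membrane through which to take in nutrients.
```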
http://www.eduplace.com/science/hmsc/5/a/simulation/sim_5a.shtml
13
10
Neptune Comes Full Circle

Neptune has arrived at the same location in space where it was discovered nearly 165 years ago. To commemorate the event, NASA's Hubble Space Telescope has taken "anniversary pictures" of the blue-green giant planet.

Neptune is the most distant major planet in our solar system. Studying this unique planet can help astrobiologists understand the diversity of planetary bodies that could exist in the Universe. Understanding Neptune's role in the formation and evolution of our solar system can also provide clues about how the Earth itself became habitable for life as we know it.

German astronomer Johann Galle discovered Neptune on September 23, 1846. At the time, the discovery doubled the size of the known solar system. The planet is 2.8 billion miles (4.5 billion kilometers) from the Sun, 30 times farther than Earth. Under the Sun's weak pull at that distance, Neptune plods along in its huge orbit, slowly completing one revolution approximately every 165 years.

The four Hubble images of Neptune below were taken with the Wide Field Camera 3 on June 25-26, during the planet's 16-hour rotation. The snapshots were taken at roughly four-hour intervals, offering a full view of the planet. The images reveal high-altitude clouds in the northern and southern hemispheres. The clouds are composed of methane ice crystals.

The giant planet experiences seasons just as Earth does, because it is tilted 29 degrees, similar to Earth's 23-degree tilt. Instead of lasting a few months, each of Neptune's seasons continues for about 40 years. The snapshots show that Neptune has more clouds than a few years ago, when most of the clouds were in the southern hemisphere. These Hubble views reveal that the cloud activity is shifting to the northern hemisphere. It is early summer in the southern hemisphere and winter in the northern hemisphere.

In the Hubble images, absorption of red light by methane in Neptune's atmosphere gives the planet its distinctive aqua color. The clouds are tinted pink because they are reflecting near-infrared light. The temperature difference between Neptune's strong internal heat source and its frigid cloud tops, about minus 260 degrees Fahrenheit, might trigger instabilities in the atmosphere that drive large-scale weather changes.

Neptune has an intriguing history. It was Uranus that led astronomers to Neptune. Uranus, the seventh planet from the Sun, is Neptune's inner neighbor. British astronomer Sir William Herschel and his sister Caroline found Uranus in 1781, 55 years before Neptune was spotted. Shortly after the discovery, Herschel noticed that the orbit of Uranus did not match the predictions of Newton's theory of gravity.

Studying Uranus in 1821, French astronomer Alexis Bouvard speculated that another planet was tugging on the giant planet, altering its motion. Twenty years later, Urbain Le Verrier of France and John Couch Adams of England, who were mathematicians and astronomers, independently predicted the location of the mystery planet by measuring how the gravity of a hypothetical unseen object could affect Uranus's path. Le Verrier sent a note describing his predicted location of the new planet to the German astronomer Johann Gottfried Galle at the Berlin Observatory. Over the course of two nights in 1846, Galle found and identified Neptune as a planet, less than a degree from Le Verrier's predicted position. The discovery was hailed as a major success for Newton's theory of gravity and the understanding of the universe.
Galle was not the first to see Neptune. In December 1612, while observing Jupiter and its moons with his handmade telescope, astronomer Galileo Galilei recorded Neptune in his notebook, but as a star. More than a month later, in January 1613, he noted that the "star" appeared to have moved relative to other stars. But Galileo never identified Neptune as a planet, and apparently did not follow up those observations, so he failed to be credited with the discovery. Neptune is not visible to the naked eye, but may be seen in binoculars or a small telescope. It can be found in the constellation Aquarius, close to the boundary with Capricorn. Neptune-mass planets orbiting other stars may be common in our Milky Way galaxy. NASA's Kepler mission, launched in 2009 to hunt for Earth-size planets, is finding increasingly smaller extrasolar planets, including many the size of Neptune. Hubble is a project of international cooperation between NASA and the European Space Agency. Goddard manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy Inc. in Washington.
http://www.astrobio.net/includes/html_to_doc_execute.php?id=4090&component=news
13
24
A musician will call the frequency of a sound its pitch. When the frequencies of two pitches differ by a factor of two, we say they harmonize. We call this interval between the two pitches an octave. This perception of harmony happens because the two sounds reinforce each other completely. Indeed, some people have trouble telling two notes apart when they differ by an octave. This trivial example of harmony is true for all powers of two, including 4, 8, and higher powers.

Classical European music divided that perfectly harmonious "factor of two" interval into eight asymmetric steps; for this historical reason, it is called an octave. Other cultures divide this same interval into different numbers of steps with different intervals. More modern European music further subdivides the octave, creating a 12-step system. The most modern version of this system has 12 equally spaced intervals, a net simplification over the older 8-step system. The pitches are assigned names using flats (♭) and sharps (♯), leading to each pitch having several names. We'll simplify this system slightly, and use the following 12 names for the pitches within a single octave: A, A♯, B, C, C♯, D, D♯, E, F, F♯, G, G♯, followed by another A in the next octave.

The eight undecorated names (A through G and an A with a frequency double the original A) form our basic octave; the additional notes highlight the interesting asymmetries. For example, the interval from A to B is called a whole step or a second, with A♯ being half-way between. The interval from B to C, however, is only a half step to begin with. Also, it is common to number the various octaves as though the octaves begin with the C, not the A. So, some musicians consider the basic scale to be C, D, E, F, G, A, B, with a C in the next higher octave. The higher C is twice the frequency of the lower C.

The tuning of an instrument to play these pitches is called its temperament. A check on the web for reference material on tuning and temperament will reveal some interesting ways to arrive at the tuning of a musical instrument. It is surprising to learn that there are many other ways to arrive at the 12 steps of the scale. This demonstrates that our ear is either remarkably inaccurate or remarkably forgiving of errors in tuning. We'll explore a number of alternate systems for deriving the 12 pitches of a scale. We'll use the simple equal-temperament rules, plus we'll derive the pitches from the overtones we hear, plus a more musical rule called the circle of fifths, as well as a system called Pythagorean Tuning.

Interesting side topics are the questions of how accurate the human ear really is, and can we really hear the differences? Clearly, professional musicians will spend time on ear training to spot fine gradations of pitch. However, even non-musicians have remarkably accurate hearing and are easily bothered by small discrepancies in tuning. Musicians will divide the octave into 1200 cents. Errors on the order of 50 cents, 1/24 of the octave, are noticeable even to people who claim they are "tone deaf". When two tunings produce pitches with a ratio larger than 1.0293, it is easily recognized as out of tune.

These exercises will make extensive use of loops and the list data structure.

The equal temperament tuning divides the octave into twelve equally sized steps. Moving up the scale is done by multiplying the base frequency by some amount between 1 and 2. If we multiply a base frequency by 2 or more, we have jumped to another octave. If we multiply a base frequency by a value between 0 and 0.5, we have jumped into a lower octave.
When we multiply a frequency by values between 0.5 and 1, we are computing lower pitches in the same octave. Similarly, multiplying a frequency by values between 1 and 2 computes a higher pitch in the same octave. We want to divide the octave into twelve steps: when we do a sequence of twelve multiplies by this step, we should arrive at an exact doubling of the base frequency. The steps of the octave, then, would be b·2^(1/12), b·2^(2/12), ..., b·2^(12/12) = 2b. This step value, therefore, is 2^(1/12), approximately 1.059463. If we multiply this by itself 12 times, once for each of the 12 steps, we find that (2^(1/12))^12 = 2 exactly. For a given pitch number, p, from 0 to 88, the following formula gives us the frequency:

f(p) = b × 2^(p/12)    (1)

We can plug in a base frequency, b, of 27.5 Hz for the low A on a piano and get the individual pitches for each of the 88 keys. Actual piano tuning is a bit more subtle than this, but these frequencies are very close to the ideal modern piano tuning.

Equal Temperament Pitches. Develop a loop to generate these pitches and their names. If you create a simple tuple of the twelve names shown above (from A to G♯), you can pick out the proper name from the tuple for a given step, s, using int( s % 12 ).

Check Your Results. You should find that an "A" has a pitch of 440, and the "G" ten steps above it will be 783.99 Hz. This 440 Hz "A" is the most widely used reference pitch for tuning musical instruments.

A particular musical sound consists of the fundamental pitch, plus a sequence of overtones of higher frequency, but lower power. The distribution of power among these overtones determines the kind of instrument we hear. We can call the overtones the spectrum of frequencies created by an instrument. A violin's frequency spectrum is distinct from the frequency spectrum of a clarinet. The overtones are usually integer multiples of the base frequency. When any instrument plays an A at 440 Hz, it also plays A's at 880 Hz, 1760 Hz, 3520 Hz, and on to higher and higher frequencies. While we are not often consciously aware of these overtones, they are profound, and determine the pitches that we find harmonious and discordant. If we expand the frequency spectrum through the first 24 overtones, we find almost all of the musical pitches in our equal tempered scale. Some pitches (the octaves, for example) match precisely, while other pitches don't match very well at all. This is a spread of almost five octaves of overtones, about the limit of human hearing.

Even if we use a low base frequency, b, of 27.5 Hz, it isn't easy to compare the pitch for the top overtone, 24·b, with a lower overtone like 3·b: they're in two different octaves. However, we can divide each frequency by a power of 2, which will normalize it into the lowest octave. Once we have the lowest octave version of each overtone pitch, we can compare them against the equal temperament pitch for the same octave. The following equation computes the highest power of 2, 2^o, for a given overtone multiplier, m, such that 2^o ≤ m < 2^(o+1):

o = ⌊log2(m)⌋    (2)

Given this highest power of 2, 2^o, we can normalize a frequency by simple division to create what we could call the first octave pitch, f1:

f1 = m × b / 2^o    (3)

The list of first octave pitches arrives in a peculiar order. You'll need to collect the values into a list and sort that list. You can then produce a table showing the 12 pitches of a scale using the equal temperament and the overtones method. They don't match precisely, which leads us to an interesting musical question of which sounds "better" to most listeners.
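Here is one possible Python sketch of the equal-temperament loop and the first-octave normalization helper described above. The tuple of names (with "#" standing in for the sharp sign), the helper names, and the spot checks are illustrative choices, not part of the original exercise text.

import math

NAMES = ("A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#")
BASE = 27.5   # Hz, the low A on a piano

def equal_temperament(p, base=BASE):
    """Frequency of pitch number p (0..87), formula (1)."""
    return base * 2 ** (p / 12)

def first_octave(m, base=BASE):
    """Normalize overtone multiplier m into the first octave, formulas (2) and (3)."""
    o = math.floor(math.log2(m))     # highest power of 2 not exceeding m
    return m * base / 2 ** o

for p in range(88):
    print(p, NAMES[p % 12], round(equal_temperament(p), 2))

# Spot checks from the text: pitch 48 is the 440 Hz "A", pitch 58 is the 783.99 Hz "G",
# and first_octave(7) gives the 48.125 Hz value quoted in the overtone discussion.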
Overtone Pitches. Develop a loop to multiply the base frequency of 27.5 Hz by values from 3 to 24, compute the highest power of 2 required to normalize this back into the first octave, using (2), and compute the first octave values, f1, using (3). Save these first octave values in a list, sort it, and produce a report comparing these values with the closest matching equal temperament values. Note that you will be using 22 overtone multipliers to compute twelve scale values. You will need to discard duplicates from your list of overtone frequencies.

Check Your Results. You should find that the 6th overtone is 192.5 Hz, which normalizes to 48.125 in the first octave. The nearest comparable equal-tempered pitch is 48.99 Hz. This is an audible difference to some people; the threshold for most people to say something sounds wrong is a ratio of 1.029, and these two differ by a ratio of about 1.018.

When we look at the overtone analysis, the second overtone is three times the base frequency. When we normalize this back into the first octave, it produces a note with the frequency ratio of 3/2. This is almost as harmonious as the octave, which had a frequency ratio of exactly 2. In the original 8-step scale, this was the 5th step; the interval is called a fifth for this historical reason. It is also called a dominant. Looking at the names of our notes, this is "E", the 7th step of the more modern 12-step scale that starts on "A". This pitch has an interesting mathematical property. When we look at the 12-step tuning, we see that numbers like 1, 2, 3, 4, and 6 divide the 12-step octave evenly. However, numbers like 5 and 7 don't divide the octave evenly. This leads to an interesting cycle of notes that are separated by seven steps: A, E, B, F♯, C♯, .... We can see this clearly by writing the 12 names of notes around the outside of a circle. Put each note in the position of an hour, with A in the 12-o'clock position. You can then walk around the circle in groups of seven pitches. This is called the Circle of Fifths because we see all 12 pitches by stepping through the names in intervals of a fifth. This also works for the 5th step of the 12-step scale; the interval is called a fourth in the old 8-step scale. Looking at our note names, it is the "D". If we use this interval, we create a Circle of Fourths.

Write two loops to step around the names of notes in steps of 7 and steps of 5. You can use something like range( 0, 12*7, 7 ) or range( 0, 12*5, 5 ) to get the steps, s. You can then use names[s % 12] to get the specific names for each pitch. You'll know these both work when you see that the two sequences are the same things in opposite orders.

Circle of Fifths Pitches. Develop a loop similar to the one in the overtones exercise; use successive powers of 3/2 (that is, 3/2, (3/2)^2, (3/2)^3, and so on) as the multipliers to compute the 12 pitches around the circle of fifths. You'll need to compute the highest power of 2, using (2), and normalize the pitches into the first octave using (3). Save these first octave values in a list, indexed by s % 12; you don't need to sort this list, since the pitch can be computed directly from the step.

Rational Circle of Fifths. Use the Python rational number module, fractions, to do these calculations as well.

Check Your Results. Using this method, you'll find that "G" could be defined as 49.55 Hz. The overtone analysis suggested 48.125 Hz. The equal temperament suggested 48.99 Hz. When we do the circle of fifths calculations using rational numbers instead of floating point numbers, we find a number of simple-looking fractions like 3/2, 4/3, 9/8, 16/9 in our results.
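Here is one possible sketch of the circle-of-fifths calculation using the fractions module. The normalize helper and the loop structure are illustrative choices; the normalization simply applies formulas (2) and (3) above, and the step sequence mirrors the range(0, 12*7, 7) loop suggested earlier.

from fractions import Fraction
import math

BASE = Fraction(55, 2)   # 27.5 Hz as an exact fraction

def normalize(ratio):
    """Divide by the highest power of 2, per (2) and (3), so the ratio lands in [1, 2)."""
    o = math.floor(math.log2(ratio))
    return ratio / 2 ** o

pitches = [None] * 12
for i, s in enumerate(range(0, 12 * 7, 7)):   # s counts semitone steps, seven per fifth
    pitches[s % 12] = normalize(Fraction(3, 2) ** i) * BASE

for step, freq in enumerate(pitches):
    print(step, freq, round(float(freq), 2))

# Step 10 ("G") should come out near the 49.55 Hz quoted in Check Your Results.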
These fractions lead to a geometrical interpretation of the musical intervals. These fractions correspond with some early writings on music by the mathematician Pythagoras. We'll provide one commonly-used list of fractions for Pythagorean tuning. These can be compared with other results to make the whole question of scale tuning even more complex.

Pythagorean Pitches. Develop a simple representation for the above ratios. A list of tuples works well, for example. Use the ratio to compute the frequencies for the various pitches, using 27.5 Hz for the base frequency of the low "A". Compare these values with the equal temperament, overtones and circle of fifths tunings.

Check Your Results. The value for "G" is .

The subject of music is rich with cultural and political overtones. We'll try to avoid delving too deeply into anything outside the basic acoustic properties of pitches. One of the most popular alternative scales divides the octave into five equally-spaced steps. This tuning produces pitches that are distinct from those in the 12 pitches available in European music. The original musical tradition behind the blues once used a five-step scale. You can revise the formula in (1) to use five steps instead of twelve. This will provide a new table of frequencies. The intervals should be called something distinctive like "V", "W", "X", "Y", "Z" and "V" in the second octave.

Five-Tone Pitches. Develop a loop similar to the 12-tone Equal Temperament exercise (Equal Temperament) to create the 5-tone scale pitches. Note that the 12-tone scale leads to 88 distinct pitches on a piano; this 5-tone scale only needs 36.

Compare 12-Tone and 5-Tone Scales. Produce a table aligning the 12-tone pitch names and frequencies with the 5-tone names and frequencies. You will have to do some clever sorting and matching. The frequencies for "V" will match the frequencies for "A" precisely. The other pitches, however, will fall into gaps. The resulting table should have the columns: name 12 | freq. | name 5 | freq.
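A minimal sketch of the five-tone loop, assuming the "V" through "Z" names suggested above and the same 27.5 Hz base:

NAMES_5 = ("V", "W", "X", "Y", "Z")
BASE = 27.5   # Hz; the low "A" doubles as the low "V"

for p in range(36):
    freq = BASE * 2 ** (p / 5)    # formula (1) revised to five steps per octave
    print(p, NAMES_5[p % 5], round(freq, 2))

Every fifth pitch ("V") lands exactly on an "A" frequency, which is the alignment the comparison exercise asks you to demonstrate.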
http://www.itmaybeahack.com/homepage/books/python/html/p05/p05c03_pitches.html
13
67
Jupiter has 63 confirmed moons, giving it the largest retinue of moons with "reasonably secure" orbits of any planet in the Solar System. The most massive of them, the four Galilean moons, were discovered in 1610 by Galileo Galilei and were the first objects found to orbit a body that was neither Earth nor the Sun. From the end of the 19th century, dozens of much smaller Jovian moons have been discovered and have received the names of lovers, conquests, or daughters of the Roman god Jupiter, or his Greek equivalent, Zeus. Eight of Jupiter's moons are regular satellites, with prograde and nearly circular orbits that are not greatly inclined with respect to Jupiter's equatorial plane. The Galilean satellites are spheroidal in shape, and so would be considered dwarf planets if they were in direct orbit about the Sun. The other four regular satellites are much smaller and closer to Jupiter; these serve as sources of the dust that makes up Jupiter's rings. Jupiter's other 54 or 55 moons are tiny irregular satellites, whose prograde and retrograde orbits are much farther from Jupiter and have high inclinations and eccentricities. These moons were likely captured by Jupiter from solar orbits. There are 13 recently-discovered irregular satellites that have not yet been named, plus a 14th whose orbit has not yet been established.

The moons' physical and orbital characteristics vary widely. The four Galileans are all over 3000 km in diameter; the largest Galilean, Ganymede, is the largest object in the Solar System outside the Sun and the eight planets. All other Jovian moons are less than 250 km in diameter, with most barely exceeding five km. Even Europa, the smallest of the Galileans, is five thousand times more massive than all the non-Galilean moons combined. Orbital shapes range from nearly perfectly circular to highly eccentric and inclined, and many revolve in the direction opposite to Jupiter's spin (retrograde motion). Orbital periods range from seven hours (taking less time than Jupiter does to spin around its axis), to some 3000 times more (almost three Earth years).

Origin and evolution

Jupiter's regular satellites are believed to have formed from a circumplanetary disk, a ring of accreting gas and solid debris analogous to a protoplanetary disk. They may be the remnants of a score of Galilean-mass satellites that formed early in Jupiter's history. Simulations suggest that, while the disk had a relatively low mass at any given moment, over time a substantial fraction (several tens of a percent) of the mass of Jupiter captured from the Solar nebula was processed through it. However, a disk mass of only 2% that of Jupiter is required to explain the existing satellites. Thus there may have been several generations of Galilean-mass satellites in Jupiter's early history. Each generation of moons would have spiraled into Jupiter, due to drag from the disk, with new moons then forming from the new debris captured from the Solar nebula. By the time the present (possibly fifth) generation formed, the disk had thinned out to the point that it no longer greatly interfered with the moons' orbits. The current Galilean moons were still affected, falling into and being partially protected by an orbital resonance which still exists for Io, Europa, and Ganymede. Ganymede's larger mass means that it would have migrated inward at a faster rate than Europa or Io.
The outer, irregular moons are thought to have originated from passing asteroids captured while the protolunar disk was still massive enough to absorb much of their momentum and thus capture them into orbit. Many were broken up by the stresses of capture, or afterward by collisions with other small bodies, producing the families we see today.

The first claimed observation of one of Jupiter's moons is that of the Chinese astronomer Gan De around 364 BC. However, the first certain observations of Jupiter's satellites were those of Galileo Galilei in 1609. By March 1610, he had sighted the four massive Galilean moons with his 30x magnification telescope: Ganymede, Callisto, Io, and Europa. No additional satellites were discovered until E.E. Barnard observed Amalthea in 1892. With the aid of telescopic photography, further discoveries followed quickly over the course of the twentieth century. Himalia was discovered in 1904, Elara in 1905, Pasiphaë in 1908, Sinope in 1914, Lysithea and Carme in 1938, Ananke in 1951, and Leda in 1974. By the time Voyager space probes reached Jupiter around 1979, 13 moons had been discovered; Themisto had been observed in 1975, but due to insufficient initial observation data it was lost until 2000. The Voyager missions discovered an additional three inner moons in 1979: Metis, Adrastea, and Thebe. For two decades no additional moons were discovered; but between October 1999 and February 2003, researchers using sensitive ground-based detectors found another 32 moons, most of which were discovered by a team led by Scott S. Sheppard and David C. Jewitt. These are tiny moons, in long, eccentric, generally retrograde orbits, averaging 3 km (1.9 mi) in diameter, with the largest being just 9 km (5.6 mi) across. All of these moons are thought to be captured asteroidal or perhaps cometary bodies, possibly fragmented into several pieces, but very little is actually known about them. Fourteen additional moons have been discovered since then but not yet confirmed, bringing the total number of observed moons of Jupiter to 63. As of 2008, this is the most of any planet in the Solar System, but additional undiscovered, tiny moons may exist.

- Main article: Naming of moons

The Galilean moons of Jupiter (Io, Europa, Ganymede and Callisto) were named by Simon Marius soon after their discovery in 1610. However, until the 20th century these names fell out of favor, and instead the moons were referred to in the astronomical literature simply as "Jupiter I", "Jupiter II", etc., or as "the first satellite of Jupiter", "Jupiter's second satellite", and so on. The names Io, Europa, Ganymede, and Callisto became popular in the 20th century, while the rest of the moons, usually numbered in Roman numerals V (5) through XII (12), remained unnamed. By a popular though unofficial convention, Jupiter V, discovered in 1892, was given the name Amalthea, first used by the French astronomer Camille Flammarion. The other moons, in the majority of astronomical literature, were simply labeled by their Roman numeral (i.e. Jupiter IX) until the 1970s. In 1975, the International Astronomical Union's (IAU) "Task Group for Outer Solar System Nomenclature" granted names to satellites V–XIII, and provided for a formal naming process for future satellites to be discovered. The practice was to name newly discovered moons of Jupiter after lovers and favorites of the god Jupiter (Zeus) and, since 2004, after their descendants also.
All of Jupiter's satellites from XXXIV (Euporie) onward are named after daughters of Jupiter or Zeus. Some asteroids share the same names as moons of Jupiter: 9 Metis, 38 Leda, 52 Europa, 85 Io, 113 Amalthea, 239 Adrastea. Two more asteroids previously shared the names of Jovian moons until spelling differences were made permanent by the IAU: Ganymede and asteroid 1036 Ganymed; and Callisto and asteroid 204 Kallisto.

The regular satellites are split into two groups:

- Inner satellites or Amalthea group—they orbit very close to Jupiter: Metis, Adrastea, Amalthea, and Thebe. The innermost two orbit in less than a Jovian day, while the latter two are respectively the fifth and seventh largest moons in the Jovian system. Observations suggest that at least the largest member, Amalthea, did not form on its present orbit, but that it formed farther from the planet, or that it is a captured Solar System body. These moons, along with a number of as-yet-unseen inner moonlets, replenish and maintain Jupiter's faint ring system. Metis and Adrastea help to maintain Jupiter's main ring, while Amalthea and Thebe each maintain their own faint outer rings.
- Main group or Galilean moons—the four massive satellites: Ganymede, Callisto, Io, and Europa. With radii that are larger than any of the dwarf planets, they are some of the largest objects in the Solar System outside the Sun and the eight planets in terms of diameter. Respectively the first, third, fourth, and sixth largest natural satellites in the Solar System, they contain almost 99.999% of the total mass in orbit around Jupiter. Jupiter is about five thousand times more massive than the Galilean moons.[note 1] The inner three Galilean moons (Io, Europa, and Ganymede) also participate in a 1:2:4 orbital resonance. Models suggest that they formed by slow accretion in the low-density Jovian subnebula—a disc of the gas and dust that existed around Jupiter after its formation—which lasted up to 10 million years in the case of Callisto.

- Main article: Irregular satellite

The irregular satellites are substantially smaller objects with more distant and eccentric orbits. They form families with shared similarities in orbit (semi-major axis, inclination, eccentricity) and composition; it is believed that these are at least partially collisional families that were created when larger (but still small) parent bodies were shattered by impacts from asteroids captured by Jupiter's gravitational field. These families bear the names of their largest members. The identification of satellite families is tentative, but the following are typically listed:

- Prograde satellites:
- Retrograde satellites: the irregular retrograde satellites are thought to have originally been asteroids that were captured by drag from the tenuous outer regions of Jupiter's accretion disk while the Solar system was still forming, and were later shattered by impacts. They are far enough from Jupiter that their orbits are significantly disturbed by the gravitational field of the Sun.
- S/2003 J 12 is the innermost of the retrograde moons, and is not part of a known family.
- The Ananke group has a relatively wider spread than the previous groups, over 2.4 Gm in semi-major axis, 8.1° in inclination (between 145.7° and 154.8°), and eccentricities between 0.02 and 0.28. Most of the members appear gray, and are believed to have formed from the breakup of a captured asteroid.
- The Pasiphae group is quite dispersed, with a spread over 1.3 Gm, inclinations between 144.5° and 158.3°, and eccentricities between 0.25 and 0.43.
The colors also vary significantly, from red to grey, which might be the result of multiple collisions. Sinope, sometimes included into Pasiphae group, is red and given the difference in inclination, it could have been captured independently; Pasiphae and Sinope are also trapped in secular resonances with Jupiter. - S/2003 J 2 is the outermost moon of Jupiter, and is not part of a known family. The moons of Jupiter are listed below by orbital period. Moons massive enough for their surfaces to have collapsed into a spheroid are highlighted in bold. These are the four Galilean moons, which are comparable in size to Earth's Moon. The four inner moons are much smaller. The irregular captured moons are shaded light gray when prograde and dark gray when retrograde. | Semi-major axis| | Orbital period| | Discovery year| |1||XVI||Metis||ˈmiːtɨs||60×40×34||~3.6||127,690||+7h 4m 29s||0.06°||0.000 02||1979|| Synnott| |2||XV||Adrastea||ˌædrəˈstiːə||20×16×14||~0.2||128,690||+7h 9m 30s||0.03°||0.0015||1979|| Jewitt| |3||V||Amalthea||ˌæməlˈθiːə||250×146×128||208||181,366||+11h 57m 23s||0.374°||0.0032||1892||Barnard||Inner| |4||XIV||Thebe||ˈθiːbiː||116×98×84||~43||221,889||+16h 11m 17s||1.076°||0.0175||1979||Synnott| |8,900,000||421,700||+1.769 137 786||0.050°||0.0041||1610||Galilei||Galilean| |6||II||Europa||jʊˈroʊpə||3,121.6||4,800,000||671,034||+3.551 181 041||0.471°||0.0094||1610||Galilei||Galilean| |7||III||Ganymede||ˈgænɨmiːd||5,262.4||15,000,000||1,070,412||+7.154 552 96||0.204°||0.0011||1610||Galilei||Galilean| |8||IV||Callisto||kəˈlɪstoʊ||4,820.6||11,000,000||1,882,709||+16.689 018 4||0.205°||0.0074||1610||Galilei||Galilean| |9||XVIII||Themisto||θɨˈmɪstoʊ||8||0.069||7,393,216||+129.87||45.762°||0.2115||1975/2000|| Kowal & Roemer/| Sheppard et al. |14||—||S/2000 J 11||4||0.009 0||12 570 424||+287.93||27.584°||0.2058||2001||Sheppard et al.||Himalia| |15||XLVI||Carpo||ˈkɑrpoʊ||3||0.004 5||17,144,873||+458.62||56.001°||0.2735||2003||Sheppard et al.||Carpo| |16||—||S/2003 J 12||1||0.000 15||17,739,539||−482.69||142.680°||0.4449||2003||Sheppard et al.||?| |17||XXXIV||Euporie||juːˈpoʊrɨ.iː||2||0.001 5||19,088,434||−538.78||144.694°||0.0960||2002||Sheppard et al.||Ananke| |18||—||S/2003 J 3||2||0.001 5||19,621,780||−561.52||146.363°||0.2507||2003||Sheppard et al.||Ananke| |19||—||S/2003 J 18||2||0.001 5||19,812,577||−569.73||147.401°||0.1569||2003||Gladman et al.||Ananke| |20||XLII||Thelxinoe||θɛlkˈsɪnoʊ.iː||2||0.001 5||20,453,753||−597.61||151.292°||0.2684||2003||Sheppard et al.||Ananke| |21||XXXIII||Euanthe||juːˈænθiː||3||0.004 5||20,464,854||−598.09||143.409°||0.2000||2002||Sheppard et al.||Ananke| |22||XLV||Helike||ˈhɛlɨkiː||4||0.009 0||20,540,266||−601.40||154.586°||0.1374||2003||Sheppard et al.||Ananke| |23||XXXV||Orthosie||ɔrˈθɒsɨ.iː||2||0.001 5||20,567,971||−602.62||142.366°||0.2433||2002||Sheppard et al.||Ananke| |24||XXIV||Iocaste||ˌaɪ.əˈkæstiː||5||0.019||20,722,566||−609.43||147.248°||0.2874||2001||Sheppard et al.||Ananke| |25||—||S/2003 J 16||2||0.001 5||20,743,779||−610.36||150.769°||0.3184||2003||Gladman et al.||Ananke| |26||XXVII||Praxidike||prækˈsɪdɨkiː||7||0.043||20,823,948||−613.90||144.205°||0.1840||2001||Sheppard et al.||Ananke| |27||XXII||Harpalyke||hɑrˈpælɨkiː||4||0.012||21,063,814||−624.54||147.223°||0.2440||2001||Sheppard et al.||Ananke| |28||XL||Mneme||ˈniːmiː||2||0.001 5||21,129,786||−627.48||149.732°||0.3169||2003||Gladman et al.||Ananke| |29||XXX||Hermippe||hɚˈmɪpiː||4||0.009 0||21,182,086||−629.81||151.242°||0.2290||2002||Sheppard et al.||Ananke?| 
|30||XXIX||Thyone||θaɪˈoʊniː||4||0.009 0||21,405,570||−639.80||147.276°||0.2525||2002||Sheppard et al.||Ananke| |32||—||S/2003 J 17||2||0.001 5||22,134,306||−672.75||162.490°||0.2379||2003||Gladman et al.||Carme| |33||XXXI||Aitne||ˈaɪtniː||3||0.004 5||22,285,161||−679.64||165.562°||0.3927||2002||Sheppard et al.||Carme| |34||XXXVII||Kale||ˈkeɪliː||2||0.001 5||22,409,207||−685.32||165.378°||0.2011||2002||Sheppard et al.||Carme| |35||XX||Taygete||teiˈɪdʒɨtiː||5||0.016||22,438,648||−686.67||164.890°||0.3678||2001||Sheppard et al.||Carme| |36||—||S/2003 J 19||2||0.001 5||22,709,061||−699.12||164.727°||0.1961||2003||Gladman et al.||Carme| |37||XXI||Chaldene||kælˈdiːniː||4||0.007 5||22,713,444||−699.33||167.070°||0.2916||2001||Sheppard et al.||Carme| |38||—||S/2003 J 15||2||0.001 5||22,720,999||−699.68||141.812°||0.0932||2003||Sheppard et al.||Ananke?| |39||—||S/2003 J 10||2||0.001 5||22,730,813||−700.13||163.813°||0.3438||2003||Sheppard et al.||Carme?| |40||—||S/2003 J 23||2||0.001 5||22,739,654||−700.54||148.849°||0.3930||2004||Sheppard et al.||Pasiphaë| |41||XXV||Erinome||ɨˈrɪnəmiː||3||0.004 5||22,986,266||−711.96||163.737°||0.2552||2001||Sheppard et al.||Carme| |42||XLI||Aoede||eɪˈiːdiː||4||0.009 0||23,044,175||−714.66||160.482°||0.6011||2003||Sheppard et al.||Pasiphaë| |43||XLIV||Kallichore||kəˈlɪkəriː||2||0.001 5||23,111,823||−717.81||164.605°||0.2041||2003||Sheppard et al.||Carme?| |44||XXIII||Kalyke||ˈkælɨkiː||5||0.019||23,180,773||−721.02||165.505°||0.2139||2001||Sheppard et al.||Carme| |46||XVII||Callirrhoe||kəˈlɪroʊ.iː||9||0.087||23,214,986||−722.62||139.849°||0.2582||2000||Gladman et al.||Pasiphaë| |47||XXXII||Eurydome||jʊˈrɪdəmiː||3||0.004 5||23,230,858||−723.36||149.324°||0.3769||2002||Sheppard et al.||Pasiphaë?| |48||XXXVIII||Pasithee||pəˈsɪθɨ.iː||2||0.001 5||23,307,318||−726.93||165.759°||0.3288||2002||Sheppard et al.||Carme| |49||XLIX||Kore||ˈkoʊriː||2||0.001 5||23,345,093||−776.02||137.371°||0.1951||2003||Sheppard et al.||Pasiphaë| |50||XLVIII||Cyllene||sɨˈliːniː||2||0.001 5||23,396,269||−731.10||140.148°||0.4115||2003||Sheppard et al.||Pasiphaë| |51||XLVII||Eukelade||juːˈkɛlədiː||4||0.009 0||23,483,694||−735.20||163.996°||0.2828||2003||Sheppard et al.||Carme| |52||—||S/2003 J 4||2||0.001 5||23,570,790||−739.29||147.175°||0.3003||2003||Sheppard et al.||Pasiphaë| |53||VIII||Pasiphaë||pəˈsɪfeɪ.iː||60||30||23,609,042||−741.09||141.803°||0.3743||1908||Gladman et al.||Pasiphaë| |54||XXXIX||Hegemone||hɨˈdʒɛməniː||3||0.004 5||23,702,511||−745.50||152.506°||0.4077||2003||Sheppard et al.||Pasiphaë| |55||XLIII||Arche||ˈɑrkiː||3||0.004 5||23,717,051||−746.19||164.587°||0.1492||2002||Sheppard et al.||Carme| |56||XXVI||Isonoe||aɪˈsɒnoʊ.iː||4||0.007 5||23,800,647||−750.13||165.127°||0.1775||2001||Sheppard et al.||Carme| |57||—||S/2003 J 9||1||0.000 15||23,857,808||−752.84||164.980°||0.2761||2003||Sheppard et al.||Carme| |58||—||S/2003 J 5||4||0.009 0||23,973,926||−758.34||165.549°||0.3070||2003||Sheppard et al.||Carme| |60||XXXVI||Sponde||ˈspɒndiː||2||0.001 5||24,252,627||−771.60||154.372°||0.4431||2002||Sheppard et al.||Pasiphaë| |61||XXVIII||Autonoe||ɔːˈtɒnoʊ.iː||4||0.009 0||24,264,445||−772.17||151.058°||0.3690||2002||Sheppard et al.||Pasiphaë| |62||XIX||Megaclite||ˌmɛgəˈklaɪtiː||5||0.021||24,687,239||−792.44||150.398°||0.3077||2001||Sheppard et al.||Pasiphaë| |63||—||S/2003 J 2||2||0.001 5||30,290,846||−1 077.02||153.521°||0.1882||2003||Sheppard et al.||?| - Galilean moons - Jupiter's moons in fiction - Rings of Jupiter - Natural satellites of Earth · Mars · Saturn · Uranus · Neptune - ↑ 
Jupiter mass of 1.898 × 10^27 kg / mass of Galilean moons 3.93 × 10^23 kg = 4,828
- ↑ Order refers to the position among other moons with respect to their average distance from Jupiter.
- ↑ Label refers to the Roman numeral attributed to each moon in order of their discovery.
- ↑ Diameters with multiple entries such as "60×40×34" reflect that the body is not a perfect spheroid and that each of its dimensions has been measured well enough.
- ↑ Periods with negative values are retrograde.
- ↑ "?" refers to group assignments that are not considered sure yet.
- ↑ "Solar System Bodies". JPL/NASA. Retrieved on 2008-09-09.
- ↑ 2.0 2.1 2.2 2.3 2.4 Canup, Robert M.; Ward, William R. (2009). "Origin of Europa and the Galilean Satellites". Europa, University of Arizona Press (in press), http://adsabs.harvard.edu/abs/2008arXiv0812.4995C.
- ↑ Alibert, Y.; Mousis, O. and Benz, W. (2005). "Modeling the Jovian subnebula I. Thermodynamic conditions and migration of proto-satellites". Astronomy & Astrophysics 439: 1205–13. doi:10.1051/0004-6361:20052841, http://adsabs.harvard.edu/abs/2005A%26A...439.1205A.
- ↑ 4.0 4.1 Chown, Marcus (2009-03-07). "Cannibalistic Jupiter ate its early moons". New Scientist. Retrieved on 2009-03-18.
- ↑ Jewitt, David; Haghighipour, Nader (2007). "Irregular Satellites of the Planets: Products of Capture in the Early Solar System" (pdf). Annual Review of Astronomy and Astrophysics 45: 261–95. doi:10.1146/annurev.astro.44.051905.092459, http://www.ifa.hawaii.edu/~jewitt/papers/2007/JH07.pdf.
- ↑ Xi, Zezong Z. (1981). "The Discovery of Jupiter's Satellite Made by Gan De 2000 years Before Galileo". Acta Astrophysica Sinica 1 (2): 87.
- ↑ Galilei, Galileo (1989). Translated and prefaced by Albert Van Helden. ed.. Sidereus Nuncius. Chicago & London: University of Chicago Press. pp. 14–16. ISBN 0226279030.
- ↑ Van Helden, Albert (March 1974). "The Telescope in the Seventeenth Century". Isis (The University of Chicago Press on behalf of The History of Science Society) 65 (1): 38–58. doi:10.1086/351216.
- ↑ Barnard, E. E. (1892). "Discovery and Observation of a Fifth Satellite to Jupiter". Astronomical Journal 12: 81–85. doi:10.1086/101715, http://adsabs.harvard.edu//full/seri/AJ.../0012//0000081.000.html.
- ↑ "Discovery of a Sixth Satellite of Jupiter". Astronomical Journal 24 (18): 154B;. 1905-01-9. doi:10.1086/103654, http://adsabs.harvard.edu//full/seri/AJ.../0024//0000154I002.html.
- ↑ Perrine, C. D. (1905). "The Seventh Satellite of Jupiter". Publications of the Astronomical Society of the Pacific 17 (101): 62–63, http://adsabs.harvard.edu//full/seri/PASP./0017//0000062.000.html.
- ↑ Melotte, P. J. (1908). "Note on the Newly Discovered Eighth Satellite of Jupiter, Photographed at the Royal Observatory, Greenwich". Monthly Notices of the Royal Astronomical Society 68 (6): 456–457, http://adsabs.harvard.edu/cgi-bin/nph-data_query?bibcode=1908MNRAS..68..456.&db_key=AST&link_type=ABSTRACT&high=40daf3f6f927275.
- ↑ Nicholson, S. B. (1914). "Discovery of the Ninth Satellite of Jupiter". Publications of the Astronomical Society of the Pacific 26: pp. 197–198. doi:10.1086/122336, http://adsabs.harvard.edu//full/seri/PASP./0026//0000197.000.html.
- ↑ Nicholson, S.B. (1938). "Two New Satellites of Jupiter". Publications of the Astronomical Society of the Pacific 50: 292–293. doi:10.1086/124963, http://adsabs.harvard.edu//full/seri/PASP./0050//0000292.000.html.
- ↑ Nicholson, S. B. (1951). "An unidentified object near Jupiter, probably a new satellite".
Publications of the Astronomical Society of the Pacific 63 (375): 297–299. doi:10.1086/126402, http://adsabs.harvard.edu//full/seri/PASP./0063//0000297.000.html. - ↑ Kowal, C. T.; Aksnes, K.; Marsden, B. G.; and Roemer, E. (1974). "Thirteenth satellite of Jupiter". Astronomical Journal 80: pp. 460–464. doi:10.1086/111766, http://adsabs.harvard.edu//full/seri/AJ.../0080//0000460.000.html. - ↑ Marsden, Brian G. (3 October 1975). "Probable New Satellite of Jupiter" (discovery telegram sent to the IAU). International Astronomical Union Circulars (Cambridge, US: Smithsonian Astrophysical Observatory) 2845, http://cfa-www.harvard.edu/iauc/02800/02845.html. Retrieved on 3 September 2008. - ↑ Synnott, S.P. (1980). "1979J2: The Discovery of a Previously Unknown Jovian Satellite". Science 210 (4471): 786–788. doi:10.1126/science.210.4471.786. PMID 17739548. - ↑ 19.0 19.1 19.2 19.3 "Gazetteer of Planetary Nomenclature". Working Group for Planetary System Nomenclature (WGPSN). U.S. Geological Survey (2008-11-07). Retrieved on 2008-08-02. - ↑ 20.0 20.1 20.2 20.3 20.4 Sheppard, Scott S.; Jewitt, David C. (May 5, 2003). "An abundant population of small irregular satellites around Jupiter". Nature 423: 261–263. doi:10.1038/nature01584. - ↑ 21.0 21.1 21.2 21.3 21.4 Sheppard, Scott S.. "Jupiter's Known Satellites". Departament of Terrestrial Magnetism at Carniege Institution for science. Retrieved on 2008-08-28. - ↑ 22.0 22.1 Marazzini, C. (2005). "The names of the satellites of Jupiter: from Galileo to Simon Marius" (in Italian). Lettere Italiane 57 (3): 391–407. ISSN 0024-1334. - ↑ Nicholson, Seth Barnes (April 1939). "The Satellites of Jupiter". Publications of the Astronomical Society of the Pacific 51 (300): 85–94. doi:10.1086/125010, http://adsabs.harvard.edu//full/seri/PASP./0051//0000093.000.html. - ↑ Payne-Gaposchkin, Cecilia; Haramundanis, Katherine (1970). Introduction to Astronomy. Englewood Cliffs, N.J.: Prentice-Hall. ISBN 0-134-78107-4. - ↑ 25.0 25.1 Marsden, Brian G. (03 October 1975). "Satellites of Jupiter". International Astronomical Union Circulars 2846, http://cfa-www.harvard.edu/iauc/02800/02846.html#Item6. Retrieved on 28 August 2008. - ↑ 26.0 26.1 Template:Cite report - ↑ Anderson, J.D.; Johnson, T.V.; Shubert, G.; et al. (2005). "Amalthea’s Density Is Less Than That of Water". Science 308: 1291–1293. doi:10.1126/science.1110422. PMID 15919987, http://adsabs.harvard.edu/abs/2005Sci...308.1291A. - ↑ Burns, J.A.; Simonelli, D. P.; Showalter, M.R. et al. (2004). "Jupiter’s Ring-Moon System". in Bagenal, F.; Dowling, T.E.; McKinnon, W.B.. Jupiter: The Planet, Satellites and Magnetosphere, Cambridge University Press. - ↑ Burns, J. A.; Showalter, M. R.; Hamilton, D. P.; et al. (1999). "The Formation of Jupiter's Faint Rings". Science 284: 1146–1150. doi:10.1126/science.284.5417.1146. - ↑ Canup, Robin M.; Ward, William R. (2002). "Formation of the Galilean Satellites: Conditions of Accretion" (pdf). The Astronomical Journal 124: 3404–3423. doi:10.1086/344684, http://www.boulder.swri.edu/~robin/cw02final.pdf. - ↑ 31.0 31.1 31.2 31.3 Grav, Tommy; Holman, Matthew J.; Gladman, Brett J.; Aksnes, Kaare (2003). "Photometric survey of the irregular satellites". Icarus 166 (1): 33–45. doi:arXiv:astro-ph/0301016v1. - ↑ Sheppard, Scott S.; Jewitt, David C.; Porco, Carolyn (2004). "Jupiter's outer satellites and Trojans" (pdf). in Fran Bagenal, Timothy E. Dowling, William B. McKinnon. Jupiter. The planet, satellites and magnetosphere. 1. Cambridge, UK: Cambridge University Press. pp. 263–280. 
ISBN 0-521-81808-7, http://www.ifa.hawaii.edu/~jewitt/papers/JUPITER/JSP.2003.pdf.
- ↑ Nesvorný, David; Beaugé, Cristian; Dones, Luke (2004). "Collisional Origin of Families of Irregular Satellites" (PDF). The Astronomical Journal 127: 1768–1783. doi:10.1086/382099, http://www.boulder.swri.edu/~davidn/papers/irrbig.pdf.
- ↑ 34.0 34.1 34.2 "Natural Satellites Ephemeris Service". "Note: some semi-major axis were computed using the µ value, while the eccentricities were taken using the inclination to the local Laplace plane"
- ↑ 35.0 35.1 35.2 35.3 35.4 35.5 35.6 35.7 Template:Cite report
- Jupiter's Moons by NASA's Solar System Exploration
- "43 more moons orbiting Jupiter" article appeared in 2003 in the San Francisco Chronicle
- Articles on the Jupiter System in Planetary Science Research Discoveries
- An animation of the Jovian system of moons
http://gravity.wikia.com/wiki/Moons_of_Jupiter
13
22
Stuve Diagrams are one type of thermodynamic diagram used to represent or plot atmospheric data as recorded by weather balloons in their ascent through the atmosphere. The data the balloons record are called soundings. To see how to make your own Stuve diagram try following the sounding exercises. The example below shows atmospheric data from the Miramar Naval Station near San Diego.

At first glance, this diagram probably appears rather cluttered and somewhat difficult to understand. But let's break it down layer by layer, and look at it more closely.

Figure 2 shows our diagram at its most basic. The left (Y) axis represents air pressure in millibars and elevation in meters, while the X-axis shows temperature in Celsius and Kelvin. Since the horizontal lines which originate from the Y-axis relate to air pressure, they are also called isobars. The vertical lines which originate from the X-axis are called isotherms, as they are lines of constant temperature. Along the right side of the chart are barbs showing wind speed and direction. The red line shows how the air temperature varies with altitude. Note that on this example chart the temperature at first decreases with altitude, then begins to increase (an inversion layer) around 550 m, then decreases again from about 1300 m on up.

Figure 3 shows our diagram with some added information. The dashed green lines represent the saturation mixing ratio, which is the amount of water vapor which would need to be present in a parcel of air in order for the air to be "saturated" or, in other words, to produce a cloud, or fog, or rain. If one happens to know a particular air parcel's pressure and temperature, then the saturation mixing ratio can be read directly from the chart. For example, if the pressure of an air parcel is 950 mb, and its temperature is 24˚C, then its saturation mixing ratio would be 20 g H₂O/kg of dry air. Table of Saturation Mixing Ratios.

The dashed black line shows how the temperature of the dew point changes with altitude. If one knows the dewpoint and pressure of an air parcel, then one can tell how much water vapor the air actually contains. This is the actual mixing ratio. So, using Fig. 3 above, where the pressure (P) is 950 mb and the dewpoint (Td) is 9˚C, the actual mixing ratio would be 8 g/kg. If the actual mixing ratio is the same as the saturation mixing ratio, then the air is said to be "saturated" and a cloud (or fog) will normally form. In this example, the air at 950 mb is "unsaturated" because its actual mixing ratio is less than the saturation mixing ratio.

In Figure 4, we've added additional elements to the chart. The yellow line represents the temperature that an air parcel would have if it were lifted from near the ground (around 950 mb or 500 m). The solid diagonal lines are called dry adiabats and show the rate at which dry (or "unsaturated") air will cool down as it rises up through the atmosphere. This rate is approximately 10˚C/km. If an air parcel is initially unsaturated (i.e. if the actual mixing ratio is less than the saturated mixing ratio) it will cool off at the dry adiabatic lapse rate as it rises (note that the yellow line is parallel to the solid diagonal lines). In the above example let's assume the parcel starts off at an altitude of 500 m (the 950 mb pressure level) with a temperature of 22˚C. If it then gets lifted up it will cool off at 10˚C for every km it rises. This is shown by the yellow line. At an altitude of 2000 m or pressure level of 800 mb it will have cooled to a temperature of about 7˚C.
At this point it has cooled down enough that it is now saturated. Let's see why. Remember that we can find out the saturation mixing ratio for any temperature and pressure from the dashed green lines on the graph. Well, at 800 mb and 7˚C (where the yellow line ends) the dashed green line which would go through this point would have a value of 8 g/kg. This is the saturation mixing ratio for this point. But this was also our actual mixing ratio for this air parcel. So now the air has cooled down enough that the actual mixing ratio is the same as the saturation mixing ratio. The altitude (or pressure level) at which this happens is the lifting condensation level (LCL). This is the point at which moisture contained in a rising parcel of air can begin to condense. (Note, this is shown in the list of data at the right-hand side of the figure. Look under "PARCEL", then find "LCL:800". This indicates that the lifted air parcel would reach its lifting condensation level at 800 mb.)

We've now added the final elements to the chart - solid green lines, called saturated adiabats, that show the rate at which saturated air cools as it rises. The lines are somewhat curved as the saturated adiabatic lapse rate ranges from about 2˚C/km to nearly 10˚C/km (the dry adiabatic lapse rate), depending on the amount of moisture present in the particular air parcel. As unsaturated air parcels rise, they tend to follow the dry adiabatic lapse rate. But, once they saturate, they then tend to follow the saturated adiabatic lapse rate. The yellow line reflects this. Since our air parcel has now become saturated at around 800 mb (Fig. 4), the slope of the line has now changed to follow the curve of the saturated adiabat (Fig. 5).

We can use the position of the yellow line relative to the position of the red line to see what weather conditions to expect in various locations. Wherever the yellow line lies to the left of the red line, the air parcel is cooler than the surrounding air and will only rise if forced (through vertical winds, for example); the atmosphere is said to be stable. These kinds of conditions lead to the trapping of air (and pollution). Wherever the yellow line lies to the right of the red line, the air parcel will rise on its own, without any forcing, because it is warmer than the surrounding air; the atmosphere is said to be unstable. This kind of condition is more likely to lead to thunderstorms. For example, if an air parcel reaches saturation and is lifted to an altitude at which it's warmer than the surrounding environment, then the level of free convection (LFC) is reached. This type of situation is an ideal environment for thunderstorm generation.

The image below (Fig. 6) shows Tropical Storm Alex near the coast, and Figure 7 shows the accompanying Stuve diagram.

Written by Anna Huber.
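To make the parcel arithmetic in the worked example concrete, here is a small illustrative sketch. The function names are invented for this example, and the lifting condensation level estimate uses the rough 125 m per degree of temperature-dewpoint spread rule of thumb rather than a full thermodynamic calculation.

DRY_LAPSE_RATE = 10.0   # degrees C per km, approximate dry adiabatic lapse rate

def parcel_temperature(start_temp_c, start_height_m, height_m):
    """Temperature of an unsaturated parcel lifted dry-adiabatically."""
    return start_temp_c - DRY_LAPSE_RATE * (height_m - start_height_m) / 1000.0

def lcl_height_estimate(temp_c, dewpoint_c, start_height_m=0.0):
    """Rough LCL height using ~125 m per degree C of T - Td spread (an approximation)."""
    return start_height_m + 125.0 * (temp_c - dewpoint_c)

# Numbers from the worked example: parcel starts at 500 m and 22 C, dewpoint 9 C.
print(parcel_temperature(22.0, 500.0, 2000.0))   # about 7 C at 2000 m, as on the diagram
print(lcl_height_estimate(22.0, 9.0, 500.0))     # roughly 2100 m, near the ~2000 m (800 mb) LCL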
http://www.csun.edu/~hmc60533/CSUN_103/weather_exercises/soundings/smog_and_inversions/Understanding%20Stuve_v3.htm
13
54
Section 17: Geography

It is very common to have data in which the coordinates are "geographic" or "latitude/longitude". Unlike coordinates in Mercator, UTM, or Stateplane, geographic coordinates are not cartesian coordinates. Geographic coordinates do not represent a linear distance from an origin as plotted on a plane. Rather, these spherical coordinates describe angular coordinates on a globe. In spherical coordinates a point is specified by the angle of rotation from a reference meridian (longitude), and the angle from the equator (latitude).

You can treat geographic coordinates as approximate cartesian coordinates and continue to do spatial calculations. However, measurements of distance, length and area will be nonsensical. Since spherical coordinates measure angular distance, the units are in "degrees." Further, the approximate results from indexes and true/false tests like intersects and contains can become terribly wrong, and the errors get larger as problem areas like the poles or the international dateline are approached.

For example, here are the coordinates of Los Angeles and Paris.
- Los Angeles: POINT(-118.4079 33.9434)
- Paris: POINT(2.3490 48.8533)

The following calculates the distance between Los Angeles and Paris using the standard PostGIS cartesian ST_Distance(geometry, geometry). Note that the SRID of 4326 declares a geographic spatial reference system.

SELECT ST_Distance(
  ST_GeometryFromText('POINT(-118.4079 33.9434)', 4326), -- Los Angeles (LAX)
  ST_GeometryFromText('POINT(2.5559 49.0083)', 4326)     -- Paris (CDG)
);

Aha! 121! But, what does that mean? The units for spatial reference 4326 are degrees. So our answer is 121 degrees. But (again), what does that mean? On a sphere, the size of one "degree square" is quite variable, becoming smaller as you move away from the equator. Think of the meridians (vertical lines) on the globe getting closer to each other as you go towards the poles. So, a distance of 121 degrees doesn't mean anything. It is a nonsense number.

In order to calculate a meaningful distance, we must treat geographic coordinates not as approximate cartesian coordinates but rather as true spherical coordinates. We must measure the distances between points as true paths over a sphere – a portion of a great circle. Starting with version 1.5, PostGIS provides this functionality through the geography type.

Different spatial databases have different approaches for "handling geographics":
- Oracle attempts to paper over the differences by transparently doing geographic calculations when the SRID is geographic.
- SQL Server uses two spatial types, "STGeometry" for cartesian data and "STGeography" for geographics.
- Informix Spatial is a pure cartesian extension to Informix, while Informix Geodetic is a pure geographic extension.
- Similar to SQL Server, PostGIS uses two types, "geometry" and "geography".

Using the geography type instead of the geometry type, let's try again to measure the distance between Los Angeles and Paris. Instead of ST_GeometryFromText(text), we will use ST_GeographyFromText(text).

SELECT ST_Distance(
  ST_GeographyFromText('POINT(-118.4079 33.9434)'), -- Los Angeles (LAX)
  ST_GeographyFromText('POINT(2.5559 49.0083)')     -- Paris (CDG)
);

A big number! All return values from geography calculations are in meters, so our answer is 9124 km.

Older versions of PostGIS supported very basic calculations over the sphere using the ST_Distance_Spheroid(point, point, measurement) function. However, ST_Distance_Spheroid is substantially limited.
The function only works on points and provides no support for indexing across the poles or international dateline. The need to support non-point geometries becomes very clear when posing a question like "How close will a flight from Los Angeles to Paris come to Iceland?" Working with geographic coordinates on a cartesian plane (the purple line) yields a very wrong answer indeed! Using great circle routes (the red lines) gives the right answer. If we convert our LAX-CDG flight into a line string and calculate the distance to a point in Iceland using geography we'll get the right answer (recall, in meters).

SELECT ST_Distance(
  ST_GeographyFromText('LINESTRING(-118.4079 33.9434, 2.5559 49.0083)'), -- LAX-CDG
  ST_GeographyFromText('POINT(-21.8628 64.1286)')                        -- Iceland
);

So the closest approach to Iceland on the LAX-CDG route is a relatively small 532 km.

The cartesian approach to handling geographic coordinates breaks down entirely for features that cross the international dateline. The shortest great-circle route from Los Angeles to Tokyo crosses the Pacific Ocean. The shortest cartesian route crosses the Atlantic and Indian Oceans.

SELECT ST_Distance(
         ST_GeometryFromText('Point(-118.4079 33.9434)'),   -- LAX
         ST_GeometryFromText('Point(139.733 35.567)')        -- NRT (Tokyo/Narita)
       ) AS geometry_distance,
       ST_Distance(
         ST_GeographyFromText('Point(-118.4079 33.9434)'),  -- LAX
         ST_GeographyFromText('Point(139.733 35.567)')       -- NRT (Tokyo/Narita)
       ) AS geography_distance;

 geometry_distance | geography_distance
-------------------+--------------------
  258.146005837336 |   8833954.76996256

In order to load geometry data into a geography table, the geometry first needs to be projected into EPSG:4326 (longitude/latitude), then it needs to be changed into geography. The ST_Transform(geometry,srid) function converts coordinates to geographics and the Geography(geometry) function "casts" them from geometry to geography.

CREATE TABLE nyc_subway_stations_geog AS
SELECT
  Geography(ST_Transform(geom,4326)) AS geog,
  name,
  routes
FROM nyc_subway_stations;

Building a spatial index on a geography table is exactly the same as for geometry:

CREATE INDEX nyc_subway_stations_geog_gix
  ON nyc_subway_stations_geog USING GIST (geog);

The difference is under the covers: the geography index will correctly handle queries that cover the poles or the international date-line, while the geometry one will not.

There are only a small number of native functions for the geography type:
- ST_AsText(geography) returns text
- ST_GeographyFromText(text) returns geography
- ST_AsBinary(geography) returns bytea
- ST_GeogFromWKB(bytea) returns geography
- ST_AsSVG(geography) returns text
- ST_AsGML(geography) returns text
- ST_AsKML(geography) returns text
- ST_AsGeoJson(geography) returns text
- ST_Distance(geography, geography) returns double
- ST_DWithin(geography, geography, float8) returns boolean
- ST_Area(geography) returns double
- ST_Length(geography) returns double
- ST_Covers(geography, geography) returns boolean
- ST_CoveredBy(geography, geography) returns boolean
- ST_Intersects(geography, geography) returns boolean
- ST_Buffer(geography, float8) returns geography
- ST_Intersection(geography, geography) returns geography
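As a quick illustration of ST_DWithin from the list above, a radius query against the geography table created earlier might look like the following. The point coordinates (roughly Times Square) and the 500-meter radius are illustrative values, not part of the workshop data.

SELECT name
FROM nyc_subway_stations_geog
WHERE ST_DWithin(
  geog,
  ST_GeographyFromText('POINT(-73.9857 40.7580)'),  -- an illustrative point near Times Square
  500                                               -- for geography, the distance is in meters
);

Because the operands are geography, the distance is interpreted in meters rather than degrees, and the GIST index built above can be used to speed up the search.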
Creating a Geography Table

The SQL for creating a new table with a geography column is much like that for creating a geometry table. However, geography includes the ability to specify the object type directly at the time of table creation. For example:

CREATE TABLE airports (
  code VARCHAR(3),
  geog GEOGRAPHY(Point)
);

INSERT INTO airports VALUES ('LAX', 'POINT(-118.4079 33.9434)');
INSERT INTO airports VALUES ('CDG', 'POINT(2.5559 49.0083)');
INSERT INTO airports VALUES ('REK', 'POINT(-21.8628 64.1286)');

In the table definition, the GEOGRAPHY(Point) specifies our airport data type as points. The new geography fields don't get registered in the geometry_columns view. Instead, they are registered in a view called geography_columns.

SELECT * FROM geography_columns;

 f_table_name                  | f_geography_column | srid | type
-------------------------------+--------------------+------+----------
 nyc_subway_stations_geography | geog               |    0 | Geometry
 airports                      | geog               | 4326 | Point

Casting to Geometry

While the basic functions for geography types can handle many use cases, there are times when you might need access to other functions only supported by the geometry type. Fortunately, you can convert objects back and forth from geography to geometry. The PostgreSQL syntax convention for casting is to append ::typename to the end of the value you wish to cast. So, 2::text will convert a numeric two to a text string '2'. And 'POINT(0 0)'::geometry will convert the text representation of point into a geometry point. The ST_X(point) function only supports the geometry type. How can we read the X coordinate from our geographies?

SELECT code, ST_X(geog::geometry) AS longitude FROM airports;

 code | longitude
------+-----------
 LAX  | -118.4079
 CDG  |    2.5559
 REK  |  -21.8628

By appending ::geometry to our geography value, we convert the object to a geometry with an SRID of 4326. From there we can use as many geometry functions as strike our fancy. But, remember – now that our object is a geometry, the coordinates will be interpreted as cartesian coordinates, not spherical ones.

Why (Not) Use Geography

Geographics are universally accepted coordinates – everyone understands what latitude/longitude mean, but very few people understand what UTM coordinates mean. Why not use geography all the time?

- First, as noted earlier, there are far fewer functions available (right now) that directly support the geography type. You may spend a lot of time working around geography type limitations.
- Second, the calculations on a sphere are computationally far more expensive than cartesian calculations. For example, the cartesian formula for distance (Pythagoras) involves one call to sqrt(). The spherical formula for distance (Haversine) involves two sqrt() calls, an arctan() call, four sin() calls and two cos() calls. Trigonometric functions are very costly, and spherical calculations involve a lot of them.

If your data is geographically compact (contained within a state, county or city), use the geometry type with a cartesian projection that makes sense with your data. See the http://spatialreference.org site and type in the name of your region for a selection of possible reference systems. If, on the other hand, you need to measure distance with a dataset that is geographically dispersed (covering much of the world), use the geography type. The application complexity you save by working in geography will offset any performance issues. And, casting to geometry can offset most functionality limitations.

ST_Distance(geometry, geometry): For the geometry type, returns the 2-dimensional cartesian minimum distance (based on spatial ref) between two geometries in projected units. For the geography type, defaults to returning the spheroidal minimum distance between two geographies in meters.
ST_GeographyFromText(text): Returns a geography value from a Well-Known Text (WKT) or extended WKT representation.

ST_Transform(geometry, srid): Returns a new geometry with its coordinates transformed to the SRID referenced by the integer parameter.

ST_X(point): Returns the X coordinate of the point, or NULL if not available. Input must be a point.

The buffer and intersection functions are actually wrappers on top of a cast to geometry, and are not carried out natively in spherical coordinates. As a result, they may fail to return correct results for objects with very large extents that cannot be cleanly converted to a planar representation. For example, the ST_Buffer(geography,distance) function transforms the geography object into a "best" projection, buffers it, and then transforms it back to geographics. If there is no "best" projection (the object is too large), the operation can fail or return a malformed buffer.

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License. Feel free to use this material, but we ask that you please retain the OpenGeo branding, logos and style.
http://workshops.opengeo.org/postgis-intro/geography.html
13
18
A comparison sort is a type of sorting algorithm that only reads the list elements through a single abstract comparison operation (often a "less than or equal to" operator or a three-way comparison) that determines which of two elements should occur first in the final sorted list. The only requirement is that the operator obey two of the properties of a total order:

- if a ≤ b and b ≤ c then a ≤ c (transitivity)
- for all a and b, either a ≤ b or b ≤ a (totalness or trichotomy).

It is possible that both a ≤ b and b ≤ a; in this case either may come first in the sorted list. In a stable sort, the input order determines the sorted order in this case.

A metaphor for thinking about comparison sorts is that someone has a set of unlabelled weights and a balance scale. Their goal is to line up the weights in order by their weight without any information except that obtained by placing two weights on the scale and seeing which one is heavier (or if they weigh the same).

Some of the most well-known comparison sorts include:
- Quick sort
- Heap sort
- Merge sort
- Intro sort
- Insertion sort
- Selection sort
- Bubble sort
- Odd-even sort
- Cocktail sort
- Cycle sort
- Merge insertion (Ford-Johnson) sort

There are many integer sorting algorithms that are not comparison sorts; they include:
- Radix sort (examines individual bits of keys)
- Counting sort (indexes using key values)
- Bucket sort (examines bits of keys)

Performance limits and advantages of different sorting techniques

There are fundamental limits on the performance of comparison sorts. A comparison sort must perform Ω(n log n) comparison operations in the worst case. This is a consequence of the limited information available through comparisons alone — or, to put it differently, of the vague algebraic structure of totally ordered sets. In this sense, mergesort, heapsort, and introsort are asymptotically optimal in terms of the number of comparisons they must perform, although this metric neglects other operations. The three non-comparison sorts above achieve O(n) performance by using operations other than comparisons, allowing them to sidestep this lower bound (assuming elements are constant-sized).

Nevertheless, comparison sorts offer the notable advantage that control over the comparison function allows sorting of many different datatypes and fine control over how the list is sorted. For example, reversing the result of the comparison function allows the list to be sorted in reverse; and one can sort a list of tuples in lexicographic order by just creating a comparison function that compares each part in sequence:

function tupleCompare((lefta, leftb, leftc), (righta, rightb, rightc))
    if lefta ≠ righta
        return compare(lefta, righta)
    else if leftb ≠ rightb
        return compare(leftb, rightb)
    else
        return compare(leftc, rightc)

Balanced ternary notation allows comparisons to be made in one step, whose result will be one of "less than", "greater than" or "equal to".

Comparison sorts generally adapt more easily to complex orders such as the order of floating-point numbers. Additionally, once a comparison function is written, any comparison sort can be used without modification; non-comparison sorts typically require specialized versions for each datatype. This flexibility, together with the efficiency of the above comparison sorting algorithms on modern computers, has led to widespread preference for comparison sorts in most practical work.
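As a concrete sketch of this flexibility (an illustration added here, not part of the original article), Python's functools.cmp_to_key lets an arbitrary three-way comparison function drive the built-in sort, and reversing the comparison reverses the order:

from functools import cmp_to_key

def compare(a, b):
    """Three-way comparison: negative, zero, or positive."""
    return (a > b) - (a < b)

def tuple_compare(left, right):
    """Lexicographic comparison of equal-length tuples, as in the pseudocode above."""
    for l, r in zip(left, right):
        if l != r:
            return compare(l, r)
    return 0

data = [(2, "b", 1), (1, "z", 3), (1, "a", 2)]
print(sorted(data, key=cmp_to_key(tuple_compare)))
# Reversing the comparison result sorts in descending order.
print(sorted(data, key=cmp_to_key(lambda a, b: tuple_compare(b, a))))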
Number of comparisons required to sort a list

The number of comparisons that a comparison sort algorithm requires increases at least in proportion to n log(n), where n is the number of elements to sort. This bound is asymptotically tight.

Given a list of distinct numbers (we can assume this because this is a worst-case analysis), there are n! permutations, exactly one of which is the list in sorted order. The sort algorithm must gain enough information from the comparisons to identify the correct permutation. If the algorithm always completes after at most f(n) steps, it cannot distinguish more than 2^f(n) cases, because the keys are distinct and each comparison has only two possible outcomes. Therefore

2^f(n) ≥ n!, or equivalently f(n) ≥ log2(n!).

From Stirling's approximation we know that log2(n!) is Ω(n log n). This provides the lower-bound part of the claim. An identical upper bound follows from the existence of algorithms that attain this bound in the worst case.

The above argument provides an absolute, rather than only asymptotic, lower bound on the number of comparisons, namely ⌈log2(n!)⌉ comparisons. This lower bound is fairly good (it can be approached within a linear tolerance by a simple merge sort), but it is known to be inexact. For example, ⌈log2(13!)⌉ = 33, but the minimal number of comparisons to sort 13 elements has been proved to be 34. Determining the exact number of comparisons needed to sort a given number of entries is a computationally hard problem even for small n, and no simple formula for the solution is known. For some of the few concrete values that have been computed, see A036604.

Lower bound for the average number of comparisons

A similar bound applies to the average number of comparisons. Assuming that

- all keys are distinct, i.e. every comparison will give either a > b or a < b, and
- the input is a random permutation, chosen uniformly from the set of all possible permutations of n elements,

it is impossible to determine which order the input is in with fewer than log2(n!) comparisons on average.

This can be most easily seen using concepts from information theory. The Shannon entropy of such a random permutation is log2(n!) bits. Since a comparison can give only two results, the maximum amount of information it provides is 1 bit. Therefore after k comparisons the remaining entropy of the permutation, given the results of those comparisons, is at least log2(n!) − k bits on average. To perform the sort, complete information is needed, so the remaining entropy must be 0. It follows that k must be at least log2(n!) on average.

Note that this differs from the worst-case argument given above in that it does not allow rounding up to the nearest integer. For example, for n = 3, the lower bound for the worst case is 3, the lower bound for the average case as shown above is approximately 2.58, while the highest lower bound for the average case is 8/3, approximately 2.67.

In the case that multiple items may have the same key, there is no obvious statistical interpretation for the term "average case", so an argument like the above cannot be applied without making specific assumptions about the distribution of keys.

References

- Marcin Peczarski: The Ford-Johnson algorithm is still unbeaten for less than 47 elements. Inf. Process. Lett. 101(3): 126-128 (2007). doi:10.1016/j.ipl.2006.09.001
- Donald Knuth. The Art of Computer Programming, Volume 3: Sorting and Searching, Second Edition. Addison-Wesley, 1997. ISBN 0-201-89685-0. Section 5.3.1: Minimum-Comparison Sorting, pp. 180-197.
- Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. ISBN 0-262-03293-7. Section 8.1: Lower bounds for sorting, pp. 165-168.
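As a quick numerical companion to the bounds discussed in this article, here is an illustrative Python sketch (generic textbook code, not drawn from the references above) that prints the information-theoretic lower bound ⌈log2(n!)⌉ for small n next to the number of comparisons an ordinary top-down merge sort happens to use on one random input of that size:

import math
import random

def merge_sort(items, counter):
    # Plain top-down merge sort that tallies element comparisons in counter[0].
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid], counter)
    right = merge_sort(items[mid:], counter)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        counter[0] += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

for n in range(2, 14):
    bound = math.ceil(math.log2(math.factorial(n)))
    counter = [0]
    merge_sort(random.sample(range(1000), n), counter)
    print("n =", n, " lower bound =", bound, " merge sort used =", counter[0])

Because each run uses a single random permutation rather than a worst case, the merge-sort count can occasionally fall below the worst-case bound; over all inputs of a given size, however, no comparison sort can do better than ⌈log2(n!)⌉ comparisons in the worst case.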
http://en.wikipedia.org/wiki/Comparison_sort
13
50
Non-Programmer's Tutorial for Python 3/Defining Functions

To start off this chapter I am going to give you an example of what you could do but shouldn't (so don't type it in):

a = 23
b = -23

if a < 0:
    a = -a

if b < 0:
    b = -b

if a == b:
    print("The absolute values of", a, "and", b, "are equal")
else:
    print("The absolute values of", a, "and", b, "are different")

with the output being:

The absolute values of 23 and 23 are equal

The program seems a little repetitive. Programmers hate to repeat things -- that's what computers are for, after all! (Note also that finding the absolute value changed the value of the variable, which is why it is printing out 23, and not -23, in the output.) Fortunately Python allows you to create functions to remove duplication. Here is the rewritten example:

a = 23
b = -23

def absolute_value(n):
    if n < 0:
        n = -n
    return n

if absolute_value(a) == absolute_value(b):
    print("The absolute values of", a, "and", b, "are equal")
else:
    print("The absolute values of", a, "and", b, "are different")

with the output being:

The absolute values of 23 and -23 are equal

The key feature of this program is the def statement. def (short for "define") starts a function definition. def is followed by the name of the function, absolute_value. Next comes a '(' followed by the parameter n (n is passed from the program into the function when the function is called). The statements after the ':' are executed when the function is used. The statements continue until either the indented statements end or a return is encountered. The return statement returns a value back to the place where the function was called. We already encountered a function in our very first program: the print function. Notice how this time the values of a and b are not changed.

Functions can be used to repeat tasks that don't return values. Here are some examples:

def hello():
    print("Hello")

def area(width, height):
    return width * height

def print_welcome(name):
    print("Welcome", name)

hello()
hello()

print_welcome("Fred")
w = 4
h = 5
print("width =", w, "height =", h, "area =", area(w, h))

with output being:

Hello
Hello
Welcome Fred
width = 4 height = 5 area = 20

That example shows some more stuff that you can do with functions. Notice that you can use no arguments or two or more. Notice also that when a function doesn't need to send back a value, a return is optional.

Variables in functions

When eliminating repeated code, you often have variables in the repeated code. In Python, these are dealt with in a special way. So far all the variables we have seen are global variables. Functions have a special type of variable called local variables. These variables only exist while the function is running. When a local variable has the same name as another variable (such as a global variable), the local variable hides the other. Sound confusing? Well, these next examples (which are a bit contrived) should help clear things up.

a = 4

def print_func():
    a = 17
    print("in print_func a = ", a)

print_func()
print("a = ", a)

When run, we will receive an output of:

in print_func a = 17
a = 4

Variable assignments inside a function do not override global variables; they exist only inside the function. Even though a was assigned a new value inside the function, this newly assigned value is only relevant inside print_func; when the function finishes running and a's value is printed again, we see the originally assigned value.

Here is another, more complex example.
a_var = 10
b_var = 15
e_var = 25

def a_func(a_var):
    print("in a_func a_var = ", a_var)
    b_var = 100 + a_var
    d_var = 2 * a_var
    print("in a_func b_var = ", b_var)
    print("in a_func d_var = ", d_var)
    print("in a_func e_var = ", e_var)
    return b_var + 10

c_var = a_func(b_var)

print("a_var = ", a_var)
print("b_var = ", b_var)
print("c_var = ", c_var)
print("d_var = ", d_var)

The output is:

in a_func a_var = 15
in a_func b_var = 115
in a_func d_var = 30
in a_func e_var = 25
a_var = 10
b_var = 15
c_var = 125
d_var =
Traceback (most recent call last):
  File "C:\def2.py", line 19, in <module>
    print("d_var = ", d_var)
NameError: name 'd_var' is not defined

In this example the variables a_var, b_var, and d_var are all local variables while they are inside the function a_func. After the statement return b_var + 10 is run, they all cease to exist. The variable a_var is automatically a local variable since it is a parameter name. The variables b_var and d_var are local variables since they appear on the left of an equals sign in the function, in the statements b_var = 100 + a_var and d_var = 2 * a_var.

Inside of the function, a_var starts out with no value of its own. When the function is called with c_var = a_func(b_var), 15 is assigned to a_var, since at that point in time b_var is 15, making the call to the function a_func(15). This ends up setting a_var to 15 when it is inside of a_func.

As you can see, once the function finishes running, the local variables a_var and b_var that had hidden the global variables of the same name are gone. Then the statement print("a_var = ", a_var) prints the value 10 rather than the value 15, since the local variable that hid the global variable is gone.

Another thing to notice is the NameError that happens at the end. This appears because the variable d_var no longer exists, since a_func has finished. All the local variables are deleted when the function exits. If you want to get something from a function, then you will have to use return.

One last thing to notice is that the value of e_var remains unchanged inside a_func, since it is not a parameter and it never appears on the left of an equals sign inside of the function a_func. When a global variable is accessed inside a function, it is the global variable from the outside.

Functions allow local variables that exist only inside the function and can hide other variables that are outside the function.

Examples

#! /usr/bin/python
# -*- coding: utf-8 -*-
# converts temperature to Fahrenheit or Celsius

def print_options():
    print("Options:")
    print(" 'p' print options")
    print(" 'c' convert from Celsius")
    print(" 'f' convert from Fahrenheit")
    print(" 'q' quit the program")

def celsius_to_fahrenheit(c_temp):
    return 9.0 / 5.0 * c_temp + 32

def fahrenheit_to_celsius(f_temp):
    return (f_temp - 32.0) * 5.0 / 9.0

choice = "p"
while choice != "q":
    if choice == "c":
        c_temp = float(input("Celsius temperature: "))
        print("Fahrenheit:", celsius_to_fahrenheit(c_temp))
        choice = input("option: ")
    elif choice == "f":
        f_temp = float(input("Fahrenheit temperature: "))
        print("Celsius:", fahrenheit_to_celsius(f_temp))
        choice = input("option: ")
    elif choice == "p":  # alternatively test choice != "q", so the options print after any unexpected input
        print_options()
        choice = input("option: ")

Sample run:

Options:
 'p' print options
 'c' convert from Celsius
 'f' convert from Fahrenheit
 'q' quit the program
option: c
Celsius temperature: 30
Fahrenheit: 86.0
option: f
Fahrenheit temperature: 60
Celsius: 15.5555555556
option: q
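Before moving on to the next example, here is one small supplementary sketch (not part of the original tutorial) of the earlier point about return: a function can hand a result back with return, or it can write to a global variable using the global statement. The names below are invented for this illustration.

total = 0

def double_with_global(n):
    # Only works because of the global statement; without it the assignment
    # would create a local variable that vanishes when the function returns.
    global total
    total = 2 * n

def double_with_return(n):
    return 2 * n    # preferred: no hidden side effects

double_with_global(21)
print(total)                   # prints 42
print(double_with_return(21))  # prints 42

Returning the value keeps the function self-contained, which is why the examples in this chapter simply return their results.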
#! /usr/bin/python
# -*- coding: utf-8 -*-
# calculates a given rectangle area

def hello():
    print('Hello!')

def area(width, height):
    return width * height

def print_welcome(name):
    print('Welcome,', name)

def positive_input(prompt):
    number = float(input(prompt))
    while number <= 0:
        print('Must be a positive number')
        number = float(input(prompt))
    return number

name = input('Your Name: ')
hello()
print_welcome(name)
print()
print('To find the area of a rectangle,')
print('enter the width and height below.')
print()
w = positive_input('Width: ')
h = positive_input('Height: ')
print('Width =', w, 'Height =', h, 'so Area =', area(w, h))

Sample run:

Your Name: Josh
Hello!
Welcome, Josh

To find the area of a rectangle,
enter the width and height below.

Width: -4
Must be a positive number
Width: 4
Height: 3
Width = 4 Height = 3 so Area = 12

Exercises

Rewrite the area2.py program from the Examples above to have a separate function for the area of a square, the area of a rectangle, and the area of a circle (3.14 * radius ** 2). This program should include a menu interface.

Solution:

def square(L):
    return L * L

def rectangle(width, height):
    return width * height

def circle(radius):
    return 3.14159 * radius ** 2

def options():
    print()
    print("Options:")
    print("s = calculate the area of a square.")
    print("c = calculate the area of a circle.")
    print("r = calculate the area of a rectangle.")
    print("q = quit")
    print()

print("This program will calculate the area of a square, circle or rectangle.")
choice = "x"
options()
while choice != "q":
    choice = input("Please enter your choice: ")
    if choice == "s":
        L = float(input("Length of square side: "))
        print("The area of this square is", square(L))
        options()
    elif choice == "c":
        radius = float(input("Radius of the circle: "))
        print("The area of the circle is", circle(radius))
        options()
    elif choice == "r":
        width = float(input("Width of the rectangle: "))
        height = float(input("Height of the rectangle: "))
        print("The area of the rectangle is", rectangle(width, height))
        options()
    elif choice == "q":
        print(" ", end="")
    else:
        print("Unrecognized option.")
        options()
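The tutorial's other programs are followed by sample output, so here is one possible session with the solution above; it is illustrative only (the values entered are made up), but the prompts and results follow from the code:

This program will calculate the area of a square, circle or rectangle.

Options:
s = calculate the area of a square.
c = calculate the area of a circle.
r = calculate the area of a rectangle.
q = quit

Please enter your choice: s
Length of square side: 5
The area of this square is 25.0

Options:
s = calculate the area of a square.
c = calculate the area of a circle.
r = calculate the area of a rectangle.
q = quit

Please enter your choice: q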
http://en.m.wikibooks.org/wiki/Non-Programmer's_Tutorial_for_Python_3/Defining_Functions
13
26
Live From Mars was a precursor to Mars Team Online.

Activity 1.3: Follow that Water -- Investigations with Stream Tables

Water is essential to life on Earth: its abundant presence on our world drives the weather and shapes the land by rain, runoff and erosion. Whenever we see what looks like evidence of liquid water elsewhere in the Universe, we become especially interested, since water is a requisite for life. In the late 19th Century astronomers peered at Mars through telescopes and saw lines stretching across its surface: Giovanni Schiaparelli, an Italian, called them "canali" meaning "channels" or "grooves", which was translated into English as "canals." Some interpreted these "canals" as evidence of intelligent life, and even an advanced Martian civilization capable of massive, planet-wide engineering projects. Now spacecraft have looked close-up at Mars, and we know there are no canals built by a Martian Corps of Engineers. But some of the channels do have shapes which look much like those we see on Earth. While it's tempting to think of them as dried-up river beds, most scientists think many of the channels resulted from sudden releases of underground water or sudden melting of underground ice, rather than from sustained rainfall and enduring rivers.

How do we know we're not fooling ourselves, or misinterpreting the data, as did some of those 19th century observers? Scientists use different methods to understand the conditions under which the channels may have been formed. One method involves the use of stream tables, to simulate different rates of flow, from gentle rivers flowing for a long time, to sudden, massive floods. In this Activity, students will have the chance to discover for themselves some of the characteristic shapes created by differing volumes of water, flowing at different rates ("volume over time"). With "educated eyes" they can then turn to study images of Mars and recognize the features and discuss the mechanisms which might have caused them.

Teams of students will build simple stream tables and other needed equipment. Students will vary the angle of the stream tables in order to simulate different flow rates and compare the results. Students will observe various features formed in a stream table by flowing water and compare these model features to photos of real features on Mars in order to make inferences about the possibility of water channeling on Mars.

Materials: for each team of students

Please note: if these materials are difficult to secure, consider using only one set for the entire class, assigning a different Planetary Geologist team per angle, and emphasizing the Image Processing and Data Analysis process for those who must watch. Although there will be less student hands-on time, it might be better to do the Activity in this way rather than foregoing it altogether, so important is the issue of water to Martian science and mission planning.

- Activity 1.3 Student Work Sheet
- 1 wallpaper tray (poke hole about size of a quarter in one end so water can drain into a bucket)
- two buckets of clean play sand
- a third empty (catch) bucket
- a one gallon plastic water jug
- 2 plastic funnels: one with a 1/4 in. opening and one with a 1/2 in. opening
- several blocks of wood cut from 2 x 4s, each about 6 in.
- a piece of string and a small weight
- several stones that are flat on top and bottom, about 1/2 to 1 inch in diameter and 1/2 to 1 inch high
- plastic lids from 1-liter soda bottles
- selected images of Martian surface features: 1, 2, 3
- selected images of Earth, featuring dry river beds 1

(Note: The Live From Mars videos will feature such images. More may be found in the slide set and the Explorer's Guide to Mars poster, included in the LFM Teacher's Kit.)

Show students pictures or video of rivers and floods on Earth (perhaps local occurrences in your region). Do they think such conditions could exist on Mars today? Ask if they think Mars could ever have had liquid water. Or consider the question of water on Mars through a discussion on the possibility of life on Mars today in contrast to the distant past. Discuss conditions that seem necessary for life to develop. Cite the August 1996 announcement of the possible discovery of ancient Martian life in a meteorite.

Explore / Explain

Please note: some details are provided on the Student Work Sheet and its diagram, which you should review along with this procedure.

1. Distribute materials to each student team. Explain that each team is going to work as Planetary Geologists to investigate what can happen to a surface when water flows across it, and that they will share their data to come up with some principles by which water shapes landforms in specific ways.

2. Demonstrate stream table set up and use of the protractor to align the stream table at a given angle. This table should initially be set at an angle of 5 degrees. Students will see that at angles of about 15 degrees and higher, the sand will wash out. Larger volumes of water over shorter time periods (e.g. flood conditions) carve deeper channels with steeper sides. Only at angles of around 5 degrees, simulating gentler processes (e.g. slower flow over longer times), does the water begin to create curves and meanders more typical of terrestrial rivers. Remind students that most stream beds have slopes that are typically 5 degrees or less, but that in this simulation the angle stands for flow rate, not the underlying topography of the planet. Also note that, as in most simulations, you can't replicate all aspects of the original condition you're trying to understand: for example, results obtained by using sand do not perfectly model rivers running through soil or over rock. But varying the angle does simulate flow rate, one key variable scientists think important for Mars.

Pour 1 quart of water into the 1/4 inch funnel and allow the water to run down the tray through the groove as the teams watch. Have students describe and sketch the flow pattern which results, carefully noting such things as the shape of the flow pattern, including:
- whether the channel cut by the water was straight or curved
- how wide the channel became
- how deep the channel became
- how long it took for the jug to empty
- whether a small or large amount of sand was carried down stream
- whether or not avalanching occurred
- whether or not a delta was formed

3. Assign each team a slant angle (from 5 to 25 degrees) and allow time for basic set up. For the first set of trials, each team should use the plastic funnel with the 1/4 in. opening. Teams should complete Trial # 1 and record results on the Student Worksheet.

4. Before continuing, allow time for teams to contrast and compare results from the stream tables set at different angles. Discuss.

5. Smooth the damp sand back to a uniform layer.
Then repeat the same experiment at the same tray angle, but this time using the funnel with the 1/2 inch opening. Repeat Steps 3-4. 6. Again, smooth the sand. Repeat the experiments, but this time tell students to place the stones and the small bottle lids in the tray in such a position that the stream of water will encounter them, working them into the sand and adding a thin layer on top. (This simulates what happens when flowing water meets the elevated rim of an impact crater.) Have students carefully observe and record the appearance of the patterns in the vicinity of the bottle caps and stones at the end of the experiments. 7. Challenge students to answer the following questions: At what slope angles (flow rates) do meanders and deltas occur? At which slope angles (flow rates) does the sand wash out completely? How does the slope angle (flow rate) affect the amount of sediment deposited down stream? What happens to the sand immediately after the water starts flowing? What happens to the sand after the water has flowed for awhile? What effect does the volume of water that flows per second have on all of the above? 8. As a last activity, simulate a large scale catastrophic flood by filling the gallon jug with water and carefully creating a uniform "waterfall" along the top of the stream table. Have students try with and without the stones and bottle lids in the flow. Again record and discuss results. 9. Finally, refer to Viking images of Mars. Ask students to look carefully at each one and challenge them to compare examples of the different types of patterns they created in their stream table experiments with what they see in the actual images of Mars. Ask them to draw conclusions about the presence of water on Mars in the past and to draw general conclusions about the differing amount and rate of flow of water in the various areas on Mars seen in the images. Ask them to search for signs of liquid water on Mars in the Viking images (i.e., on Mars today). Challenge them to hypothesize where they think all the water went. Research the various theories as to how water was released onto the Martian landscape at various times in the past and where scientists think it is today. Have students examine a map showing the geological surface features over the entire surface of Mars. Have them mark the location of outflow channels. Have them do the same with the location of valley networks. Ask them to describe the differences in their geographical distribution and challenge them to explain the reasons for this. Provide students with the prime landing site for Pathfinder as well as the coordinates of the Viking 1 and 2 landing sites. Ask students to describe these locations relative to the location of outflow channels and valley networks. Challenge them to hypothesize why scientists chose these particular locations to put spacecraft down on the surface of Mars. Research meandering streams. What is an oxbow lake and how is it formed? Why does a river bed change over time? Compare and contrast each terrestrial feature to landforms on Mars. Go on-line and download Mars images. Create a visual display illustrating the various landforms on Mars. If you or your students have documented the flow table experiments, prepare poster displays relating flow rate to surface feature (and submit to Passport to Knowledge on-line or in hard copy!) Read about Giovanni Schiaparelli. Compose a letter he might have written (or e-mailed) to NASA regarding his concerns about the veracity of new data coming from Mars. 
Write a news article about the stream bed simulations and report on your data. Noting the scale of the map, have students measure and calculate the area of some prominent Martian outflow channels. Compare these areas to related places on Earth such as the Nile River Valley, the channeled Scablands region of Washington State or an area of their home state. Research the Scablands region of Washington State. Note: this Activity and Activity 2.2 are adapted in part from materials and concepts developed during workshops held by JPL's Mars Exploration Directorate as part of its Education and Outreach Initiative (Meredith Olson, Project Educator.) Related Activities may be found in the series of Student and Teacher Publications created by JPL: to order, contact TERC at 617-547-0430. The first two JPL-TERC modules and a set of Mars and Earth images are part of the LFM Teacher's Kit. LFM thanks Dr. Olson for her review of the adaptations of the original activities.
http://quest.nasa.gov/mars/teachers/tg/program1/Act1.3.html
13
35
It was developed by Charles Spearman in the early 1900s, and as such this test is also called Spearman's rank correlation coefficient. In statistical analysis, situations arise when the data are not available in numerical form for correlation analysis, but the information is sufficient to rank the data as first, second, third and so on. In these situations we quite often use the rank correlation method and work out the coefficient of rank correlation. These latest developments are all covered in the Statistic Homework help, Assignment help at transtutors.com.

The rank correlation coefficient is in fact a measure of association which is based on the ranks of the observations and not on the numerical values of the data. To calculate the rank correlation coefficient, the actual observations are first replaced by their ranks: the highest value is given rank 1, the next highest rank 2, and, following this order, ranks are assigned to all the values. If two or more values are equal, the average of the ranks that would have been assigned had all of them been different is calculated, and that same rank (equal to the calculated average) is given to each of the tied values. The next step is to record the difference between the ranks for each pair of observations, square these differences, and total the squared differences. Finally the rank correlation coefficient (rho) is worked out as

rho = 1 - (6 * Σd²) / (n³ - n)

Here n denotes the number of paired observations. The value of Spearman's rank correlation coefficient will always lie between -1 and +1. Spearman's rank correlation is also known as "grade correlation". Basically it is a non-parametric measure of statistical dependence between two variables. This test assesses how well the relationship between two variables can be described using a monotonic function. All such methods are covered in the Statistic Homework help, Assignment help at transtutors.com.

Steps involved in Spearman's rank correlation test:
- State the null hypothesis: "There is no relationship between the two sets of data."
- Rank both sets of data from highest to lowest and check for tied ranks.
- Subtract the two sets of ranks to get the difference d, and square the values of d.
- Add the squared values of d to get Σd².
- Use the formula rho = 1 - (6 * Σd²) / (n³ - n), where n is the number of ranks.

If the rho value is -1, there is a perfect negative correlation; if it falls between -1 and -0.5, there is a strong negative correlation; if it falls between -0.5 and 0, there is a weak negative correlation; if it is 0, there is no correlation; if it falls between 0 and 0.5, there is a weak positive correlation; if it falls between 0.5 and 1, there is a strong positive correlation; and if it is 1, there is a perfect positive correlation between the two data sets. The null hypothesis is accepted if the rho value is 0; otherwise it is rejected.

Whenever the objective is to know if two variables are related to each other, the correlation technique is used. Our email-based homework help support provides the best and most intelligent insight and recreation, which helps make the subject practical and pertinent for any assignment help. Transtutors.com presents timely homework help at logical charges with detailed answers to your Statistic questions so that you get to understand your assignments or homework better apart from having the answers.
Our tutors are remarkably qualified and have years of experience providing Spearman Rank Correlation Test homework help or assignment help.
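As a purely illustrative companion to the steps above (not part of the original page), here is a short Python sketch that ranks two made-up data sets, applies the formula, and prints rho; where SciPy is available, scipy.stats.spearmanr gives the same statistic directly. Note that the simple formula is exact only when there are no tied ranks, which is true of the sample data below.

def ranks(values):
    # Rank from highest (rank 1) to lowest, averaging the ranks of ties.
    ordered = sorted(values, reverse=True)
    return [sum(i + 1 for i, v in enumerate(ordered) if v == x) / ordered.count(x)
            for x in values]

def spearman_rho(xs, ys):
    n = len(xs)
    d_squared = sum((rx - ry) ** 2 for rx, ry in zip(ranks(xs), ranks(ys)))
    return 1 - (6 * d_squared) / (n ** 3 - n)

# Made-up example data (no ties): hours studied versus marks scored.
hours = [2, 4, 6, 8, 10, 12]
marks = [40, 55, 60, 70, 65, 90]

print(round(spearman_rho(hours, marks), 3))   # 0.943, a strong positive correlation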
http://www.transtutors.com/homework-help/statistics/nonparametric-tests/spearman-rank-correlation/
13
10
To continue students’ understanding of data collection and statistics that they began in sixth grade, Data Distributions focuses on the following aspects of data investigation:
- Questioning and surveys: formulating key questions and deciding what data to collect
- Data collection: deciding how to collect data and collecting it
- Analyzing data: organizing, representing, summarizing and looking for patterns
- Interpreting results: predicting, comparing and identifying relationships

For an in-depth explanation of goals, specific questions to ask your students and examples of core concepts from the unit, go to Data Distributions.

Online resources for Data Distributions*

Other online resources

Note: These resources require Java in order to run.
http://www.lwsd.org/Parents/Teaching-Curriculum/Math-Resources/Seventh-Grade-Math/Pages/7th-Grade-Data-Distributions.aspx
13